name | title | abstract | fulltext | keywords
---|---|---|---|---
611453 | The Chebyshev iteration revisited. | Compared to Krylov space methods based on orthogonal or oblique projection, the Chebyshev iteration does not require inner products and is therefore particularly suited for massively parallel computers with high communication cost. Here, six different algorithms that implement this method are presented and compared with respect to roundoff effects, in particular, the ultimately achievable accuracy. Two of these algorithms replace the three-term recurrences by more accurate coupled two-term recurrences and seem to be new. It is also shown that, for real data, the classical three-term Chebyshev iteration is never seriously affected by roundoff, in contrast to the corresponding version of the conjugate gradient method. Even for complex data, strong roundoff effects are seen to be limited to very special situations where convergence is anyway slow. The Chebyshev iteration is applicable to symmetric definite linear systems and to nonsymmetric matrices whose eigenvalues are known to be confined to an elliptic domain that does not include the origin. Also considered is a corresponding stationary 2-step method, which has the same asymptotic convergence behavior and is additionally suitable for mildly nonlinear problems. | 1 Introduction
The Chebyshev iteration [2-4] has been one of the favorite Krylov space methods for solving a large sparse linear system of equations in a parallel environment, since, unlike methods based on orthogonalization (such as the conjugate gradient (CG) and biconjugate gradient (BiCG) methods and GMRes, to name a few), it does not require the computation of communication-intensive inner products for the determination of the recurrence coefficients. Only the monitoring of the convergence, that is, the determination of the norm of the residuals, requires inner products, and even this norm needs to be evaluated only occasionally because its time-dependence, that is, the convergence rate, can be forecast reliably.
The Chebyshev iteration, which in the older literature has often been referred to as the Chebyshev semiiterative method, requires some preliminary knowledge about the spectrum σ(A) of the coefficient matrix A: an elliptic domain E ⊇ σ(A) with 0 ∉ E is normally assumed to be known in advance. Denote the center of the ellipse by μ, its foci by μ ± c, and the lengths of the large and the small semi-axes by a and b. When the elliptic domain turns into a straight line segment (an interval), we write I := [μ − c, μ + c]. At this point, both μ and c may be complex. Manteuffel [1] devised a technique to determine a suitable ellipse from a given nonsymmetric matrix.
Mathematically the method can be defined by translating the Chebyshev polynomials T_n from the interval [−1, 1] to the interval I and scaling them so that their value at 0 is 1. On R the Chebyshev polynomials are defined by three formulas, one for each of the ranges |τ| ≤ 1, τ ≥ 1, and τ ≤ −1.³ T_n is even or odd if n is even or odd, respectively. All three formulas remain valid when we extend the definition to the complex plane, which we will indicate by using the variable ζ. For example, we may define T_n as indicated below.
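One standard set of such formulas is the following; arcosh denotes the inverse hyperbolic cosine, and for complex arguments the principal branch is understood:

    T_n(\tau) :\equiv \begin{cases}
        \cos(n \arccos \tau),                         & -1 \le \tau \le 1, \\
        \cosh(n \operatorname{arcosh} \tau),          & \tau \ge 1,        \\
        (-1)^n \cosh(n \operatorname{arcosh}(-\tau)), & \tau \le -1,
    \end{cases}
    \qquad
    T_n(\zeta) :\equiv \cosh(n \operatorname{arcosh} \zeta) \quad (\zeta \in \mathbb{C}).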
The translated and scaled residual polynomials p_n that characterize the Chebyshev iteration are

    p_n(ζ) :≡ T_n((μ − ζ)/c) / T_n(μ/c).    (1)

³ Definitions are marked by the symbol :≡, while := is used for algorithmic assignments; often either one of the symbols could be used.
If we let x_0 be an initially chosen approximation of the solution of the linear system Ax = b that has to be solved, and if r_0 :≡ b − Ax_0 denotes the corresponding residual, then, by definition, the nth approximation x_n and the nth residual r_n satisfy r_n = p_n(A) r_0 = b − A x_n.
The classical case for applying the method is when A is symmetric positive definite (spd), as assumed in CG, and, therefore, the interval I lies on the positive real axis and contains the spectrum of A. In this case the Chebyshev iteration is known to be optimal in the sense that it yields, for every n, the smallest nth maximum residual if the maximum is taken over all normal matrices with spectrum on I; see [2-4].
Due to a wrong claim in [5] it has often been assumed that this optimality also holds for the class of matrices with spectrum inside or on an ellipse whose foci lie on the positive real axis, but Fischer and Freund [6,7] have shown that this is not true in general; the exceptional cases are rather ill-conditioned, however. In any case, for any elliptic compact set not containing 0 the correspondingly chosen Chebyshev iteration is asymptotically optimal, as its recurrence coefficients approach those of a second order Richardson iteration, which is a stationary 2-step method based on conformal mapping [8-11] and can be viewed as the limit of the Chebyshev iteration; see our discussion in Section 6.
Of course, in practice we need an algorithm that generates the approximations recursively. The usual approach is to derive a three-term recurrence from the standard recursion for the Chebyshev polynomials. However, as has recently been shown by Gutknecht and Strakos [12], Krylov space methods based on three-term recursions for iterates and residuals may suffer from a large gap between recursively computed residuals r_n and true residuals b − Ax_n, and, therefore, may stagnate early with relatively large true residuals. In other words, the ultimately achievable accuracy may be quite low. In particular, this effect may even occur when CG is applied to an spd problem.
We will show here that the Chebyshev iteration, even in this implementation, is not seriously affected by roundoff. Moreover, we will discuss five other implementations that produce even more accurate solutions, that is, stagnate ultimately with smaller true residuals. We also point out that the aforementioned stationary second order Richardson iteration can as well be realized by six analogous different algorithms.

We note that similar analytical techniques have been applied by Golub and Overton [13] for the analysis of the behavior of the Chebyshev iteration when a preconditioner is applied inexactly.
2 The Chebyshev iteration with three-term recursion
Recursions for the residuals r_n and the iterates x_n of the Chebyshev iteration are easily found from the standard three-term recursion for the classical Chebyshev polynomials T_n,

    T_{n+1}(ζ) = 2ζ T_n(ζ) − T_{n−1}(ζ)  (n ≥ 1),    T_1(ζ) = ζ,    T_0(ζ) ≡ 1.

The following first realization of the method results.
Algorithm 1 (Three-term Chebyshev iteration). For solving Ax = b choose an initial approximation x_0 and let r_0 := b − Ax_0. Also set r_{−1} := x_{−1} := o. Choose the parameters μ and c so that the spectrum of A lies on the straight line segment I := [μ − c, μ + c] or on an elliptic domain E with foci μ ± c that does not contain 0. Then compute, for n = 0, 1, . . . , the iterates x_{n+1} and the residuals r_{n+1} from the three-term recurrences (6)-(7), whose coefficients are quotients of values of the Chebyshev polynomials at μ/c (see (4)-(5)).
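The following sketch (Python/NumPy) shows one way such a three-term iteration can be realized. The coefficients are written in terms of the ratios t_n = T_{n−1}(μ/c)/T_n(μ/c); this notation is ours and need not coincide with the symbols used in (4)-(7), so the code is an illustration of the principle rather than a transcription of the algorithm as printed.

    import numpy as np

    def chebyshev_3term(A, b, x0, mu, c, max_iter=200, tol=1e-12):
        # Three-term Chebyshev iteration (illustrative sketch). mu is the center
        # and mu +/- c the foci of an ellipse (or interval) assumed to contain
        # the spectrum of A and to exclude the origin.
        x_old = x0.copy()
        r_old = b - A @ x_old
        # First step: p_1(z) = 1 - z/mu, hence x_1 = x_0 + r_0/mu.
        x = x_old + r_old / mu
        r = r_old - (A @ r_old) / mu
        t = c / mu                            # t_1 = T_0(mu/c) / T_1(mu/c)
        for n in range(1, max_iter):
            if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                break
            t_new = 1.0 / (2.0 * mu / c - t)  # t_{n+1}, from the T_n recursion
            alpha = 2.0 * t_new / c           # = 2 T_n(mu/c) / (c T_{n+1}(mu/c))
            beta = t * t_new                  # = T_{n-1}(mu/c) / T_{n+1}(mu/c)
            x_new = alpha * (mu * x + r) - beta * x_old      # note alpha*mu - beta = 1
            r_new = alpha * (mu * r - A @ r) - beta * r_old  # updated (recursive) residual
            x_old, r_old, x, r, t = x, r, x_new, r_new, t_new
        return x, r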
We cannot expect that a solver ultimately produces a much smaller residual than what we get when we insert (the machine approximation of) the exact solution x_⋆ into the definition of the residual: ‖fl(b − A x_⋆)‖. However, due to the accumulation of rounding errors the achievable accuracy might be much lower. Actually, the ultimate accuracy of Algorithm 1 (and many others) is determined by the size of the gap f_n between the updated (or, recursively computed) residual r_n and the true (or, explicitly computed) residual b − A x_n:

    f_n :≡ (b − A x_n) − r_n.

Here x_n and r_n denote the vectors computed in floating-point arithmetic from (6) and (7). In fact, if A satisfies the spectral assumption, then, normally, r_n → o even in floating-point arithmetic. Thus, ‖b − A x_n‖ ≈ ‖f_n‖ for large n.
A general result on this gap for methods updating residuals by three-term recurrences was given in [12].

Theorem 1 ([12]) Assume iterates and residuals are updated according to three-term recurrences of the form (8)-(9) for x_{n+1} and r_{n+1}, with recurrence coefficients β_{n−1} and γ_n satisfying the usual consistency condition. Then the gap f_n :≡ (b − A x_n) − r_n satisfies, up to terms of order O(ε²) (ε denotes the machine epsilon), relation (10): f_n is a combination of the local errors l_0, . . . , l_n in which each l_k is multiplied by products of quotients of the recurrence coefficients; the local error l_n, given by (11), is the error whose components come from evaluating (8) and (9) in floating-point arithmetic.

In (10) and (11) the quantities x_k, r_k, γ_k, β_{k−1}, and l_k are those computed in floating-point arithmetic. If we assume that each row of A contains at most m nonzero elements and that matrix-vector products with A are computed in the standard way, then the local error satisfies, componentwise, a bound of order ε involving |A|, |x_n|, |r_n|, |b|, and m.
But more important than the size of the local errors is the size of the potentially large factors β_{k−1}/γ_k and of their products in (10). In Algorithm 1, the factors and their products are (in exact arithmetic) quotients of values of the Chebyshev polynomials at μ/c; see (12a)-(12c) (0 ≤ k < n).

Strictly speaking, we should consider here the values of β_{k−1} and γ_k that are obtained in floating-point arithmetic. Then the three relations (12a)-(12c) are only correct up to a roundoff error of order O(ε). However, because we are free to compute the recurrence coefficients at little extra cost in multiple-precision arithmetic, and since we are only concerned about quotients that are very large, it seems well justified to neglect these errors here. Otherwise we would have to analyze the roundoff errors in the recursions (4) and (5), or in any other formulas used to calculate β_{n−1} and γ_n.
If μ and c are real with 0 < c < μ (as in the case when A is spd), so that μ/c > 1 and T_k(μ/c) = cosh(k arcosh(μ/c)), we have, since cosh is monotone increasing on the positive real axis,

    T_k(μ/c) < T_{n+1}(μ/c)    (0 ≤ k ≤ n),    (13)

and therefore all the factors in (10) are less than 1 in absolute value if the recurrence coefficients are computed accurately enough. Since l_k appears in n − k of the terms of (10), it may still get amplified by a factor smaller than n − k, but this is not too serious, in particular since typically most of the factors are rather small.
Of course, (13) does not hold in general unless μ/c < −1 or μ/c > 1: for example, for purely imaginary μ/c of sufficiently small absolute value we have |T_{2k}(μ/c)| > |T_{2n+1}(μ/c)| for all sufficiently small k, n ∈ N, since |T_{2k}(0)| ≠ 0 but |T_{2n+1}(0)| = 0. However, in (12b) the index difference between the numerator and denominator polynomials is 2, and hence this argument is not applicable. We show next that also in the other case relevant for real-valued problems, namely when μ ∈ R but c is purely imaginary (so that the ellipse is still centered on and symmetric about the real axis), the quotients (12a)-(12c) are all of absolute value smaller than 1.
For any ζ ∈ C \ [−1, 1], we let ϑ be the larger solution of the quadratic equation

    ϑ² − 2ζϑ + 1 = 0.    (14)

Note that the solutions come in pairs ϑ, ϑ^{−1}, and that |ϑ| = 1 would imply ζ ∈ [−1, 1], which is excluded by assumption. Therefore, we may assume that |ϑ| > 1 here. The mapping ϑ ↦ ζ = (ϑ + ϑ^{−1})/2 of (14) is the well-known Joukowski transformation, which allows us to express the Chebyshev polynomials simply as

    T_n(ζ) = (ϑ^n + ϑ^{−n})/2.    (15)

In fact, if we write ϑ = e^{φ}, so that ζ = cosh φ and T_n(ζ) = cosh(nφ), then (15) follows at once; the relation can be seen to be valid for any ζ ∈ C. Consequently, the single factors from (12b) can be written in terms of ϑ as in (16). Obviously, these factors are rational functions of both ζ and ϑ.
It is well known that T_{n+1} has n + 1 simple zeros in (−1, 1). Hence the quotient in (12b) (after cancellation of the pole and zero at 0 if n is even) has at most n + 1 poles, and they all lie in (−1, 1). If considered as a function of ϑ, the quotient has at most 2(n + 1) poles, and they all lie on the unit circle. Clearly, if we choose ζ close enough to a pole, but not on [−1, 1], the quotient can be made as large as we want. Consequently, as claimed, the factors are in general not all of absolute value less than 1. So, amplification of a local error is possible.
If 0 < c < μ, we have seen already that (13) holds, and, by symmetry, the same is true for 0 > c > μ. If μ is still real, say μ > 0, but c is purely imaginary, then, since the Joukowski transformation maps the part of the imaginary axis above i onto the positive imaginary axis, we have ϑ = iη with η > 1. Then, from (16) and by setting η̃ := (η + η^{−1})/2, so that η̃ > 1 (the Joukowski transformation maps [1, ∞) onto itself), we obtain an expression (17) for the factors in terms of Chebyshev polynomials of the second kind. Here, U_n is the nth Chebyshev polynomial of the second kind, which for arguments of the form cosh φ can be expressed as U_n(cosh φ) = sinh((n + 1)φ)/sinh φ. Noting that sinh is monotone increasing, we can conclude that the values U_{n+1} appearing in the denominators of (17) are larger than 1; as we have seen before, the values T_{n+1} appearing there are larger than 1 as well. Therefore, also in this situation, the factors in (12a)-(12c) have an absolute value smaller than 1. Summarizing, we have proved the following result.
Theorem 2 For an interval [μ − c, μ + c] or an ellipse with foci μ ± c that is symmetric about the real axis and does not contain the origin, the factors (12a)-(12c), which appear in (10), are of absolute value less than 1 if the recurrence coefficients β_{k−1} and γ_k have been computed with sufficient accuracy.
In Section 8 we will come back to the question of the size of the factors (12a)-(12c) in the case where the assumptions of this theorem do not hold, that is, when the linear system to be solved is complex and does not have a spectrum symmetric about the real axis.
Finally we note that a simple way to avoid the residual gap in the Chebyshev iteration is to replace the recursively computed residuals by explicitly computed residuals:

Algorithm 2 (Three-term recursion, explicitly computed residuals). Same as Algorithm 1 except that the recursion (7) for computing r_{n+1} is replaced by

    r_{n+1} := b − A x_{n+1}.    (18)
This remedy could be applied in many Krylov solvers in order to increase the ultimate accuracy. However, explicitly computed residuals are known to slow down the convergence of projection methods like CG and BiCG due to stronger roundoff effects in the Krylov space generation process [14], as they destroy the local (bi)orthogonality of the bases created. But here, unlike in these methods, the recurrence coefficients β_{n−1} and γ_n do not depend on r_n, and therefore the error of r_n will have only little influence on x_m (m > n) and on the convergence of the method.
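In the sketch given after Algorithm 1, this modification amounts to replacing the recursive residual update by an explicit one; the cost stays at one matrix-vector product per step, in line with the remark above that explicit residuals are not more expensive here.

    # Algorithm 2 variant of the earlier sketch: explicitly computed residual
    r_new = b - A @ x_new     # instead of the three-term recursion for r_new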
3 Rutishauser's Chebyshev iteration by updating corrections
The recursions (6) and (7) are of the form (8)-(9) with the consistency condition on the coefficients, which implies that r_n = b − A x_n in exact arithmetic. Subtracting x_n and r_n, respectively, on both sides of (8) and (9), using the consistency condition, and setting

    Δx_n :≡ x_{n+1} − x_n,    Δr_n :≡ r_{n+1} − r_n,

yields the correction recurrences (19)-(20). This leads to the following reformulation of Algorithm 1.
Algorithm 3 (Chebyshev iteration by updating Δx_n and Δr_n). Same as Algorithm 1 except that the recursions (6) and (7) for computing x_{n+1} and r_{n+1} are replaced by (19)-(20) together with the updates x_{n+1} := x_n + Δx_n and r_{n+1} := r_n + Δr_n (21).

This is how Rutishauser [4] formulated the Chebyshev iteration and other Krylov space solvers (which he called "gradient methods"). It is easy to also modify this scheme so that the residuals are computed explicitly:
Algorithm 4 (Updating Δx_n and explicitly computing r_n). Same as Algorithm 1 except that the recursions (6) and (7) for computing x_{n+1} and r_{n+1} are replaced by (19), (21), and (18), that is, Δx_n is computed recursively, x_{n+1} := x_n + Δx_n, and r_{n+1} := b − A x_{n+1}.
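A sketch of this correction-based variant, again in the notation of the earlier sketch (the corrections are advanced by recursions derived from the same coefficient ratios; these are not necessarily the paper's recursions (19)-(21) verbatim):

    import numpy as np

    def chebyshev_rutishauser(A, b, x0, mu, c, max_iter=200, tol=1e-12):
        # Chebyshev iteration that updates the corrections dx_n = x_{n+1} - x_n
        # and dr_n = r_{n+1} - r_n (Rutishauser-style reformulation; sketch only).
        x = x0.copy()
        r = b - A @ x
        dx = r / mu                    # first correction: x_1 = x_0 + r_0/mu
        dr = -(A @ dx)
        t = c / mu                     # T_0(mu/c) / T_1(mu/c)
        for n in range(max_iter):
            x = x + dx
            r = r + dr                 # Algorithm 4 would instead set r = b - A @ x
            if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                break
            t_new = 1.0 / (2.0 * mu / c - t)
            alpha = 2.0 * t_new / c
            beta = t * t_new
            dx = alpha * r + beta * dx           # correction recursion for the iterates
            dr = -alpha * (A @ r) + beta * dr    # equivalently dr = -A @ dx
            t = t_new
        return x, r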
4 Algorithms based on coupled two-term recurrences
For Krylov space solvers based on two-term updates for x_n and r_n that additionally involve direction vectors v_n [15,16],

    x_{n+1} := x_n + v_n ω_n,    r_{n+1} := r_n − A v_n ω_n,    (24)-(25)

the gap between updated and true residuals is known to be often much smaller than for those that update the residuals with three-term recurrences of the form (8)-(9) or even longer ones. It does not matter whether the recursion for the direction vectors v_n is long or just two-term as in

    v_n := r_n − v_{n−1} ψ_{n−1},    (26)

because the same possibly inaccurate v_n is used in (24) and (25). Examples of algorithms of the form (24)-(25) with (26) are the standard Hestenes-Stiefel or OMin version of CG and the standard BiOMin version of BiCG.
The above claim about the higher ultimate accuracy of algorithms with two-term updates (24)-(25) is based on a comparison between Theorem 1 and the following result of Greenbaum [17], which improves on previous similar statements in [19] and [18]. It explains why the gap between updated and true residuals is relatively small: here, the gap is just a sum of local errors; these are not multiplied by any potentially large factors.

Theorem 3 ([17]) Assume iterates and residuals are updated according to (24)-(25). Then the gap f_n between the true and the updated residual is, up to higher-order terms, the sum of the local errors l_0, . . . , l_n, where l_n is the local error whose components come from evaluating (24) and (25) in floating-point arithmetic. In particular, ‖f_n‖ admits a bound of order ε (where ε denotes the machine epsilon) that involves m, the maximum number of nonzeros in a row of A, the order N of A, and the quantities ‖A‖ and ‖x_k‖ for k ≤ n.
5 Chebyshev iteration based on coupled two-term recurrences
Theorem 3 suggests searching for a coupled two-term recursion as an alternative realization of the Chebyshev method. Recursions (24)-(25) call for the following "Ansatz" in a polynomial formulation: we write the residual polynomials in the form (27), with the polynomials corresponding to the direction vectors satisfying (28). To determine ω_n and ψ_{n−1} we insert (27) into (28), make use of (28) with n replaced by n − 1, and then compare the result with the polynomial reformulation of (7) for n ≥ 1. We obtain relations (29)-(30) expressing the three-term coefficients in terms of ω_n and ψ_{n−1}, and, conversely, relation (31) expressing ω_n and ψ_{n−1} in terms of β_{n−1} and γ_n.
Like β_{n−1} and γ_n, we can express ω_n and ψ_{n−1} in terms of the Chebyshev polynomials and derive recursions for them. First, inserting the left-hand side equations from (4) and (5) into (31), we obtain the explicit expressions (32). Then, inserting the right-hand side equations from (4) and (5), we get the recursions (33) (if n ≥ 2) and (34). Summarizing, we obtain the following coupled two-term Chebyshev iteration [20].
Algorithm 5 (Coupled two-term Chebyshev iteration). For solving Ax = b choose an initial approximation x_0 and let r_0 := b − Ax_0. Choose the parameters μ and c so that the spectrum of A lies on the straight line segment I := [μ − c, μ + c] or on an elliptic domain E with foci μ ± c that does not contain 0. Then initialize the coefficients according to (35) and compute, for n = 0, 1, . . . , the coefficients ψ_{n−1} and ω_n, the direction vectors v_n, and the iterates x_{n+1} and residuals r_{n+1} according to (36)-(39) (where (36) applies for n ≥ 2).
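A sketch of a coupled two-term realization, written in the classical form with a direction vector v_n (essentially the textbook formulation of the Chebyshev iteration with coupled recurrences, e.g. the Chebyshev acceleration algorithm in Saad's Iterative Methods for Sparse Linear Systems); the coefficient symbols below are not necessarily the ψ_{n−1} and ω_n of (33)-(39):

    import numpy as np

    def chebyshev_2term(A, b, x0, mu, c, max_iter=200, tol=1e-12):
        # Coupled two-term Chebyshev iteration (sketch): iterate and residual are
        # both updated from the same direction vector v, cf. (24)-(25).
        x = x0.copy()
        r = b - A @ x
        sigma = mu / c
        rho = 1.0 / sigma              # ratio of consecutive Chebyshev values
        v = r / mu                     # first direction vector
        for n in range(max_iter):
            x = x + v
            r = r - A @ v              # Algorithm 6 would instead set r = b - A @ x
            if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                break
            rho_new = 1.0 / (2.0 * sigma - rho)
            v = rho_new * rho * v + (2.0 * rho_new / c) * r
            rho = rho_new
        return x, r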
Also in Algorithm 5 we can avoid the residual gap by replacing the recursively computed residuals by explicitly computed residuals.

Algorithm 6 (Two-term recursions and explicitly computed residuals). Same as Algorithm 5 except that the recursion (39) for computing r_{n+1} is replaced by r_{n+1} := b − A x_{n+1}.
6 The second order Richardson iteration as limiting case
For any ζ ∈ C \ [−1, 1] we have, according to (15) and in terms of ϑ defined by (14) with |ϑ| > 1,

    T_{n−1}(ζ)/T_n(ζ) = (ϑ^{n−1} + ϑ^{1−n})/(ϑ^n + ϑ^{−n}) → ϑ^{−1}

as n → ∞. We can conclude that for any admissible value of μ/c the coefficients of both the three-term and the two-term Chebyshev iterations converge to limit values as n → ∞; see (41). (The dependence on the center μ of the ellipse or interval is hidden in ϑ.)
This gives rise to six additional related algorithms that are analogous to Algorithms 1-6 but use the limit values of the coefficients. For example, for the iterates there hold the three-term recurrences (43) and the coupled two-term recurrences (44). These additional six algorithms are different implementations of the second-order Euler method that can be associated with the ellipse E. This method belongs to the class of iterative methods based on conformal mappings, introduced by Kublanovskaya in 1959; see [8-11]. It is, at least in the case where the ellipse E collapses to an interval I, better known as the stationary second-order Richardson iteration; see [3]. It can easily be generalized for mildly non-linear systems of equations, and for those it seems more suitable than the nonlinear Chebyshev iteration; see [21,22]. Note that, by (46), the relevant limit factor is of absolute value at most 1. Therefore, for the three-term version of the second-order Richardson iteration, all the multiplicative factors in (10) of Theorem 1 are actually smaller than 1 in absolute value if the coefficients are computed with sufficient accuracy.
The conformal map f associated with the recursion (43) is given by (47).⁴ In view of (46), f maps a neighborhood of the unit disk one-to-one onto the exterior of an ellipse with the foci μ ± c. In particular, the disk D̂ around 0 with radius ρ̂ is mapped onto the exterior of the interval or line segment [μ − c, μ + c]. For suitable ρ, the disk D_ρ is mapped onto the exterior of a confocal ellipse, and if all the eigenvalues of A lie in this ellipse, the iteration converges asymptotically at least with the rate 1/ρ. If all the eigenvalues lie on [μ − c, μ + c], the asymptotic rate is 1/ρ̂. These asymptotic rates are the same for the Chebyshev iteration.

⁴ In [9-11,21,22] a mapping related to f by a simple transformation is used instead.
7 Numerical results
We consider first real matrices of order 500 whose eigenvalues are randomly chosen as complex conjugate pairs in an ellipse with foci μ ± c and longer semi-axis a. These matrices have been constructed by unitarily transforming a block-diagonal matrix (with 2 × 2 blocks) with these randomly chosen eigenvalues. Note that these matrices are not very ill-conditioned as long as the ellipse does not come very close to the origin: they are normal and their condition number is bounded by the quotient of the distances from the origin of the farthest point and the closest point. However, if we considered very ill-conditioned matrices instead, the rate of convergence would be very slow.
We report the number n_12 of iterations needed to reduce the residual norm by a factor of 10^12 and the ultimate relative accuracy at which the residual norm stagnates. Table 1 summarizes the results for four such matrices for the three-term, two-term, and Rutishauser versions of the Chebyshev iteration using recursively computed residuals. Table 2 contains the corresponding results if explicitly computed residuals are used instead. We see that in these examples the number of iterations needed to reach relative accuracy 10^{-12} is not affected by the choice of the version. The ultimate accuracy is worst for the three-term version with updated residuals, and by replacing them by explicitly computed residuals we gain nearly up to two orders of magnitude. In other words, for the three-term version with updated residuals the loss of accuracy is notable, but not really serious. This reflects what we can expect from Theorem 2. For all the other versions, the ultimate accuracy is higher than 10^{-14}.
Table 1
Comparison of the three-term, two-term, and Rutishauser versions of the Chebyshev iteration using recursively computed residuals. Normal matrices with eigenvalues in the ellipse with foci μ ± c and semi-axis a.

  matrix (μ, c, a)  | 3-term: ult.acc. / n_12 | 2-term: ult.acc. / n_12 | Rutishauser: ult.acc. / n_12
  100, 50, 90       | 1.6e-14 / 195           | 1.6e-15 / 195           | 2.1e-15 / 195
  100, 70, 90       | 5.9e-15 / 159           | 1.7e-15 / 159           | 2.3e-15 / 159
  100, 90, 99       | 1.1e-13 / 1040          | 3.1e-15 / 1040          | 5.7e-15 / 1040
Table 2
Comparison of the three-term, two-term, and Rutishauser versions of the Chebyshev iteration using explicitly computed residuals. Normal matrices with eigenvalues in the ellipse with foci μ ± c and semi-axis a.

  matrix (μ, c, a)  | 3-term: ult.acc. / n_12 | 2-term: ult.acc. / n_12 | Rutishauser: ult.acc. / n_12
  100, 50, 90       | 9.2e-16 / 195           | 1.0e-15 / 195           | 9.1e-16 / 195
  100, 70, 90       | 9.1e-16 / 159           | 9.5e-16 / 159           | 9.3e-16 / 159
  100, 90, 99       | 1.8e-15 / 1040          | 1.9e-15 / 1040          | 1.7e-15 / 1040
In Figures 1-3 we show, for the first example (μ = 100, c = 50, a = 90), the residual histories for the two three-term versions, the two two-term versions, and the two Rutishauser versions, respectively. For the algorithms with residual recursions, both the true residuals and the recursively updated residuals are plotted. Needless to say, for the algorithms using explicitly computed residuals there is no difference between those and the true residuals, and thus only one curve is shown.
Fig. 1. Chebyshev iteration with three-term recursions: normalized residual norms versus iteration number (recursively updated and true residuals of the version with the residual recursion, and the residual of the version with explicitly computed residuals).
Fig. 2. Chebyshev iteration with coupled two-term recursions: normalized residual norms versus iteration number (recursively updated and true residuals of the version with the residual recursion, and the residual of the version with explicitly computed residuals).
Fig. 3. Chebyshev iteration with Rutishauser's recursions for updating corrections: normalized residual norms versus iteration number (recursively updated and true residuals of the version with the residual recursion, and the residual of the version with explicitly computed residuals).
8 Discussion of the potential roundoff amplification in the three-term Chebyshev algorithm in the case of complex data
Now we want to try to construct an example with a much stronger degradation of the ultimate accuracy in the case of the three-term version with updated residuals. We know that the influence of the roundoff hinges in this case mainly on the factors (12a)-(12c) in (10). Clearly, the absolute value of the factor (12c) (which reduces to the absolute value of (12b) for the relevant indices) is large if the absolute value of the denominator is very small, that is, if μ/c is close to a zero of T_n or T_{n+1}. These zeros all lie in the interval (−1, 1), while |T_n(ζ)| > 1 if ζ > 1 or ζ < −1. Hence we need a complex μ/c to get a small denominator.

In Figure 4 we display such a factor (with denominators involving T_3 and T_4) as a function of ζ in the domain 0 < Re ζ ≤ 2, 0 ≤ Im ζ ≤ 0.5. The poles of the function at the three positive zeros of T_3 and T_4 are well visible, although the values of the function on the real axis (where the poles are) are not plotted; the zero of T_3 at 0 coincides with the one of T_1. Clearly, we can make the fraction as large as we want by choosing ζ close enough to a pole. Then at least one term in (10) will be large.
Fig. 4. The factor from (12) appearing in (10), as a function of ζ.

However, if μ/c is close to such a pole (and, hence, to a point in the interval (−1, 1)), say to a zero of T_n, then the residual polynomial p_n of (1) is large at some points of the prescribed, necessarily very flat elliptic domain. (Recall that the straight line segment determined by the foci of the ellipse must come very close to the origin, but the ellipse must not contain the origin. If the ellipse were not flat, the quotient μ/c would not be close to a point in the interval (−1, 1) unless the ellipse contained the origin.) Therefore, the residual r_n of a system with a matrix whose spectrum is spread in this ellipse or on the straight line segment will most likely have some eigensystem components that have not been damped or have even been amplified. It is not of importance whether the matrix is diagonalizable or not.
There is the question of what happens with the other quotients in (10). To explore that, we show in Figures 5 and 6 the factors (48) for 0 ≤ k ≤ n − 1 < 100 when the imaginary part of μ/c is 0.05 and 0.001, respectively. In the first case, the plot shows a clear ridge, but there, independently of n, the quotient remains smaller than one. In fact, since the factors approach the limit values of the recurrence coefficients (see (41)), and since the asymptotic convergence rate is bounded by 1 (see (46)), this is what we must expect. Moreover, this asymptotic rate is also the asymptotic convergence factor of both the Chebyshev iteration and the second order Richardson iteration if the eigenvalues of A lie on the straight line segment [μ − c, μ + c]. A rate close to 1 means that, in general (that is, when the eigenvalues of A can be anywhere on the line segment), the iteration will converge very slowly. In Figure 5 this rate is around 0.83. Away from the ridge, the factors (48) quickly decay.
Fig. 5. The factors (48) in (10) for μ/c with imaginary part 0.05, as a function of k and n.

Fig. 6. The factors (48) in (10) for μ/c with imaginary part 0.001, as a function of k and n.

The second plot shows a few very high peaks and a regular pattern of many smaller peaks that are still higher than 1. (Note the new scale of the vertical axis!) But, in view of what we just said, this can only mean that along the ridge the quotients are still far away from their asymptotic value, which is around 0.996 here. So, in an example with a matrix with this kind of spectrum we might notice a serious influence of roundoff propagation on the ultimate accuracy, but the method would converge so slowly that we would rather not want to apply it. In the initial phase the residuals may strongly increase in this situation, because some of the residual polynomials are large on the line segment.
9 Conclusions
We have compared six different implementations of the Chebyshev iteration with respect to convergence speed and ultimate accuracy attained. Several conclusions can be drawn from both theoretical and experimental investigations. The same theoretical conclusions also hold, and the same experimental ones can be expected to hold, for the related stationary method, the second order Richardson iteration.

In our fairly well-conditioned examples, the number of iterations needed to reduce the residual norm by 10^12 did not depend on which of the six versions is applied.

The ultimate accuracy turned out worst for the classical 3-term recursion with recursively computed residuals, as had to be expected from theoretical results.

Explicitly computed residuals yield the higher ultimate accuracy, and, for all three types of iterations, roughly the same.

In contrast to CG, BiCG, and related methods, explicitly computed residuals do not cause a slowdown of convergence. They also do not have higher computational cost. Therefore they should be preferred.
If the (standard) three-term recursion for the residuals is applied nevertheless, the ultimate accuracy is still likely to be quite high, and this for the following reasons:

If the Chebyshev iteration is applied to a matrix with spectrum on an interval [μ − c, μ + c] or an ellipse with foci μ ± c symmetric about the real axis, then, in contrast to CG and BiCG, the loss of ultimate accuracy is never very pronounced, because the multiplicative factors in (10) in front of the local errors in the expression for the residual gap are all of absolute value smaller than one if the recurrence coefficients are computed with sufficient accuracy.

If the Chebyshev iteration is applied to an ellipse whose foci μ ± c do not lie on the real axis, but for which the line segment [μ − c, μ + c] comes close to the origin (which implies that the ellipse must be very flat or must collapse to an interval), then some local errors may be amplified dramatically and might cause a large residual gap, so that the ultimate accuracy deteriorates. However, this can only happen when the Chebyshev iteration converges very slowly.
Acknowledgment. The authors are grateful to Zdenek Strakos for pointing out several misprints and suggesting a number of improvements.
--R
Numerical determination of fundamental modes
successive overrelaxation iterative methods
Theory of gradient methods
Further results on polynomials having least maximum modulus over an ellipse in the complex plane
On the constrained Chebyshev approximation problem on ellipses
Chebyshev polynomials are not always optimal
Computational Methods of Linear Algebra
Iterationsverfahren und allgemeine Euler-Verfahren
The analysis of k-step iterative methods for linear systems from summability theory
A study of semiiterative methods for nonsymmetric systems of linear equations
The convergence of inexact Chebyshev and Richardson iterative methods for solving linear systems
Generalized conjugate-gradient acceleration of nonsymmetrizable iterative methods
Changing the norm in conjugate gradient type algorithms
Estimating the attainable accuracy of recursively computed residual methods
BiCGstab(l) and other hybrid Bi-CG methods
Accuracy of computed solutions from conjugate-gradient-like methods
Stationary and almost stationary iterative (k
--TR
K-step iterative methods for solving nonlinear systems of equations
The convergence of inexact Chebyshev and Richardson iterative methods for solving linear systems
On the constrained Chebyshev approximation problem on ellipses
Chebyshev polynomials are not always optimal
Predicting the behavior of finite precision Lanczos and conjugate gradient computations
Changing the norm in conjugate gradient type algorithms
Estimating the Attainable Accuracy of Recursively Computed Residual Methods
Accuracy of Two Three-term and Three Two-term Recurrences for Krylov Space Solvers | coupled two-term recurrences;chebyshev iteration;second-order Richardson iteration;sparse linear systems;roundoff error analysis |
613633 | Scalable Human-Friendly Resource Names. | Currently, Uniform Resource Locators (URLs) are used to name and access Web-based resources. However, URLs pose a significant scalability problem because they cannot be used to refer to replicated Web pages. The authors propose a new URI scheme called Human-Friendly Names (HFNs) to solve this scalability problem. HFNs are high-level names that are easy-to-use by humans and name Web resources in a location-independent way. This article describes a scalable HFN-to-URL resolution mechanism that is based on URNs and makes use of the Domain Name System (DNS) and the Globe Location Service. | Introduction
Resources in the World Wide Web are named using Uniform Resource Identifiers
(URIs). The most common and well-known type of URI is the Uniform Resource
Locator (URL). A URL is used in the Web for two distinct purposes: to identify
resources and to access resources. Unfortunately, combining these two leads to a
scalability problem since resource identification has different requirements than resource
access. Consider, for example, a popular Web page that we want to replicate
to improve its availability. Currently, replicated Web pages are named by means
of multiple URLs, one for each replica, as shown in Figure 1(a). However, to hide
replication from users, that is, to make replication transparent, we need a name that
only identifies the page. In other words, that name should not refer to a specific
replica, but instead, should refer to the set of replicas as a whole.
(This paper is an updated version of technical report IR-466.)

Uniform Resource Names (URNs) provide a solution to this scalability problem. A URN is also a type of URI, but differs from a URL in that it only identifies a Web resource. A URN does not indicate the location of a resource, nor does it contain other information that might change in the future. A good example of a
URN is an ISBN number for a book. An ISBN number only identifies a book, but
not any of its copies.
To access the resource identified by a URN, the URN needs to be resolved into
access information, such as a URL. Using URNs to identify resources and URLs to
access resources allows one URN to (indirectly) refer to many copies at different
locations, as shown in Figure 1(b). This separation allows transparent replication
of Web resources. Moreover, since a URN is a stable reference to a resource and
not its location, we can also move the resource around without changing its URN.
AURN can thus support mobile resources by (indirectly) referring to a set of URLs
that changes over time.
Figure 1: Naming a replicated resource. (a) Using multiple URLs. (b) Using a single URN. (c) Using an HFN combined with a URN.
Since URNs are intended to be primarily used by machines to identify re-
sources, there is no requirement to make them easy-to-use or remember by humans.
The only requirement on URNs regarding humans, as stated in RFC 1737, is that
URNs are human transcribable. For instance, ISBN numbers can easily be written
down and copied by humans, but are not easily remembered. However, humans do
need a way to name Web resources in such a way that those names can easily be
shared and remembered.
To fill the gap between what URNs provide and what humans need, a new kind
of URI is needed, as suggested in RFC 2276. We propose the introduction of a new
URI scheme, called Human-Friendly Names (HFNs), to meet this need. HFNs
are tailored to be convenient to use by humans and therefore explicitly allow the
use of descriptive names, unlike URNs. There are different approaches to human
friendly naming. Two well-known approaches are the "yellow pages" and "white
pages" services.
The "yellow pages" approach is to use a directory service, such as those based
on LDAP [6]. Such a service allows a user to search for a resource based on
attribute values that have been assigned to that resource. The main drawback of directory
services is their limited scalability. In practice, only implementations based
on local-area networks offer acceptable performance. Large-scale, worldwide directory
services are yet to be developed. At best, the current implementations are
constructed as federations of local directory services in which searches are not allowed
to span multiple sites unless severely restricted.
The "white pages" approach is to make use of a (possibly hierarchical) naming
graph, such as used in file systems. The Domain Name System (DNS) is the
prime example of traditional naming services. Although naming services offer
less advanced facilities than directory services, they have proven to easily scale
to worldwide networks with millions of users. From this perspective, we choose
to base our HFNs on a hierarchical name space implemented using the DNS. The
hierarchical name space provides users a convenient and well-known way to name
resources. We return to our choice for DNS later.
Like a URN, an HFN needs to be resolved to one or more URLs when the
user needs to access the named resource. We propose a two-step process to HFN
resolution. In our approach, we bind an (hierarchical) HFN to a URN, and bind a
URN to possibly multiple URLs, as described above. HFN resolution then consists
of first resolving the HFN to its associated URN, and then resolving the URN to its
associated URLs, as shown in Figure 1(c).
There are many advantages to this two-step approach. If a resource is repli-
cated, or moved to another location, this will not affect the name it was given by
its users. Likewise, a user is free to change the HFN since this will not affect the
placement of replicas. Moreover, a user may even decide to use several names to
refer to the same resource, similar to the use of symbolic links in file systems.
Our HFN-to-URL resolution mechanism pays specific attention to two scalability issues. First, we can support a large number of resources. Second, we can support resources distributed over a large geographical area. To the best of our knowledge, our design provides the first solution to large-scale HFN-to-URL resolution.
Model
For our naming system, we restrict ourselves to naming only highly popular and
replicated Web resources. Support for other resource types, such as personal Web
pages or highly mobile resources, is yet to be incorporated. We also assume that
changes to a particular part of the name space always originate from the same
geographical area. The reason for choosing this restricted resource model is that
it allows us to make efficient use of the existing DNS infrastructure. Given these
restrictions, our HFN scheme is currently not appropriate as a replacement for
URLs in general.
Since our HFNs are implemented using DNS, their syntax closely follows the
structure of domain names. An example of a HFN that refers to the source code of
the current stable Linux kernel is hfn:stable.src.linux.org. The hfn: prefix identifies
our URI scheme; the rest is the actual name of the resource. Our security policy is
minimal, we just want to prevent unauthorized changes to the HFN-to-URL map-
ping. We do not make the HFN-to-URL mapping confidential since we assume
that HFNs will be shared in the open, in much the same way as URLs are shared
today.
Since the use of locality is of prime importance for scalability, we want the
HFN resolution service and its various components to use locality when possible.
When resolving a name, locality should be used in two distinct ways. First, the
resolution service should provide a user with access to the nearest replica. This
type of locality is needed for a scalable Web system.
The second form of locality requires that the name resolution process itself
should also use nearby resources when possible. For example, assume we have a
user located in San Francisco who wants the DNS name vu.nl to be resolved. In the
current DNS, name resolution normally proceeds through a root server, the name
server for the nl domain (which is located in The Netherlands), and the name server
for the Vrije Universiteit (which is located in Amsterdam). If the resource named
by vu.nl happens to be replicated and already available in San Francisco, the lookup
request will have traveled across the world to subsequently return an address that
is close to the requesting user. In this case, it would have been better if the name
resolution process itself would have used only name servers in the proximity of the
user.
Architecture
In its general form, the HFN-to-URL mapping is an N-to-M relation. In other
words, multiple HFNs may refer to the same set of URLs. This mapping may
change regularly. For example, a resource is given an extra name, a replica is
added, or moved to another location. To efficiently store, retrieve, and update
the HFN-to-URL mapping, we split it into two separate mappings, as discussed
before. The first mapping is the HFN-to-URN mapping. The second mapping is
the URN-to-URL mapping. We use URNs to provide us with a stable, globally
unique name for every resource. By splitting the HFN-to-URL mapping in two
separate mappings, we have an N-to-1 relation and a 1-to-M relation, which are
each far easier to maintain compared to a single N-to-M relation.
The main purpose of the HFN-to-URN mapping is to uniquely identify a resource
by providing its URN. The HFN-to-URN mapping is maintained by a name
service. The URN-to-URL mapping is maintained by a location service whose
sole purpose is to locate a resource. HFN resolution thus consists of two steps. In
the first step, the HFN is resolved to a URN by the name service, and in the second
step, the URN is resolved to a URL by a location service. The type of URN used
in our naming scheme is determined by the location service.
To use our naming system, we add three new elements to the normal setup of
Web browsers and HTTP servers: an HFN-to-URL proxy, a name service, and a
location service. It is the task of the HFN-to-URL proxy to recognize HFNs and
resolve them by querying the name and location service. As such, it operates as a
front end to these two services. With the URL obtained from the location service,
the proxy accesses the named resource. In our design, we chose the proxy to be
a separate process that can interact with any standard Web browser. However, a
plug-in module can introduce the same functionality directly into a Web browser.
Figure
2 shows the setup we propose to retrieve Web resources named by
HFNs. When a user enters an HFN in the Web browser, the browser contacts
the HFN-to-URL proxy to obtain the Web resource named by the HFN (step 1).
The proxy recognizes the HFN and contacts the name service in step 2. The name
service resolves the name to a URN, and returns it to the proxy (step 3). The proxy
then contacts the location service in step 4. The location service resolves the URN
to a URL, and returns it to the proxy (step 5). The proxy can now contact the HTTP
server storing the named resource in step 6, which returns an HTML page in step
7. The proxy then returns this HTML page to the Web browser (step 8).
Figure 2: The setup to retrieve Web resources named by HFNs.
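The following sketch illustrates the proxy's part of this flow. The name_service.resolve and location_service.lookup calls are hypothetical interfaces standing in for steps 2-5 (they are not real APIs), and error handling, caching, and replica selection are omitted.

    from urllib.request import urlopen

    HFN_PREFIX = "hfn:"

    class HFNProxy:
        def __init__(self, name_service, location_service):
            # Assumed interfaces: name_service.resolve(hfn) -> URN (steps 2-3),
            # location_service.lookup(urn) -> list of URLs, nearest first (steps 4-5).
            self.name_service = name_service
            self.location_service = location_service

        def fetch(self, uri):
            if not uri.startswith(HFN_PREFIX):
                return urlopen(uri).read()            # not an HFN: pass it through
            hfn = uri[len(HFN_PREFIX):]               # e.g. "stable.src.linux.org"
            urn = self.name_service.resolve(hfn)      # HFN -> URN
            urls = self.location_service.lookup(urn)  # URN -> URL(s)
            return urlopen(urls[0]).read()            # fetch the (nearest) replica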
Name Service
We use the DNS to store the mapping from an HFN to URN. DNS is at the moment
primarily used to name Internet hosts and email destinations. We can, however,
reuse the existing DNS infrastructure for HFNs with only minimal changes, as we
explain next.
The Domain Name System
DNS provides an extensible hierarchical name space, in which more general naming
authorities delegate responsibility for parts of their name space (subdomains)
to more specific naming authorities. For example, the naming authority responsible
for the com domain, delegates the responsibility for the intel.com domain to the
Intel Corporation. A naming authority is responsible for providing the resources
needed to store and query a DNS name, and can decide for itself which names to
store in its subdomain. The Intel Corporation can thus create whatever host name
or email destination it wants in its subdomain.
Resolving a host name in DNS consists, conceptually, of contacting a sequence
of name servers. The domains stored by the sequence of name servers are increasingly
specific, allowing the resolution of an increasing part of the host name. For
example, to resolve the host name www.intel.com, the resolution process visits, in
turn, the name servers responsible for the root, com, and intel.com domains, respec-
tively. The last name server will be able to resolve the complete host name.
To enhance its performance, DNS makes extensive use of caching. When a
name server is asked to resolve a DNS name recursively, it will contact the sequence
of name servers itself to resolve the name. The name server can then cache
the intermediate and end results of the resolution process. This procedure avoids
having to contact the sequence of name servers a second time when the same or a
similar name is looked up. However, for effective caching, DNS needs to assume
that the name-to-address mapping does not change frequently.
DNS uses resource records to store name mappings at name servers. A DNS
name can have zero or more resource records. There are two kinds of resource
records. The first kind stores user data, like the resource records for naming Internet
hosts and email destinations. This kind of record associates an IP address
or a mail server with a DNS name. The second kind is the name server resource
record, which is used internally by DNS to implement the name space delegation.
This resource record associates another DNS server with a DNS name, indicating
another name server at which to continue name resolution.
Using DNS to Store HFNs
We introduce a new type resource record to store the association of a URN with
a DNS name. When a user introduces a new HFN, we create a resource record to
store the URN associated with that HFN. This record will subsequently be inserted
into the DNS name space. The proper name server to store the record is the one
responsible for the parent domain of the HFN. For instance, to insert the HFN
hfn:devel.src.linux.org, we need to contact the server responsible for the src.linux.org
domain. The actual insertion at that server can be done dynamically using the DNS
update operation, as described in RFC 2136.
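For illustration only, the sketch below performs such a dynamic update with the dnspython library. Since the new resource record type proposed here is not part of standard DNS software, the URN is stored in a TXT record as a stand-in; the zone, the server address, and the URN value are made-up examples.

    import dns.update
    import dns.query

    # Register hfn:devel.src.linux.org by adding a record holding its URN to the
    # name server responsible for the src.linux.org zone (RFC 2136 dynamic update).
    update = dns.update.Update("src.linux.org")
    update.add("devel", 3600, "TXT", "urn:example:0123456789abcdef")  # fictitious URN
    response = dns.query.tcp(update, "192.0.2.53")    # address of the zone's primary
    print(response.rcode())                           # 0 (NOERROR) on success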
Location Service
We use the Globe location service [9] to resolve URNs into URLs. It allows us
to associate a set of URLs with a single URN. Since the location service uses so-called
object handles to identify resources, we use these object handles as URNs in
our two-level naming scheme. However, to ease our discussion, we will continue
to use the term URN. The location service offers, in addition to a lookup operation
for URNs, two update operations: insert and delete. The insert and delete operation
are used to modify the set of URLs associated with a URN.
Architecture
To efficiently update and look up URLs, we organize the underlying wide-area
network (i.e., the Internet) into a hierarchy of domains. These domains are similar
to the ones used in DNS. However, their use is completely independent of DNS
domains, and they have been tailored to the location service only. In particular, the
domains in the location service represent geographical, administrative, or network-
topological regions. For example, a lowest-level domain may represent a campus-wide
network of a university, whereas the next higher-level domain represents the
city where that campus is located. Another important difference is that the domain
hierarchy is a completely internal structure, unlike DNS it is not visible to users.
Each domain is represented in the location service by a directory node. Together
the directory nodes form a worldwide search tree. A directory node has a
contact record for every (registered) resource in its domain. The contact record
is divided into a number of contact fields, one for each child node. A directory
node stores either a forwarding pointer or the actual URLs in the contact field.
A forwarding pointer indicates that URLs can be found at the child node. Contact
records at leaf nodes are slightly different: they contain only one contact field
storing the URLs from the leaf domain.
Every URL stored in the location service has a path of forwarding pointers
from the root down, pointing to it. We can thus always locate a URL starting at
the root node and following this path. In the normal case, URLs are stored in leaf
nodes, but storing URLs at intermediate nodes may, in the case of highly mobile
resources, lead to considerably more efficient lookup operations, as discussed be-
low. However, since our current model excludes (highly) mobile resources, we can
safely assume that all URLs are always stored in leaf nodes.
Figure
3 shows as an example the contact records for one URN. In this example,
the root node has one forwarding pointer for the URN, indicating that URLs can be
found in its left subtree, rooted at the USA node. The USA node, in turn, has two
forwarding pointers, pointing to the California and Texas nodes, respectively. Both
of these nodes have a forwarding pointer to a leaf node where a URL is actually
stored.
Operations
When a user wants to know the URL of a resource, it initiates a lookup operation at the leaf node of the domain in which it resides. The user provides the resource's URN as a parameter. The lookup operation starts by checking whether the leaf node has a contact record for the URN. If it has a contact record, the operation returns the URL found in the contact record. Otherwise, the operation recursively checks nodes on the path from the leaf node up to the root. If the lookup operation finds a contact record at any of these nodes, the path of forwarding pointers starting at this node is followed downwards to a leaf node where a URL is found. If no contact record is found at any of the nodes on the path from the leaf node to the root, the URN is unknown to the location service.

Figure 3: The organization of contact records in the tree for a specific resource. The example tree has a World root; the USA node has children California, Texas, and Florida; the leaf nodes shown are Los Angeles, Houston, and Miami. A contact field may be empty, contain a forwarding pointer, or contain a URL.
As an example, consider a user located near the leaf node of Miami, as shown
in
Figure
3. When the leaf node is contacted by the user with a request for a URL,
it will forward the request to its parent, the Florida node, since it does not contain
a contact record. The Florida node also does not know about the URN, and will,
in turn, forward the request to its parent, the USA node. The USA node does
know about the URN, and forwards the request to one of its children indicated by
a forwarding pointer. The lookup operation then follows the path of forwarding
pointers to one of the leaf nodes, for instance, the Houston leaf node. By going
higher in the search tree, the lookup operation effectively broadens the area that is
searched for a URL thus resembling search algorithms based on expanding rings.
The goal of the insert operation is to store a URL at a leaf node and create a
path of forwarding pointers to the leaf node. When a resource has a new replica
in a leaf domain, the URL of the new replica is inserted at the node of the leaf
domain. The insert operation starts by inserting the URL in the contact record of
the leaf node. The insert operation then recursively requests the parent node, the
grandparent node, etc., to install a forwarding pointer. The recursion stops when a
node is found that already contains a forwarding pointer, or otherwise at the root.
The delete operation removes the URL and path of forwarding pointers analogous
to the insert operation. Further technical details can be found in [10].
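A simplified sketch of these operations on a single search tree. The Node class and its method names are ours; a leaf's contact record is modelled as a list of URLs and an inner node's as a set of forwarding pointers (child references), while partitioning, pointer caches, security, and concurrency are ignored.

    class Node:
        def __init__(self, parent=None):
            self.parent = parent
            self.children = []
            self.records = {}      # URN -> list of URLs (leaf) or set of children (inner)

        def is_leaf(self):
            return not self.children

        def lookup(self, urn):
            # Climb until some node has a contact record, then follow the
            # forwarding pointers down to a leaf that stores a URL.
            node = self
            while node is not None and urn not in node.records:
                node = node.parent
            if node is None:
                return None                            # URN unknown to the location service
            while not node.is_leaf():
                node = next(iter(node.records[urn]))   # follow one forwarding pointer
            return node.records[urn][0]

        def insert(self, urn, url):
            # Called at the leaf node of the domain where the new replica resides:
            # store the URL and create the path of forwarding pointers towards the root.
            assert self.is_leaf()
            self.records.setdefault(urn, []).append(url)
            child, node = self, self.parent
            while node is not None:
                pointers = node.records.setdefault(urn, set())
                if child in pointers:
                    break                              # path already exists from here upwards
                pointers.add(child)
                child, node = node, node.parent

The pointer-cache improvement described next would additionally let every node visited on the upward path of a lookup remember the node at which the URL was eventually found.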
Improvements
The basic search tree described so far obviously does not scale yet. In particular,
higher-level directory nodes, such as the root, pose a serious problem. They have
to store a large number of contact records and handle a large numbers of requests.
Our solution is to partition an overloaded directory node into multiple directory
subnodes. Each subnode is responsible for only a subset of the contact records
originally stored at the directory node, and therefore has a much smaller load. We
use a hashing technique to decide at which subnode to place a contact record. The
hashing technique determines the subnode using only the contact record's URN.
A second way to alleviate the load on higher-level nodes is to make use of
caches. We cannot effectively use a scheme in which URLs are cached since URLs
can easily change in the presence of mobility. We therefore devised a caching
scheme called pointer caches. Assume that a resource changes its URL mainly
within a domain D, but hardly ever moves outside that domain. In that case, it
makes sense to let the directory node for D store the URL, and subsequently let
other nodes cache a pointer to the directory node for D. Since the resource will
hardly ever move outside domain D, a cached pointer will remain valid despite that
the URL of the resource may change regularly.
In this approach, whenever a lookup operation finds a URL at node N, it returns
the URL as well as a pointer to N. All nodes visited during the lookup will then
subsequently store the pointer to N in their local pointer cache. The next time a
lookup operation visits any of these nodes, it can be immediately directed to node
N. In this way, the lookup operation avoids visits to higher-level nodes. Details
on this caching scheme can be found in [1]. The effects of both improvements are
described below.
Discussion
An important aspect of our HFN-to-URL resolution scheme is its scalability. As
explained in the introduction, we can distinguish two types of scalability: the support
of a large number of resources and the support for resources that are distributed
over a large geographical area. For our resolution scheme to be scalable, both kinds
of scalability need to be addressed in the name service and in the location service.
Name Service
The first form of scalability requires our name service to deal with a large number of resources, that is, deal with a large number of HFN-to-URN mappings. The current DNS infrastructure supports in the order of 10^8 host names and email destinations. By supporting only popular Web resources, we do not significantly increase the number of names stored in DNS, and we thereby ensure that we do not exceed its capacity.
The second form of scalability requires our name service to deal with names
distributed over a large geographical area. We tackle this second problem by ensuring
the use of locality in lookup and update operations. The locality of lookup
operations in DNS is provided by caches. If a resource named by an HFN is popu-
lar, its HFN-to-URN mapping will be stored in the caches of name servers, providing
users located near the cache with local access to the HFN-to-URN mapping.
A DNS query to obtain the URN can thus be answered directly, without the need
to contact a name server located far away. By assuming the use of popular Web
resources and a stable HFN-to-URN mapping, we ensure that caching remains ef-
fective. Update operations in the name service exploit locality as well. Since we
assume that changes to a specific part (i.e., subdomain) of the name space always
originate from the same geographical area, we can place the name server responsible
for that part in or near the area where the changes originate.
With the restrictions discussed above, DNS is an attractive name service, given
its existing infrastructure. Unfortunately, if we want to drop those restrictions on
the resource model, scalability problems could arise in DNS that prevent our HFN
resolution mechanism from scaling further. If we want to support resources that are
unpopular, caching will be ineffective, and DNS might become overloaded. If we
want to support mobile resources, the caching mechanism might cache mappings
in the wrong place. Therefore, if we want to support a more general resource
model, we need to replace DNS with a more scalable name service. We describe
the design of such a name service in [2]. Note that we do not criticize DNS: it has
never been designed to support the HFNs as we propose and it can be argued that
we are actually misusing the system.
Location Service
The problem of storing a large number of URN-to-URL mappings in the Globe
location service can be divided into a storage and a processing problem. We can
show by a simple computation that the storage requirements of the location service
are not a problem. Consider, for example, the root node, and assume that a single
contact record has a size of 1 KByte at the root. This 1 KByte of data should
contain the URN, the forwarding pointers, some local administrative information,
and still leave space for future additions, like cryptographic keys. If we assume a
worst-case scenario, where our system supports in the order of 10^8 resources (as discussed above), this would mean that the root node has to store 100 GByte. Using the partitioning scheme mentioned earlier, we can distribute the 10^8 contact records over, say, 100 subnodes, resulting in 1 GByte per subnode. Using our partitioning scheme, storage requirements are clearly not a problem.
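The same back-of-the-envelope computation in code form (the 10^8 records, 1 KByte per record, and 100 subnodes are the assumptions stated above):

    records = 10**8                   # worst-case number of resources
    bytes_per_record = 1024           # assumed contact record size at the root
    subnodes = 100                    # root subnodes after partitioning
    total_gb = records * bytes_per_record / 10**9    # about 100 GByte in total
    per_subnode_gb = total_gb / subnodes             # about 1 GByte per subnode
    print(total_gb, per_subnode_gb)                  # 102.4 1.024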
The processing of lookup requests poses a more serious threat. We can ignore
update requests since they are rare compared to lookup requests. Our partitioning
scheme clearly also increases the lookup processing capacity, but what if it still is
not enough? To investigate the processing load further, we calculated the effect
of replicated resources and simulated the effects of pointer caching on the lookup
processing load.
As a metric of the scalability of our location service, we introduce the lookup
length. The lookup length is the number of nodes visited during a lookup operation,
and provides an intuitive measure of the processing load in the tree. A large value
means that many nodes have been visited, resulting in a load increase in all those
nodes. It also means, in general, that nodes higher up in the tree (i.e., the more
centralized nodes) have been visited. In essence, we would like to keep the lookup
length as small as possible.
We first investigate the effect of resource replication on the location service.
When a resource becomes more popular, invariably, more replicas of the resource
will be added. This results in more URLs being stored in the location service. To
provide optimal local access, the replicas will be distributed far away from each
other, and this results in a tree in which the paths of forwarding pointers from the
root down to the different URLs, will meet only in the root node. Assume each
node in the tree has a fanout of N and that M replicas have been created, evenly
distributed across the leaf domains. In this case, we can expect that M of the N
children of the root node will have registered a replica in their respective domain.
As a consequence, M out of N lookup requests will no longer need to be forwarded
to the root node. If replicas are evenly distributed across leaf domains, the load on
the root node thus decreases linearly with the number of replicas until M = N, at which point
the root node is no longer used by lookup operations.
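Stated compactly, and under the same assumption of evenly distributed replicas, the fraction of lookup requests that still has to be forwarded to the root node is

\[
\frac{\text{lookups reaching the root}}{\text{total lookups}} \;=\; \frac{N - M}{N}, \qquad 0 \le M \le N,
\]

which decreases linearly in M and vanishes when M = N.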
To investigate the effects of our pointer cache system, we conducted a simulation
experiment. The basic idea is that with an increasing number of lookup
operations, pointer caches should incur higher hit ratios, in turn, decreasing the
average lookup length. In our simulation, we built a search tree of height four with
a fanout of 32, leading to just over a million leaf nodes. The simulation consists of
inserting a single URL at an arbitrary leaf node, and initiating lookup operations at
randomly chosen leaf nodes. Each operation makes use of pointer caches, possibly
creating new entries, as explained above. For each lookup operation, we compute
its length by counting the number of nodes visited, and in the end, compute an
average length. This average lookup length should decrease with the number of
performed operations.
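To make the set-up concrete, the following sketch reproduces the spirit of this experiment. It is not the simulator used for Figure 4; it assumes a deliberately simple caching rule (every node on the upward part of a search path caches a direct pointer to the leaf storing the URL, and following a cached pointer costs one extra node visit). The observed trend, a decreasing average lookup length, is what matters.

/* Minimal sketch of the cache experiment: tree of height 4, fanout 32,
 * 32^4 = 1,048,576 leaf nodes, a single URL stored at one random leaf.
 * Assumes RAND_MAX >= number of leaves. */
#include <stdio.h>
#include <stdlib.h>

#define FANOUT  32
#define HEIGHT  4
#define LOOKUPS (1L << 20)

static long powf32(int e) { long p = 1; while (e-- > 0) p *= FANOUT; return p; }

/* id of the ancestor of 'leaf' at depth d (the root has depth 0) */
static long ancestor(long leaf, int d) { return leaf / powf32(HEIGHT - d); }

int main(void)
{
    long nleaves = powf32(HEIGHT);
    long offset[HEIGHT + 1], total = 0;                /* node numbering per depth  */
    for (int d = 0; d <= HEIGHT; d++) { offset[d] = total; total += powf32(d); }

    char *cached = calloc(total, 1);                   /* one cache flag per node   */
    long target  = rand() % nleaves;                   /* leaf storing the URL      */
    long visits  = 0;

    for (long op = 1; op <= LOOKUPS; op++) {
        long leaf = rand() % nleaves;                  /* lookup starts at a leaf   */
        int d = HEIGHT;
        while (1) {
            long node = ancestor(leaf, d);
            visits++;                                  /* visit this node           */
            if (cached[offset[d] + node]) { visits++; break; }  /* jump via cache   */
            if (node == ancestor(target, d)) {         /* node lies on path to URL  */
                visits += HEIGHT - d;                  /* follow pointers downward  */
                break;
            }
            d--;                                       /* otherwise go one level up */
        }
        for (int k = HEIGHT; k >= d; k--)              /* install cache entries on  */
            cached[offset[k] + ancestor(leaf, k)] = 1; /* the traversed upward path */

        if ((op & (op - 1)) == 0)                      /* report at powers of two   */
            printf("%8ld lookups: average length %.2f\n", op, (double)visits / op);
    }
    free(cached);
    return 0;
}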
Figure 4 shows the result of our simulation, and confirms that with an increasing
number of lookup operations the lookup length decreases, putting less load on
the higher nodes in the tree. More importantly, the figure also shows that this effect
is already present with small numbers of lookup operations. Since we only support
popular Web resources, we know pointer cache entries will be reused, and caching
will therefore be effective.
The location service deals with URLs distributed over a large geographical area
by using locality through its distributed search tree and related lookup algorithm.
By starting the lookup operation at the leaf node to search the nearby areas first,
and continuing at higher nodes in the tree to search larger areas, the location service
avoids using remote resources when a URL can be found using local resources
only. Given our goal to support popular replicated Web resources, there is always
a replica nearby.
[Figure 4: The average lookup length of a lookup operation, plotted against the number of lookup operations (logarithmic scale), illustrating the cache effect on lookup operations.]
Related Work
Most work regarding URIs is done within the working groups of the Internet Engineering
Task Force (IETF). The URN working group has been primarily responsible
for defining URNs. For instance, it has defined the overall URN name space
in RFC 2141, provided an example URN namespace for IETF documents in RFC
2648, and outlined a general architecture to resolve URNs in RFC 2276. In this
architecture, the URN name space actually consists of several independent URN
name spaces, and every URN name space has (potentially) its own specific URN
resolver. Resolving a URN thus requires the selection of the appropriate URN re-
solver. This selection of URN resolver is done by a Resolver Discovery Service
(RDS). Daniel and Mealling propose to build an RDS using DNS [3]. In their pro-
posal, DNS contains resource records specifying rewrite rules. When a URN needs
to be resolved, these rewrite rules are applied to the URN, resulting in a resolver
that can resolve the complete URN, or possibly even the resource itself. Our research
has not included an RDS since we focused on one specific URN namespace,
that is, the object handle space.
Another related working group of the IETF is the Common Name Resolution
Protocol (CNRP) working group. The group is relatively new, and deals with the
notion of human friendly naming through so-called "Common Names" [7]. Examples
of common names are trade names, company names, and book titles. The goal
of the working group is to create a lightweight search protocol. In this protocol,
a user provides parameters beside the common name to further specify the information
being searched. Common names can be resolved at different information
providers to get different types of information. The implementation of a scalable
common name resolution service is outside the scope of the working group.
Related to the work done by the URN working group, is work done by the
International DOI Foundation [4]. Its goal is to develop the Digital Object Identifier
(DOI). This is a system for identifying and exchanging intellectual property
in the digital environment. This work was initiated by the American publishing
community. The current DOI implementation uses the Handle system as its location
service. The Handle system maps a DOI (known as a handle) consisting of a
prefix and suffix to access information, for instance a URL. The prefix of the handle
specifies a naming authority, and the suffix specifies a name under that naming
authority. Resolving a handle consists of contacting a Global Handle Registry to
find a Local Handle Registry, where the handle can be fully resolved. The Handle
system supports scalability by allowing both the global and local handle registries
to be replicated. However, it does not ensure that the access information it provides
refers to resources local to the user, nor does the handle resolution process use local
resources when possible.
Kangasharju et al. [5] describe a location service (called LDS) that is based
solely on DNS. Their system maps URLs to IP addresses, whereas in our approach,
HFNs are mapped to URLs. In LDS, IP addresses of the servers are directly stored
in DNS, while we store a URN in DNS and use a separate service to provide a
set of URLs for the named resource. Since LDS stores IP addresses in DNS, the
DNS server needs to be updated every time a replica is added or removed, making
their system more dynamic and caching less effective. In addition, we can easily
and efficiently provide the URL that is nearest to the user, which is not the case in
LDS.
The location of servers storing replicated Web resources is an integral part of
the commercial content delivery system of Akamai and Sandpiper. In both systems
the original URL of the replicated resource needs to be changed to point to servers
of the delivery system. Akamai uses a modified Web server to redirect clients to
servers, while Sandpiper uses a DNS-based solution. Both systems are said to
take both the client location and the current network conditions into account when
providing the client with a Web server. While both systems provide local access to
the Web resources they support, their naming system is not local.
Conclusions and Future Work
We have developed a location service that, together with DNS, can be used to resolve
HFNs to URLs in a scalable fashion. Scalability is achieved by using two
distinct mappings, one for naming resources and one for locating them. Using this
separation, we can apply techniques specific to the respective services to obtain
scalability. An important part of our design is the reuse of the existing DNS in-
frastructure. This provides us with benefits in the form of an existing infrastructure
and experience using it. We are aware of the limitations imposed by DNS, which
has never been designed to support naming as proposed by us. As such, DNS is
to be seen as an example naming system that can be used for demonstrating the
feasibility of our approach.
We have implemented our HFN resolution scheme using the software of the
BIND project and software we developed ourselves as part of the Globe project.
The implementation is currently used in an initial setup involving four European
sites, one site in the USA, and one site in the Middle East. Our future work will
consist of using the implementation in two experimental applications to gain more
experience. The first application deals with replicating Web documents, and the
second deals with the distribution of free software packages. These experiments
will allow us to substantiate our scalability and human-friendliness claims.
--R
Efficient Tracking of Mobile Objects in Globe.
Scalable Naming in Global Middleware.
Resolution of Uniform Resource Identifiers using the Domain Name System.
The International DOI Foundation.
Locating Copies of Objects Using the Domain Name System.
Big Book of Lightweight Directory Access Protocol (LDAP) RFCs.
Context and Goals for Common Name Resolution.
Architectural Principles of Uniform Resource Name Resolution.
Locating Objects in Wide-Area Systems
Algorithmic Design of the Globe Wide-Area Location Service
--TR
--CTR
N. J. E. Wijngaards , B. J. Overeinder , M. van Steen , F. M. T. Brazier, Supporting internet-scale multi-agent systems, Data & Knowledge Engineering, v.41 n.2-3, p.229-245, June 2002
Michael Walfish , Hari Balakrishnan , Scott Shenker, Untangling the web from DNS, Proceedings of the 1st conference on Symposium on Networked Systems Design and Implementation, p.17-17, March 29-31, 2004, San Francisco, California
Hari Balakrishnan , Karthik Lakshminarayanan , Sylvia Ratnasamy , Scott Shenker , Ion Stoica , Michael Walfish, A layered naming architecture for the internet, ACM SIGCOMM Computer Communication Review, v.34 n.4, October 2004
Jeffrey Pang , James Hendricks , Aditya Akella , Roberto De Prisco , Bruce Maggs , Srinivasan Seshan, Availability, usage, and deployment characteristics of the domain name system, Proceedings of the 4th ACM SIGCOMM conference on Internet measurement, October 25-27, 2004, Taormina, Sicily, Italy
Arno Bakker , Maarten Van Steen , Andrew S. Tanenbaum, A wide-area Distribution Network for Transactions on Internet Technology (TOIT), v.6 n.3, p.259-281, August 2006
Bogdan C. Popescu , Bruno Crispo , Andrew S. Tanenbaum , Arno Bakker, Design and implementation of a secure wide-area object middleware, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.51 n.10, p.2484-2513, July, 2007
Sylvia Ratnasamy , Scott Shenker , Steven McCanne, Towards an evolvable internet architecture, ACM SIGCOMM Computer Communication Review, v.35 n.4, October 2005 | DNS;scalability;resource location;wide-area systems;keywords: naming |
613982 | Assessing the Performance of the New IBM SP2 Communication Subsystem. | IBM has recently launched an upgrade of the communication subsystem of its SP2 parallel computer. This change affects both hardware and software elements: high-performance switch, message interface adapters, and a new implementation of the MPI message-passing library. To characterize to what extent these changes will affect the execution times of parallel applications, these authors have run a collection of benchmarks on an SP2 with the old communication subsystem and on the same machine after the upgrade. These benchmarks include point-to-point and collective communication tests as well as a set of complete parallel applications. The performance indicators are the latency and throughput exhibited by the basic communication tests and the execution time in the case of real applications. Results indicate that only under certain circumstances does a significant performance increase result. | Introduction
A long time has passed since the high-performance computing community realized that
the highest computation speeds at reasonable cost can only be reached via massive
parallelism. (* This work has been done while R. Beivide and J.A. Gregorio were at the Department of
Electrical and Computer Engineering, University of California, Irvine, as visiting researchers.) During
the eighties and early nineties a sort of "building euphoria" led to the
design and use of massively parallel processors (MPPs) with thousands or even tens of
thousands of processing elements. However, nowadays the community seems to think that a
few hundred processors represent an upper limit on the feasibility of parallel computing
systems. Additionally, and mainly due to the lack of stability in the supercomputing sector,
manufacturers are making relatively conservative decisions regarding the hardware and
software elements of MPPs, as the only way of guaranteeing that their systems will survive
in the market for a reasonable period. IBM's SP2 parallel computer, built around
workstation-based hardware and software, represents one of the most successful approaches
to MPPs, and seems capable of surviving in these difficult times [HP96].
Most current MPPs (including the SP2) have been shown to be suitable for running coarse
grain parallel applications that interchange very large data structures. In many cases, these
systems are used simply to increase the throughput of sequential jobs of multiple users
sharing a machine. However, the big challenge for the SP2 and similar machines (distributed
memory parallel computers) is to be efficient when running communication-demanding,
medium and fine grain parallel applications, reducing their execution times.
Nowadays, and following the principle of allowing the production of portable software,
MPPs are programmed using conventional imperative languages, enhanced with
communication libraries such as PVM and MPI to implement message passing and
synchronization among processes [Gei94, MPI94]. In this environment, the efficiency of
parallel applications is maximized when the workload is evenly distributed among processors
and the overhead introduced in the parallelization process is minimized: the cost of
communication and synchronization operations must be kept as low as possible. In order to
achieve this, the interconnection subsystem used to support the interchange of messages
must be fast enough to avoid becoming a bottleneck.
Traditionally, latency and throughput are the two parameters used to indicate the
performance of an interconnection network for MPPs. While throughput increases from one
generation of MPPs to the next, a significant reduction of latency seems to be a tougher
problem for designers and industry. Unfortunately, the performance of parallel applications
is very sensitive to latency and, while an increase in throughput can help processing long
messages, it does not help significantly when interchanging reasonably-sized messages of
tens to thousands of bytes-i.e., several orders of magnitude smaller than those which are
required to reach the maximum achievable throughput in current MPPs.
Most of the current attempts to measure and characterize the different components of
latency in modern MPPs lead to the same conclusion: the interconnection network itself
(routers, wires) is over-dimensioned compared to the nodes' ability to send and receive
messages. In other words, the largest components of latency are not in the network. Network
interfaces and communication software cause most of the message passing overhead. These
two elements (I/O hardware and software) must be greatly improved to reduce message
latency in MPPs and, thus, to achieve better performance when running parallel applications
on them. The importance of this issue is being highlighted by several research and
development projects, including [Cul96, Myr96].
Recently IBM has developed a new version of the high-performance switch which
constitutes the interconnection network of the SP2. The upgrade affects the switch itself, and
also the I/O adapters connecting the processors to the switch. Before that, IBM upgraded the
operating system from AIX 3 to AIX 4. The latter includes a new version of MPI, specially
tailored for the SP2; with AIX 3, MPI was available as an additional layer of software over
MPL, the native IBM message passing library. Both changes are aimed to increase parallel
application performance.
In this paper we report the experimental data obtained from running a collection of
benchmark applications on a SP2, first with the old and after that with the new
communication subsystem. This information is used to make a performance comparison
between two SP2 computers whose only difference is the communication subsystem. We don't
try to report an exhaustive evaluation of the new subsystem, nor to provide an exact
characterization of latency components-that will be done in future papers. However, it is
important to allow the community to know the preliminary experimental data, especially for
those in search of a machine to run parallel applications, and those designing new MPPs (or
re-designing existing ones).
This paper is structured as follows. In the next section a brief introduction to the IBM SP2
parallel computer is made, including the (limited) information available about the recent
change in its communication subsystem. Then we describe the collection of programs that
have been used to test the machine (Section 3). The results of running these programs are
analyzed in Section 4. The paper ends with some conclusions (Section 5) and
acknowledgements (Section 6). The reader can find in the References the access path to most
of the parallel code used to elaborate this paper.
2 An Overview of the IBM SP2
The SP2 is a distributed memory parallel computer where processors (nodes) are
interconnected through a communication subsystem. IBM offers several alternatives for this
subsystem. It is possible to provide communication among processors using standard
networking technology, such as Ethernet, FDDI or ATM. However, for high-end SP2 systems
IBM offers a high-performance switch, which provides better characteristics for parallel
computing. Figure 1 shows the high-level system structure of the SP2 [Age95].
[Figure 1. The SP2 system. The detail of a node shows the processor and memory connected through a Micro Channel controller and the system I/O bus to the switch adapter and other adapters; the switch adapters of all nodes connect to the high-performance switch.]
IBM also offers several alternatives for the nodes. We had access to the SP2 of C4 (see the
acknowledgements section), composed of 32 nodes, each of them powered by a 66 MHz
processor attached to 256 MB of memory by means of a 128-bit
wide bus. These nodes have 32 KB of instruction cache, 128 KB of data cache and 2 MB of L2
cache. A Micro Channel controller governs the I/O bus which connects the processor/memory
subset to devices such as disks, Ethernet networks and the high-performance switch, by
means of the appropriate adapters.
The SP2 communication subsystem is composed of the high-performance switch, plus the
adapters that connect the nodes to the switch. Both elements need to be carefully designed to
allow them to cooperate without introducing bottlenecks. The adapter contains an onboard
microprocessor to offload some of the work of passing messages from node to node, and some
memory to provide buffer space. DMA engines are used to move information from the node to
the adapter's memory, and from there to the switch link.
The high-performance switch is described by IBM as "an any-to-any packet-switched,
multistage or indirect network similar to an Omega network". An advantage of this network
is that the bisection bandwidth increases linearly with the size of the system (in contrast
with direct networks such as rings, meshes and tori), so that it guarantees system scalability.
The core of the network is a crossbar chip offering 8 bi-directional ports, that can be used to
build small SP2 systems. For larger systems, boards composed of two stages with 4 of these
chips each (for a total of 16 bi-directional ports) are used. These systems always have at least
one stage more than necessary, in order to provide redundant paths (at least 4) between any
pair of nodes. The reader can find in [Age95, Stu95] sketches of configurations of systems
with 16, 48, 64 and 128 nodes.
A frame is a building block for SP2 systems. Each frame contains a switch board and up to 16
nodes. The SP2 available at C4 has two frames of 16 nodes each. One of the frames is used
to run sequential and batch jobs, while the second one is available for running parallel
programs. This is the one we used for the experiments reported in this paper. Originally the
system had a communication subsystem with the following characteristics:
. The adapters were built around an Intel i860 processor with 8 MB of RAM. The
theoretical peak transfer ability of the adapter was 80 MB/s, although the
achievable maximum was 52 MB/s, due to overheads associated with accessing and
managing the Micro Channel.
. The links to the high-performance switch provided 40 MB/s peak bandwidth in each
direction (80 MB/s bi-directional), with a node-to-node latency of 0.5 µs for systems
with up to 80 nodes.
Recently this system has been upgraded. The information about the new system available
when writing this report was very limited, but we were able to confirm that the new adapters
are built around a PowerPC 601 processor, allowing the doubling of the peak transfer
bandwidth-reaching 160 MB/s. The new switch offers 300 MB/s peak bi-directional
bandwidth, with latencies of less than 1.2 µs for systems with up to 80 nodes [Gar96].
Regarding the software environment, each SP2 node runs a full version of AIX, IBM's
version of UNIX. It includes all the UNIX features, plus specific tools and libraries for
programming and executing parallel programs. Our first tests on C4's SP2 were carried out
with version 3 of AIX, which included MPL, an IBM-designed library for parallel
programming using the message-passing paradigm [Sni95]. MPL is actually an interface that
can be implemented over several communication subsystems. In particular, it can run over a
software layer designed to make the best use of the high-performance switch. Alternatively,
MPL might also run over IP (and, thus, over almost any communication media, including the
switch). In this environment, an MPI library (MPICH, see [Bri95]) was also available, but it
was an additional software layer over MPL.
With the introduction of AIX version 4, IBM decided to adopt MPI as the "native"
language for programming SP2 systems, replacing the non-standard MPL. This way,
applications programmed with MPI could run with less overhead (compared to the previous
version). The introduction of the new switch was accompanied by a new version of the MPI
implementation (although AIX itself remained without significant changes), in order to take
advantage of the characteristics of the new hardware. MPL is still available to ease
migration and to ensure the usability of old code. Both MPI and MPL can be used from
Fortran 77 and C programs.
In this paper we will report the results obtained running the experiments and benchmarks
described in section 3 with the following three configurations of C4's SP2 (Table 1):
Configuration   Communication subsystem    Software
o-v3            Old switch and adapters    AIX v3. MPICH implementation of MPI (on top of MPL)
o-v4            Old switch and adapters    AIX v4. Native version of MPI
n-v4            New switch and adapters    AIX v4. Native version of MPI
Table 1. SP2 configurations under test.
3 Experiments and Benchmarks
In order to make a fast-but fair-comparison between the two versions of the SP2, we
selected and ran a small collection of test programs; most of them have already been used by
other researchers, while some others have been prepared by us. When selecting these
programs our aim was to perform a progressive evaluation of the machines, starting with
simple, point to point communications and then going through collective communications,
numeric kernels and fine-grain, non-numeric applications. The measurements obtained from
each experiment have provided information about the latency and throughput of the
communication channels and about the complete network. This information can be used as a
first-order indicator for predicting how a change in the communication subsystem will affect
the execution times of parallel applications. In the next subsections the selected benchmarks
are briefly described.
3.1 Point to Point Communication
A first group of tests are based on the code provided by Dongarra to characterize point to
point communications using MPI [DD95]. Two processors, 0 and 1, engage in a sort of ping-
pong, with processor 0 in charge of the measurements. This processor reads the value of a
wall-time clock before invoking a MPI_Send operation and then blocks in a MPI_Recv
(meanwhile, processor 1 performs the symmetric operations). Once the latter operation
finishes at processor 0, the clock is read again. Thus, the delay of a two-message interchange
(one in each direction) has been measured; the latency is computed as one half of this time.
The achieved throughput is also computed, considering the latency and the message size.
These operations are done a few times to avoid warm-up effects, and then another 1000 times
to average results. The message size is provided as an input parameter.
The program measures minimum, maximum and average latency and throughput. We
have considered average values, because they are more representative of the performance the
user can obtain from the machine; other authors [XH96] consider minimum values for
latency because they are supposed to be free from the influence of the operating system and
other users. In any case, minimum and average values are very close in most cases.
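For illustration, a minimal sketch of such a ping-pong measurement is given below; it is not the code of [DD95], and the warm-up count and default message size are choices made only for this sketch. Latency is taken as half the round-trip time and throughput as the message length divided by that latency, as described above.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define REPS   1000   /* timed repetitions, as in the test described above */
#define WARMUP 10     /* untimed iterations to avoid warm-up effects       */

int main(int argc, char **argv)
{
    int rank;
    int len = (argc > 1) ? atoi(argv[1]) : 1024;   /* message size in bytes */
    double sum = 0.0;
    MPI_Status st;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    char *buf = malloc(len);

    for (int i = 0; i < WARMUP + REPS; i++) {
        if (rank == 0) {
            double t0 = MPI_Wtime();
            MPI_Send(buf, len, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, len, MPI_BYTE, 1, 0, MPI_COMM_WORLD, &st);
            if (i >= WARMUP)
                sum += (MPI_Wtime() - t0) / 2.0;   /* one-way latency of this exchange */
        } else if (rank == 1) {
            MPI_Recv(buf, len, MPI_BYTE, 0, 0, MPI_COMM_WORLD, &st);
            MPI_Send(buf, len, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
        }
    }
    if (rank == 0) {
        double lat = sum / REPS;                   /* average latency in seconds */
        printf("L = %d bytes: latency %.1f us, throughput %.2f MB/s\n",
               len, lat * 1e6, (len / lat) / 1e6);
    }
    free(buf);
    MPI_Finalize();
    return 0;
}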
3.2 Collective Communications
Several MPI collective operations have been the object of detailed measurement, similar
to that taken with point to point communication. We will pay special attention to two of
them, broadcast and reduction, because the experiments performed with the others exhibit
very similar behavior. In addition, a random traffic test has been performed. In all these
experiments several messages might compete for the network resources as well as for
accessing a common destination. Next we describe the details of the tests.
The broadcast test involves all the processors performing a broadcast from processor 0
(the root of the operation) to all of them (including itself). The test performs 1000
iterations, again to average values. In each iteration, all the processors perform a broadcast
(MPI_Bcast), with processor 0 designated as root, and then barrier-synchronize
(MPI_Barrier). To compute the broadcast delay, the root measures the time from the moment
the broadcast starts until the time the barrier finishes, and then subtracts a pre-computed
barrier delay. The obtained throughput figures indicate the amount of information received
by any of the participants.
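The timing loop of this test can be sketched as follows (again only an illustration, not the benchmark code itself; the pre-computed barrier delay is assumed to come from a separate loop that executes only MPI_Barrier).

#include <mpi.h>

/* Sketch of the broadcast timing described above: broadcast, barrier,
 * subtract the pre-computed barrier delay, average at the root. */
double time_bcast(void *buf, int len, int reps, double barrier_delay)
{
    int rank;
    double sum = 0.0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int i = 0; i < reps; i++) {
        double t0 = MPI_Wtime();
        MPI_Bcast(buf, len, MPI_BYTE, 0, MPI_COMM_WORLD);   /* process 0 is the root */
        MPI_Barrier(MPI_COMM_WORLD);
        if (rank == 0)
            sum += (MPI_Wtime() - t0) - barrier_delay;      /* broadcast delay only  */
    }
    return (rank == 0) ? sum / reps : 0.0;                  /* average at the root   */
}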
The reduction test is like the previous one, but using MPI_Reduce instead of MPI_Bcast.
In this case, the throughput figures consider the amount of information sent by any of the
participants. MPI offers a collection of pre-defined operations (MPI_MAX, MPI_MIN,
MPI_SUM, etc.) to be performed in the reduction, and also allows the user to define new
operations. For this test, a user-defined, null operation, has been performed.
The last test of this group, random-2 sets , separates the available processors into two
groups: those with even rank and those with odd rank. An even processor sends data to
anyone of the odd processors, randomly chosen, which responds immediately. The delay of a
two-way interchange, and the achieved throughput, is computed like in the point to point
case.
3.3 Parallel Applications
In order to assess the behavior of the SP2 when running complete applications, we have
selected four benchmarks (MG, LU, SP and BT) from the NAS Parallel Benchmarks (NPB)
version 2 [Bai95], a shallow water modelling code (SWM) from the ParkBench suite of
benchmarks [WF95], and a parallel simulator (PS) developed by our group [Mig95]. Next we
briefly describe these programs.
Version 2 of the NPB can be obtained in source code form (in contrast with version 1),
written in Fortran 77 plus MPI. The programs have been compiled and executed without any
change in the source code (the appropriate makefiles are distributed with the code). Each benchmark can operate with several input problem sizes
(number of grid points). The NPB specify three classes (A, B and C) depending on the
problem size. In our experiments we used Class B (i.e., medium-size problems).
MG uses a multigrid method to compute the solution of the three-dimensional scalar
Poisson equation. LU is a simulated computational fluid dynamics application which uses
symmetric successive over-relaxation to solve a block lower triangular-block upper triangular
system of equations resulting from an unfactored implicit finite-difference discretization of
the Navier-Stokes equations in three dimensions. SP and BT are simulated computational
fluid dynamics applications that solve systems of equations resulting from an approximately
factored implicit finite-difference discretization of the Navier-Stokes equations. BT solves
block-triangular systems of 5x5 blocks, while SP solves scalar pentadiagonal systems
resulting from full diagonalization of the approximately factored scheme [Bai95].
SWM is a parallel algorithm testbed that solves the non-linear shallow water equations
on a rotating sphere using the spectral transform method. It has been programmed, using
the message-passing paradigm, in Fortran 77 plus MPI, and forms part of the ParkBench
benchmark suite. The developers are Patrick H. Worley (Oak Ridge National Laboratory)
and Ian T. Foster (Argonne National Laboratory). This benchmark includes several input
files to select the problem size and the algorithms to use. We have used the medium-size
problem, which requires approximately 1 GB of workspace when run with 64 bit precision,
and 1000 Gflop. The default parallel algorithms are distributed Fourier transform and
distributed Legendre transform.
PS is a parallel discrete-event simulator developed by our group to evaluate the
characteristics of message passing networks with 2-D torus topology and cut-through flow
control. The parameters of the simulator are basically the problem size (in terms of number
of switching elements), the load of the network (in terms of a percentage of the maximum
theoretical bandwidth: that of the network bisection) and the number of time steps to
simulate. For the results reported in this paper, a torus of 32x32 routers is simulated for
40000 cycles. The load varies from 5% to 90%. This means that the number of messages
needed to perform the simulation varies from 2.5 million to nearly 9 million. Processes are
organized in a (logical) 4x4 torus. A process always communicates with its four logical
neighbors. Communication does not follow any particular temporal pattern. Messages are
short. The code has been written in C plus MPI.
4 Performance Evaluation
Latency and throughput are the basic parameters that characterize the applications' view
of the performance of a communication subsystem. Both in conjunction determine the
adequacy of a given system to execute a given type of parallel application. First, message
latency imposes restrictions on the granularity of applications and, second, throughput
imposes a limit on the maximum amount of information processes can interchange per unit
of time. For this reason, in the following sections we will analyze the results of the above
described benchmarks in terms of these two parameters, in order to show how the upgrade in
the SP2 communication subsystem has affected them.
In this preliminary attempt to evaluate the performance improvement achieved with the
new SP2 communication subsystem, we do not intend to perform an exhaustive analysis of
each and every MPI function. The results discussed here are just a minimum kernel, with the
aim of offering a first overview of the potential performance changes an application might
experience.
4.1 Point to Point Communication
A first approach to assess the communication performance of a parallel computer is to
measure the minimum time required to send a message between two processes located on
different nodes. For this reason, many performance evaluation studies concentrate on point
to point communication, in order to establish an initial comparison point among different
platforms [Cul96, DD95, Hoc94, XH96].
Table 2 and Figure 2 show the results obtained running the point to point test described
in the previous section. As can be observed in the table, the introduction of the new version of
MPI implies a reduction in the software overhead. As a consequence, start-up times are lower
and the latency is noticeably reduced when messages are short. However, for messages over 4
KB the new version performs worse (the explanation of this phenomenon can be found in
[Fra95]; we will deal with this later on in this section). The maximum achievable throughput
remains the same with the old and the new MPI implementation.
The introduction of the new switch clearly brings about a reduction of latency for all
length ranges. This latency reduction is specially significant for medium and long messages,
thus allowing the achievement of a higher throughput.
L (bytes)      Latency (µs)                    Throughput (MB/s)
               o-v3     o-v4     n-v4          o-v3    o-v4    n-v4
128            64.54    50.54    49.58         1.98    2.53    2.58
512            95.59    78.16    61.95         5.36    6.55    8.27
Table 2. Average values of latency and throughput for point to point communication. L
is the message length. Configurations (o-v3, o-v4, n-v4) are described in Table 1.
[Figure 2. Latency (µs) and throughput (MB/s) for point to point communication as a function of the message length, for the configurations o-v3, o-v4 and n-v4. Data from Table 2.]
A preliminary analysis of the results can be easily established through a characterization
of latency. In general, the latency T of a message of length L can be decomposed into two
components: the start-up time ( T h ), which is the time used by the message header to reach
the destination, and the spooling time ( T s ), which is the time required to transmit the
remainder of the message when the path from origin to destination has already been
established.
The spooling time T_s is usually modelled as a linear function of L. Thus, T(L) can be
expressed as:
T(L) = T_h + T_b L                                                   (2)
where T_b represents the time required to transmit a byte, once the path has been
established. The other parameter of interest, throughput, is defined as:
TH(L) = L / T(L)                                                     (3)
In general, the asymptotic behavior of T as a function of L is as follows:
T(L) -> T_h when L -> 0,  and  TH(L) -> 1/T_b = THmax when L -> infinity            (4)
THmax being the maximum (asymptotic) throughput achievable from the communication
subsystem. From equations (3) and (4), we see that THmax can be calculated as 1/T_b.
The latencies of Table 2 can be fitted* to equation (2), which yields the maximum
throughputs (in MB/s, for uni-directional, point to point communications) of the three
configurations.
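The least-squares fitting mentioned in the footnote below reduces, for the model of equation (2), to an ordinary linear regression of the measured latencies on the message lengths; a minimal sketch (not the authors' fitting code) is shown next, returning estimates of T_h and T_b, from which THmax follows as 1/T_b.

/* Least-squares fit of T(L) = T_h + T_b * L to n measured pairs (L[i], T[i]). */
void fit_latency(const double *L, const double *T, int n,
                 double *T_h, double *T_b)
{
    double mL = 0.0, mT = 0.0, sLL = 0.0, sLT = 0.0;
    for (int i = 0; i < n; i++) { mL += L[i]; mT += T[i]; }
    mL /= n;  mT /= n;                       /* means of L and T              */
    for (int i = 0; i < n; i++) {
        sLL += (L[i] - mL) * (L[i] - mL);    /* variance term                 */
        sLT += (L[i] - mL) * (T[i] - mT);    /* covariance term               */
    }
    *T_b = sLT / sLL;                        /* slope: time per byte          */
    *T_h = mT - (*T_b) * mL;                 /* intercept: start-up time      */
}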
The system behavior is not the same for short messages as for long messages. To see this
more clearly, a detailed set of experiments have been carried out with the new switch with
the results shown in Figure 3. In this case, minimum latency values are represented, because
average values are too noisy to produce a clear graph.
* All the curve fittings have been done using least squares.
[Figure 3. Minimum latency (µs) and maximum throughput (MB/s) with the new switch, as a function of the message length (KB), for messages smaller than 192 KB.]
A detailed analysis of the data indicates the existence of three different regions, defined
by the message length (see Figure 4). In each one of these 3 regions, T can be fitted to
equation (2), but with different values of T_h and T_b (the resulting fits are equations (5) to (7)).
[Figure 4. Message latency (µs) versus message length (KB) in the three different message length regions.]
It is interesting to observe how T h is smaller in the regions that correspond to smaller
messages, while T b is smaller in the regions that correspond to longer messages. This means
that the system is optimized to minimize latency of short messages while maximizing
throughput for long messages.
The above mentioned reference [Fra95] explains the change from region 1 to region 2
perfectly: for short messages, an "eager" protocol is used (i.e., a message is sent to its
destination immediately), while for longer messages, a "rendezvous" protocol is used (i.e., a
message is sent only when the receiver node agrees to receive it). Switching to the
rendezvous protocol incurs higher start-up costs, but reduces the number of times
information is copied, thus reducing the cost per byte. This change of protocol comes with
IBM's native version of MPI, so the discussion is also valid for experiments ``o-v4''. In the
third region, the behavior of the latency is not as linear as in regions 1 and 2. The figures
show a clear saw-edge effect, with jumps every 16 KB. At the moment, without any
information from the manufacturer, we are unable to offer a reasonable hypothesis to explain
this behavior.
From the set of equations, it is easy to see that the maximum throughput is achieved in
region 3 (long messages), and can be computed from the parameter T b in equation (7), while
the minimum latency is obtained in region 1 (short messages) and is dominated by the
parameter T h in equation (5).
Going back to the comparison among several SP2 configurations, we can consider, for
example, the data of Table 2 for a quite short message. It can be seen that the
latency is basically the same with the old and with the new switch. In contrast, for long
messages, the latency is noticeably reduced, and so the throughput is
increased. In short, the effect of the communication subsystem upgrade has been: T_h-n ≈ T_h-o
and THmax-n ≈ 2.4 THmax-o. Therefore, the upgrade has basically had an effect on the
spooling time, which has been reduced to less than one half of the original. However, the
start-up time used by the message header to reach its destination is almost the same.
Summarizing, the channel throughput has been doubled, but the start-up time remains the
same.
These results, in a first approach, tell us that coarse grain parallel applications requiring
the interchange of very large data structures will experience a significant performance
improvement. This improvement will also be noticeable at the operating system level, and
when using the system to increase the throughput of batch tasks-when the nodes run in a
de-coupled fashion.
In contrast, those applications that require a frequent interchange of short messages will
not experience a significant reduction of execution time after the upgrade. In fact, it happens
that the message size required to obtain a reasonable performance has been increased after
the change. Hockney defines L 1/2 as the length of the messages that allows a utilization of
one half of the maximum channel bandwidth [Hoc94]. With the old subsystem, L 1/2 was
approx. 3000 bytes (see Table 2). Now it has been increased to around 32 KB (see Figure
3). Therefore, the minimum message size required to efficiently run parallel applications has
been increased by an order of magnitude. If the goal is to increase the performance of the SP2
when running applications requiring frequent interchange of short messages, it is imperative
to reduce the start-up time in the same proportion as, or even more than, the maximum
throughput has been increased.
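Under the linear model of equation (2), Hockney's half-performance length follows directly from the fitted parameters; the relation below is only the model prediction, and tabulated values of L_1/2 (such as those in Table 3, which come from measurements and fits over particular message-size ranges) need not coincide with it exactly.

\[
TH(L_{1/2}) = \tfrac{1}{2}\,TH_{max}
\;\Longleftrightarrow\;
\frac{L_{1/2}}{T_h + T_b\,L_{1/2}} = \frac{1}{2\,T_b}
\;\Longleftrightarrow\;
L_{1/2} = \frac{T_h}{T_b} = T_h \cdot TH_{max}.
\]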
Table 3 summarizes the parameters T_h, THmax and L_1/2 for several well-known MPPs,
including the three considered SP2 configurations.

Machine                   T_h (µs)   THmax (MB/s)   L_1/2 (bytes)
Convex SPP1000 (PVM)      76         11             1000
Convex SPP1200 (PVM)      63         15             1000
Cray T3D (PVM)            21         27             1502
Intel Paragon             29         154            7236
Meiko CS2                 83         43             3559
SP2-o-v4                  44         35             3000
Table 3. Latency and asymptotic throughput of some parallel computers. Data for the
Convex, Cray, Paragon, KSR-1 and Meiko taken from [DD95].
4.2 Collective Communication
We have measured the characteristics of some representative communication patterns
involving more than two processors: several MPI collective operations, and random traffic
between two sets.
Regarding the broadcast collective operation, the experiments offered us the information
summarized in Figure 5. It can be seen that the behavior is very much like that of the point
to point case. The change in software means a change in the behavior of the latency curves:
the use of the native MPI does not always reduce latency. Regarding the hardware upgrade,
the graphs show a significant improvement-although it is only significant for messages
longer than about 4 KB. Even with large enough messages, the latency reduction of the
complete broadcast operation is not as spectacular as it is in the point to point case and,
therefore, the throughput increase is not that impressive.
The performance of the remaining collective operations (including the reduction ,
described in the previous section), have been measured applying the same strategy used for
broadcast, and show very similar behavior. Therefore, the same comments are applicable-so
they are not repeated.
[Figure 5. Latency (µs) and throughput (MB/s) for the broadcast operation as a function of the message length, for the configurations o-v3, o-v4 and n-v4.]
The results obtained running the experiment with random communication between two
sets of 8 processors each have been plotted in Figure 6. As in the above-mentioned broadcast
case, significant latency reductions are achieved only for message sizes over 4 KB.
[Figure 6. Latency (µs) and throughput (MB/s) for the random-2 sets test as a function of the message length, for the configurations o-v3, o-v4 and n-v4.]
4.3 Parallel Applications
As we have commented in Section 3, the execution times of the 6 benchmark applications
have been measured, in order to get an insight into how the changes in the communication
subsystem may affect real parallel applications which combine computation with
communication. From the data analyzed in the previous sections, the reader can infer that
reductions in execution times will be marked only in those cases where the parallel
applications require processes to interchange very long messages.
[Figure 7. Results of the NPB and SWM benchmarks: execution time (seconds) and performance (Mflop/s) of MG, LU, SP, BT and SWM, with the old and the new communication subsystems.]
The graphs of Figure 7 show the execution time (in seconds) and performance (in Mflop/s)
of BT, LU, MG, SP and SWM. We should mention that only the native version of MPI has
been used for these tests; therefore, it is basically the change in the hardware that affects the
performance. Results are as expected: improvements are quite modest in most cases. Things
are not better with the parallel simulator: no performance increase has been measured.
Figure 8 shows the execution times when varying the load of the simulated model (which is
an indicator of the number of messages that the application manages). As a reference point
for comparison, execution times of this experiment on 16 nodes of an Intel Paragon are 2.5
times those achieved on the SP2.
[Figure 8. Execution times of the parallel simulator as a function of the load, for the old and the new communication subsystems.]
To summarize this section, we can state that a change in the communication subsystem
will be effective for running parallel applications only if the designers are able to achieve
important reductions on the start-up time, i.e., the latency for short and medium messages.
5 Conclusions
Our aim when writing this paper has been to offer a preliminary evaluation of the effect
that the upgrade introduced by IBM in the communication subsystem of the SP2 will have on
the performance of parallel applications. It should be clear that we don't pretend this
analysis of the upgraded system to be comprehensive-it is mainly to offer a snapshot of its
behavior. To do so, a collection of experiments have been performed using an SP2 frame with
running some benchmarks that measure the performance of specific
communication operations, plus others that represent typical parallel applications. The
purpose of these experiments was to analyze progressively the effects of the upgrade on the
overall system performance: from point to point communication, to collective operations,
kernels of numeric applications (NAS Parallel benchmarks, SWM), and a non-numeric
application with fine grain characteristics.
After analyzing the obtained measurements, we can conclude that an important
improvement has been achieved in the bandwidth of the communication channels, which
allows applications to reach throughput values much higher than those achievable with the
old network-although this is only possible when applications interchange long enough
messages (10 KB - 1 MB). In absolute terms, for point to point communications, the
asymptotic throughput of the new communication subsystem more than doubles the previous
one. In contrast, if messages are short, the improvement in bandwidth does not translate into
better performance, because the start-up time has not been reduced. In other words, the
latency for short and medium messages has not changed significantly from the old to the new
system. Consequently, only those applications requiring a frequent interchange of massive
amounts of information will experience clear reductions in execution time.
The results shown in this report confirm the conclusions of other researcher's studies: the
real bottleneck in the MPP communication lies in the message-passing software and in the
message interfaces that attach the nodes to the interconnection network. Therefore, these are
some points where the MPP designers should focus their interest. In our opinion, (1) the
overhead of message passing software has to be minimized, even if this means introducing
changes in the architecture of node processors and (2) the message interface should be
located as close to the processor as possible, connecting it to the system bus, or even to the
bus between the off-chip cache and the processor.
6 Acknowledgements
We want to express our grateful acknowledgement to the following institutions:
. C4 (Centre de Computació i Comunicacions de Catalunya) for providing access to the
machine under test, and for the technical support.
. CICYT. This research has been done with the support of the Comisión Interministerial
de Ciencia y Tecnología, Spain, under contract TIC95-0378.
. DGICYT, the Dirección General de Investigación Científica y Técnica, which, through a grant,
allowed J.A. Gregorio to stay at UCI as Visiting Associate Researcher.
. The Department of Electrical and Computer Engineering, University of California at
Irvine, for providing support and equipment access.
--R
SP2 System Architecture
The NAS Parallel Benchmarks 2.0.
User's Guide to MPICH
Assessing Fast Network Interfaces
MPI Programming Environment for IBM SP1/SP2
IBM Power Parallel Division
PVM: A User's Guide and Tutorial for Networked Parallel Computing.
The Communication Challenge for MPP: Intel Paragon and Meiko CS-2
Computer Architecture.
An Empirical Evaluation of Techniques for Parallel Simulation of Message Passing Networks
Message Passing Interface Forum.
Myricom's Myrinet information.
The Communication Software and Parallel Environment of the IBM SP2
The SP2 High-Performance Switch
PSTSWM v4.
Modeling communication overhead: MPI and MPL performance on the IBM SP2
--TR
--CTR
Sangman Moh , Chansu Yu , Ben Lee , Hee Young Youn , Dongsoo Han , Dongman Lee, Four-Ary Tree-Based Barrier Synchronization for 2D Meshes without Nonmember Involvement, IEEE Transactions on Computers, v.50 n.8, p.811-823, August 2001
Jess Labarta, Sensitivity of Performance Prediction of Message Passing Programs, The Journal of Supercomputing, v.17 n.3, p.291-298, Nov. 2000
Zoltan Johasz, An Analytical Method for Predicting the Performance of Parallel Image Processing Operations, The Journal of Supercomputing, v.12 n.1-2, p.157-174, Jan./Feb., 1998
J. A. Gregorio , R. Beivide , F. Vallejo, Modeling of interconnection subsystems for massively parallel computers, Performance Evaluation, v.47 n.2, p.105-129, February 2002
Manuel Prieto , Ignacio M. Llorente , Francisco Tirado, Data Locality Exploitation in the Decomposition of Regular Domain Problems, IEEE Transactions on Parallel and Distributed Systems, v.11 n.11, p.1141-1150, November 2000 | massively parallel computers;message-passing network;performance evaluation;IBM SP2;communication subsystem |
614294 | Visualization of Multidimensional Shape and Texture Features in Laser Range Data Using Complex-Valued Gabor Wavelets. | This paper describes a new method for visualization and analysis of multivariate laser range data using complex-valued non-orthogonal Gabor wavelets, principal component analysis and a topological mapping network. The initial data set that provides both shape and texture information is encoded in terms of both amplitude and phase of a complex valued 2D image function. A set of carefully designed oriented Gabor filters performs a decomposition of the data and allows for retrieving local shape and texture features. The feature vector obtained from this method is multidimensional and in order to evaluate similar data features, further subspace methods to transform the data onto visualizable attributes, such as R, G, B, have to be determined. For this purpose, a feature-based visualization pipeline is proposed consisting of principal component analysis, normalization and a topological mapping network. This process finally renders an R, G, B subspace representation of the multidimensional feature vector. Our method is primarily applied to the visual analysis of features in human faces, but is not restricted to that. | Introduction
One of the major challenges of vision research was, and still is, to develop methods for the automatic
modeling of complex geometric objects or scenes. In spite of countless efforts during the
last decades [2], [42], [32], [36] there is not yet a generic solution to this problem. Electronic
photogrammetry, however, has invented active vision methods, like laser range finders [40], that
are widely used to yield complex surface shape information elegantly and efficiently. Once only
available in military applications for robust terrain following navigation systems, different types
of laser scanners nowadays serve as helpful tools for automatic modeling tasks in many different
areas [57]. Recent advancements in the development of laser scanners allow the capture of shape
and color information of the object in question [59]. Hence, sophisticated methods to encode and
analyze range image maps have become essential requirements [5], [62].
Human faces in particular are attractive objects for laser range scanners [53], [45], since they contain
complex geometric features and intrinsic symmetries on the one hand [33], and are very appropriate
for human visual analysis on the other hand. Accurate geometric data from human faces
can be used to enhance the information in simple photographs and allow for the development of
higher quality techniques for facial surgery, facial recognition, facial reconstruction, simulation
of aging, etc. The question arises how to encode this shape and texture data into representations
that visualize important data features.
A lot of research has been done to find appropriate geometric model descriptions for regular [18]
and irregular [19], [47] surface data and nonuniform rational B-splines (NURBS) have been
shown to be a flexible scheme for controlling complex shapes. Yet, it turns out to be very difficult
to obtain robust features from such descriptions. Moreover, when considering range data, there
needs to be a way to also encode and analyze texture. Apart from that, wavelets and multiresolution
analysis, as proposed by [14], [29] or [41], have become very attractive for many applications.
In computer graphics mostly orthonormal wavelets and their separable 2D and 3D extensions
have been used for hierarchical data decomposition and approximation. Taking advantage of the
compact coding scheme and of the local support of the basis functions in [44], [26] and [60], different
approaches for volume rendering and isosurface reconstruction have been suggested. [51],
for instance, used wavelets for fast radiosity computations and [34] for control of 3D morphing.
Especially in image processing, the power of wavelets has been investigated for feature extraction
and analysis [7], [11], [55]. While in [11] orthonormal wavelets are stressed, [15] employs
nonorthogonal and nonseparable 2D-Gabor wavelets for image analysis. Other interesting research
is reported by [8], who combines graph matching with Gabor functions for face recogni-
tion. It is evident that in most cases the requirement for compact support and orthonormality
along with a smooth wavelet shape and a dense spatial orientation staggering for high performance
in both analysis and coding presents a problem. In those cases, where the analysis properties
of the basis function are superior to the approximation and coding behavior, non-orthogonal
wavelets have to be considered as well.
This paper addresses two fundamental areas of scientific visualization: First, it describes how to
extract multidimensional features from complex data sets, such as laser range images, using a
complex-valued decomposition scheme with Gabor wavelets. Secondly, it provides a generic
scheme to visualize these features in orthogonal subspaces. For this purpose we consider shape
and texture as amplitude and phase of a complex-valued 2D image function and perform a hierarchical
decomposition with a carefully designed set of Gabor wavelets. Although there is no
straightforward way to perfectly reconstruct the initial data set from the filter pyramid, the Gabor
wavelet has been shown to be a much more powerful feature descriptor than an orthonormal
wavelet [25]. Specifically in contrast to the non-separable 2D Gabor filters, the tensor product
extensions of orthogonal or biorthogonal wavelets [55] accomplish compact coding, but their
directional selectivity is much poorer with regard to the diagonal components. Moreover, Gabor
wavelets meet uniquely the lower bound of space-frequency resolution as it is stated by the Heisenberg
principle. Hence, Gabor wavelets provide advantages for any type of feature analysis,
that bases on local spectral estimates, such as the one we introduce in this paper. The complex-valued
interpretation of the image function allows to encode the range and texture information
elegantly and to compute the Gabor decomposition efficiently via FFT methods. In particular,
convolutions with complex-valued Gabor functions turn out to become simple multiplications
with Gaussians in Fourier space.
However, once the decomposition of the data is computed, the result is a multidimensional feature
vector at each surface point and the problem arises how to inspect it visually. To this end,
subspace projections have to be applied in terms of a principal component analysis of the wavelet
features. After normalization we use a modified topological mapping neural network, as proposed
in [23] or [28] to accomplish cluster analysis and mapping onto the R,G,B color space. The
generic feature-based visualization pipeline, introduced here, exemplifies to some degree the demand
for complex multidimensional feature-based visualization techniques, as stated in [50].
The organization of the paper is as follows: First of all we briefly elaborate the mathematical basics
of the Gabor function and its relationship to non-orthogonal wavelet decompositions. This
section also elucidates how to build 2D Gabor wavelets in the frequency domain. Section 3 sheds
light on our encoding scheme for laser range data sets and illustrates how to get local data features
from this transform. Chapter 4 addresses the feature based visualization pipeline we employ with
special emphasis on principal component analysis and topological mapping networks. Finally,
in chapter 5 we present results on the performance of our method applied to range data of human
faces. R,G,B representations of multidimensional face features are depicted as derived from the
Gabor wavelets and how this method generalizes onto different data sets is outlined.
2 Mathematical Foundations
2.1 The Gabor Function
The Gabor function [21] used to decompose the range data in our approach is of fundamental
importance in signal processing [1] and has been widely used in a broad range of applications
[7], [15]. It provides an effective way to analyze images and has been elaborated as a framework
for understanding the orientation and spatial frequency selective properties of simple cortical
neurons [16]. The use of Gabor functions for data analysis therefore has its foundations both in
signal analysis and in biology. Gabor functions of different frequency range and orientation can
provide a non-orthogonal function basis for any finite energy function f ∈ L²(Rⁿ) [3], and one way
to rapidly obtain data features is to compute a Gabor transform as an expansion of f. The 2D version
of the complex-valued Gabor function can be expressed as below. Its basis consists of a
Gaussian envelope of a harmonic oscillation term. Important inherent advantages are infinite
smoothness and exponential decay in frequency, but it has to be stated again that there is no
straightforward way to reconstruct the data from the expansion basis. The Gabor function can
be defined in both the spatial and frequency domain, although the latter one turns out to be easier.
In Cartesian coordinates, we obtain ψ_G(x,y):
(1)
(x_0, y_0) stand for the translation of its origin and (u_0, v_0) specify the modulation coordinates in
the frequency domain. The effective width and length are given by (σ_x, σ_y), which specify the elliptic
envelope.
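For orientation, one widely used way of writing the complex 2D Gabor function with these parameters is shown below; the normalization and the constant in the Gaussian exponent differ between authors, so this should be read as a representative form rather than the exact expression of eq. 1.

```latex
\psi_G(x,y) \;=\; \exp\!\left(-\pi\!\left[\frac{(x-x_0)^2}{\sigma_x^{2}}
                 +\frac{(y-y_0)^2}{\sigma_y^{2}}\right]\right)\,
                 \exp\!\bigl(2\pi i\,[\,u_0\,(x-x_0)+v_0\,(y-y_0)\,]\bigr)
```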
Gabor functions are localized in both space and in frequency and they moreover uniquely achieve
the theoretical lower bound of uncertainty as it is dictated by the Heisenberg relation:
(2)
The resolutions in space (Δx, Δy) and in frequency (Δu, Δv) are computed from the 2nd-order moments and
correspond to the variances and covariances of the Gabor functions; here G(u,v) denotes the Fourier
transform of ψ_G(x,y), and the first-order moments give the centers in space and frequency.
Since eq. 1 does not yet account for spatial orientation, the frequency ω = (u_0² + v_0²)^{1/2} and the
orientation θ = arctan(v_0/u_0) of the Gabor function are defined by the position of the modulation
in the frequency plane.
The coordinates x and y can be rotated by replacing them with x cos θ + y sin θ and −x sin θ + y cos θ,
respectively. Note that rotation leads to cross terms xy and to a nonseparable expression.
The complex oscillation term represents the orthogonal quadrature components sine and cosine.
Fig. 1 illustrates the real (symmetric) and imaginary (antisymmetric) component of the Gabor
function from eq. 1. The shape of both functions strongly resembles an oriented wavelet function.
Fig. 1. a) Real (cosine) and b) imaginary (sine) component of the complex-valued 2D Gabor
function in the spatial domain.
The effective number of oscillations under the Gaussian envelope is governed by the product of the envelope width and the modulation frequency.
Fortunately, the Fourier transform G(u,v) of # G (x, y) has exactly the same functional form with
inverted or interchanged parameters. For functions located at the origin it reduces to a simple
Gaussian in the frequency plane. Introducing the spatial frequencies (u, v), we obtain the corresponding
closed-form expression for G(u, v) given in eq. 7.
This relationship, which is depicted in fig. 2, shows the Fourier transforms of the sine and of the
cosine wavelet, respectively. Due to the symmetry and antisymmetry of the harmonics, the negative
Gaussian peaks cancel each other and the remaining function reduces to one Gaussian controlling
the wavelet's position, size and orientation. Evidently, implementations of Gabor filters
can be accomplished easily by employing eq. 7.
Fig. 2. a) Fourier transform of the Gabor function of fig. 1.
Its decomposition into b) sine and c) cosine components.
From there, it is straightforward to analyze data using Gabor functions. The so-called Gabor
transform [1], as it is often termed in the signal processing literature, turns out to be a short-time
Fourier transform (STFT) using a basis of Gabor functions. In the 2D case we obtain the expression
of eq. 8, where w is a Gaussian window of constant size (σ_x, σ_y).
2.2 Definition of a Gabor Family
The constant size of the envelope that is assumed in eq. 8 leads to a uniform partitioning of the
frequency plane when expanding f. Thus the localization of the Gabor transform is equal throughout
the entire spectrum. However, sophisticated data analysis should provide a high spatial resolution
for high frequencies and a lower spatial resolution for low frequencies. Therefore, we have to adapt
the size of the Gaussian window to its frequency position. Due to the scaling theorem of the Fourier
transform, shifting and increasing the width and length of the Gaussian in frequency space go
hand in hand with a different modulation and a decrease of the width and length of the wavelet's envelope
in the spatial domain. In fig. 3a both the frequency position and the effective width of the Gaussian were
doubled. Fig. 3b illustrates the corresponding real part of the wavelet. Obviously, we can generate
a whole family of self-similar functions of this type, simply by shifting and scaling a Gaussian
in the frequency plane.
Fig. 3. The effect of scaling and shifting in frequency space:
a) Gabor function of different size, position and frequency in the
Fourier domain.
b) Real part of the resulting wavelet in the spatial domain.
Hence, a scheme to construct a set of self-similar Gabor functions as an expansion basis for f can
be set up.
The idea of building an expansion basis from self-similar functions of different size and orientation
immediately leads to the generic wavelet transform, whose mathematical principles are described
in [13], [41], [37] or [14]. Basically, it can be considered as the inner product of any finite
energy function f with a set of self-similar basis functions ψ_{a,b}, which are derived from each other
by scaling and shifting one prototype using the parameters a and b, respectively. Its 2D representation
can be written as the inner product
< f, ψ_{a,b} >
with
ψ_{a,b}(x, y) = |a_x a_y|^{-1/2} ψ((x − b_x)/a_x, (y − b_y)/a_y).
The brackets <,> denote the inner product operation.
Its Fourier transform is given by
Ψ_{a,b}(u, v) = |a_x a_y|^{1/2} Ψ(a_x u, a_y v) e^{−2πi(u b_x + v b_y)}.
Note that the normalization forces the bases to satisfy Parseval's energy equivalence.
In wavelet theory, the bases are usually assumed to satisfy the constraints of orthonormality (eq. 12)
and band-pass behavior (eq. 13).
Note furthermore that, due to its definition, the Gabor function has an intrinsic DC fraction in
its real (cosine) component. Hence, any wavelet derived from it will not meet eq. 13. More specifically,
any two scaled and shifted versions of Gabor type will not satisfy eq. 12 either.
2.3 Frequency Decomposition with Gabor Wavelets
Depending on the job for which they are tailored, additional constraints on the bases are often
demanded, such as compact support, smooth shape, fast decay, closed-form descriptions, etc. It
has turned out that the construction of bases satisfying these criteria is a non-trivial optimization
process and usually a compromise has to be found, such as semi-orthonormality [56]. However, relaxing
the orthonormality and band-pass constraints, we can construct a wavelet basis using Gabor
functions of different scale, shift and orientation [15]. In this case, a complete set of self-similar
functions ψ_G^{mpqθ}(x, y) is defined by scaled, shifted and rotated copies of the mother wavelet.
Here, we assume circular scaling with a_x = a_y = a. The parameters represent transformations
of the mother wavelet in scale (a, m), translation (p, q) and orientation (θ). The index m runs from
lower to higher frequencies in our paper, since the wavelets are constructed accordingly as in appendix
A.
As stated earlier, the most effective way to construct a multiresolution wavelet basis is to find
appropriate criteria for specifying the Gaussian in the frequency plane. Once constraints for setting
a and # are found, a complete set of Gaussians represents our filter pyramid and the decomposition
is obtained by subsequent filtering and inverse Fourier transform. In our implementa-
0tion, we define the decomposition using a lower frequency bound # 0 and an angular resolution
Figure
4 shows how the frequency plane is decomposed by superimposing Gaussian filters.
The mathematical details of the construction rules we propose here are described in Appendix
A at the end of this paper.
Fig. 4. Decomposition of the frequency plane with Gabor wavelets. The
parameters are chosen according to Appendix A.
Let i(x,y) be a 2D image function and I(u,v) its Fourier transform. We obtain a set of convolution
products g_{m,i}(x,y) by multiplying G_{m,i}(u,v) with I(u,v) and applying a subsequent inverse Fourier
transform:
g_{m,i}(x, y) = ∫∫ G_{m,i}(u, v) I(u, v) e^{2πi(ux + vy)} du dv,   (16)
where G_{m,i}(u,v) denotes the Fourier transform of ψ_G^{mpqθ}(x, y) according to Appendix A.
This scheme provides a more effective way of building up the wavelet pyramid - a way which
is preferable to computing all inner products of the scaled and shifted Gabor functions
explicitly. This is because the convolution products computed by eq. 16 contain
the inner products of all shifted versions of a wavelet given at m and θ. The implementation
scheme described above corresponds to a filter bank, indicated in fig. 5.
Fig. 5. Filter bank, that implements the Gabor decomposition.
The method introduced represents the feature extraction module in our visualization pipeline and
can be employed to decompose laser range data sets.
It should also be noted here that wrap-around problems and cut-off errors arising from the FFT
of finite data sets have to be avoided. Therefore, we strongly recommend reflecting the data at
their boundaries and fading them out exponentially. All computations in this paper are performed
that way.
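To make the filter-bank view of fig. 5 concrete, the following sketch builds frequency-domain Gaussian filters and applies them to a (possibly complex-valued) image by pointwise multiplication of its spectrum and inverse FFT, as in eq. 16. The scale factor, number of orientations, lowest centre frequency and bandwidth below are illustrative assumptions, not the construction rules of Appendix A; the data is mirror-reflected at the boundaries as recommended above, while the exponential fade-out is omitted for brevity.

```python
import numpy as np

def gabor_filter_bank(shape, num_scales=4, num_orients=4, f0=0.05, a=2.0, bw=0.55):
    """Frequency-domain Gaussians approximating a Gabor wavelet pyramid.
    f0: lowest centre frequency (cycles/pixel), a: scale factor, bw: relative
    bandwidth.  These are illustrative choices, not the rules of Appendix A."""
    ny, nx = shape
    U, V = np.meshgrid(np.fft.fftfreq(nx), np.fft.fftfreq(ny))
    filters = []
    for m in range(num_scales):                  # m runs from low to high frequency
        fc = f0 * a ** m                         # centre frequency of scale m
        sigma = bw * fc                          # Gaussian width grows with frequency
        for i in range(num_orients):
            theta = i * np.pi / num_orients      # orientation of the wavelet
            uc, vc = fc * np.cos(theta), fc * np.sin(theta)
            filters.append(np.exp(-((U - uc) ** 2 + (V - vc) ** 2) / (2 * sigma ** 2)))
    return filters

def gabor_decompose(image, **bank_args):
    """Convolution products g_{m,i}(x,y) of eq. 16: reflect-pad the (complex) image,
    multiply its spectrum with each Gaussian filter, and transform back."""
    py, px = image.shape[0] // 2, image.shape[1] // 2
    padded = np.pad(image, ((py, py), (px, px)), mode="reflect")
    I = np.fft.fft2(padded)
    out = []
    for G in gabor_filter_bank(padded.shape, **bank_args):
        g = np.fft.ifft2(G * I)                  # complex response (real + imaginary part)
        out.append(g[py:py + image.shape[0], px:px + image.shape[1]])
    return out                                   # list of complex 2D responses
```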
3 Shape and Texture Coding using Gabor Wavelets
3.1 Characteristics of Laser Range Data
This section elaborates the decomposition of data sets using the Gabor wavelets introduced
above. Although our further investigations focus on laser range data, neither the decomposition
scheme nor the feature-based visualization pipeline is restricted to it. We will show that, in particular
when dealing with complex-valued data, this approach provides an elegant and efficient means of
visual data inspection. Consequently, any type of shape and texture information, such as terrain
data and aerial photographs, is well suited to this scheme.
We employ laser range data from a Cyberware laser scanner, which provides highly accurate
shape and color values of scanned objects. The data sets can be interpreted as R,G,B range images
defined on a cylindrical coordinate system, as depicted in fig. 6. The grid size is 512 x 480 at a
resolution in the range of about 0.3 mm. A typical scan is rendered in fig. 7, which shows the range
information (fig. 7a) and the color mapped as a texture onto the shape (fig. 7b).
The scanner moves a diffuse line light source around the object synchronously with the range detector.
As a result, the illumination is kept constant at each surface point and the texture information
can be interpreted in terms of local surface properties; it is not affected by specific illumination
conditions. It should be mentioned that the albedo of human facial skin is very high; a corresponding
reflection model has been developed in [31].
Due to the cylindrical coordinate system of the scanner, the data have to be projected into Cartesian
coordinates representing the 2D image function in (x,y). The projection can be considered
as taking the normalized z_S-coordinate of the scanner system as the y-coordinate of the image
and the angular coordinate φ_S as its x-coordinate, where (z_S, r_S, φ_S) denote the cylinder coordinates
of the scanning system. Consequently, this operation unrolls the cylinder onto a plane, as in fig. 6b.
Fig. 6. a) Coordinate system employed for data acquisition with the laser
scanner.
b) Transformation of the range data onto a rectangular grid.
Fig. 7. Shape and texture obtained from a female face (Sylvia) and its
encoding scheme.
a) Range rendered with Gouraud shading.
b) Texture-mapped color information.
c) Encoded range information.
d) Encoded grey-level texture information.
(raw laser range data courtesy of the Computer Graphics Center, Darmstadt, Germany)
3.2 Complex-valued Coding of Shape and Texture Data
The question arises how to encode both color and shape in order to decompose them efficiently
with the Gabor pyramid. A straightforward way would be to treat R,G,B and range separately
in terms of four different decompositions. This is however very expensive and provides highly
correlated and high-dimensional features. Referring back to section 2, the Gabor function is composed
of a symmetric and an antisymmetric oscillation term, which account for the real and imaginary
components of the data, respectively. Moreover, sine and cosine wavelets are orthogonal to
each other. Hence, in our approach range and color are interpreted as amplitude and phase of a
complex-valued image function that is convoluted with a complex-valued wavelet.
Let r(x,y), g(x,y), b(x,y), s(x,y) be the color and range information at a particular point (x,y). In
order to compute the amplitude a(x,y) we extract luminance according to the rules of colorimetry
[61]:
In fact, the CCD sensor is not calibrated explicitly, but most systems lie near the CIE standard
of that equation. The range function is taken immediately as phase of the image function and the
description of i(x,y) is obtained as:
with its real part i_R(x,y) and its imaginary part i_I(x,y):
The range function is assumed to be normalized between 0 and 1.
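A minimal sketch of this encoding step is given below; the luminance weights follow the usual CIE/BT.601 convention referred to above, and mapping the normalized range value linearly onto the phase interval [0, pi/2] is an assumption, since the exact phase scaling is not prescribed here.

```python
import numpy as np

def encode_complex(r, g, b, s, phase_scale=0.5 * np.pi):
    """Complex-valued image i(x,y): luminance as amplitude, normalized range as phase.
    The luminance weights are the usual CIE/BT.601 values; the phase scaling to
    [0, pi/2] is an illustrative assumption (the range s is normalized to [0, 1])."""
    a = 0.299 * r + 0.587 * g + 0.114 * b           # amplitude a(x,y) from colorimetry
    phi = phase_scale * s                           # range s(x,y) taken as phase
    return a * np.cos(phi) + 1j * a * np.sin(phi)   # i_R(x,y) + i * i_I(x,y)
```

The returned complex image is the quantity whose Fourier transform I(u,v) enters the decomposition of eq. 16.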
Fig. 7c,d also depicts how the data appear when encoded by means of phase and amplitude. Although
information is lost by transforming color into scalar-valued luminance, shape and
grey values are still superior to color information when extracting features from objects
[32]. Furthermore, the efficient encoding scheme provides computational advantages with
FFT methods.
Once the data is encoded as in eq. 19, we can compute its Fourier transform I(u,v) and apply the
Gabor decomposition, as defined by eq. 16. This is illustrated in fig. 8, which presents a decomposition
of the data from fig. 7. The convolution products g_{m,i}(x,y) are arranged according to the
orientation preference of their wavelet. The depth of the pyramid is M=4 at an angular resolution of
π/4, and the DC parts are presented in the middle of each picture. Due to the complex-valued
results of g_{m,i}(x,y), the inverse Fourier transform produces two resolution pyramids, one for the
real and one for the imaginary component, respectively. These pictures elucidate the orientation
and frequency selectivity of the wavelets. The low-pass functions at m=0 only account for rough
global features, since their spatial localization is rather poor. The high-pass wavelets at m=3, however,
reveal the fine-grain details in the data. Obviously, due to the non-orthogonality, the different
responses are correlated.
Fig. 8. Decomposition of Sylvia's image:
a) Real part of the Gabor pyramid.
b) Imaginary part of the Gabor pyramid.
Note that the pyramid is not subsampled in this picture. A strict representation according to the
definitions of the WT, however, would require further subsampling of the g_{m,i}(x,y) at the rate
given by the choice of the scale factor and the angular resolution.
3.3 Extracting Local Data Features
Evidently, this complex-valued decomposition pyramid from fig. 8 gives a multiresolution view
onto the data. Due to the localization properties of the Gabor function, a set of multidimensional
local data features can easily be derived from it just by evaluating the convolution products
g_{m,i}(x,y) at any point of interest. Fig. 9 illustrates the respective procedure of local feature extraction.
Here, we assume the decomposition pyramid arranged into a multidimensional set, where
the dimension index k ranges from 1 (DC part) to the total number of pyramid channels. In the further sections
of the paper we refer to a feature vector g_l as the set of scalar-valued coefficients, i.e. the real
and imaginary parts of the channel responses, evaluated at any data point according to the illustrations of fig. 9.
This interpretation of the features implicitly transforms the former complex-valued vector into
a scalar-valued one of double length. Hence, the following steps in our pipeline rely on scalar-valued
computations, although the PCA, the normalization and the Kohonen map could be extended to
complex values. This, however, would tie the method very closely to the specific application of this paper without
any gain.
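As a small illustration, the scalar-valued feature vector at a data point can be assembled from the complex channel responses as follows; stacking all real parts before all imaginary parts is an arbitrary ordering chosen here.

```python
import numpy as np

def feature_vector(responses, x, y):
    """Feature vector g_l at pixel (x, y): the real and imaginary parts of every
    pyramid channel response, giving a scalar vector of twice the channel count."""
    re = [resp[y, x].real for resp in responses]   # responses: list of complex 2D arrays
    im = [resp[y, x].imag for resp in responses]
    return np.array(re + im)
```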
Fig. 9. Extraction of local data features from the wavelet decomposition
Although the reconstruction of the data from the Gabor decomposition requires the non-trivial
computation of a dual frame [3], it is still of interest to locally expand the data, as in fig. 10. In
this picture, the feature vectors are superimposed only within the area indicated around the right
eye and all others are neglected. Range and amplitude values are computed according to eq. 19
and mapped as displacements (fig. 10a) and texture (fig. 10b) onto a plain cylinder. Obviously,
they encode important data features, such as curvature or texture modulations around the center
of interest.
Fig. 10. Local reconstruction of the face shape and texture superimposing
the wavelet features within the demarcation of fig. 9:
a) Range around right eye.
b) Texture around right eye.
4 Visualization of Multidimensional Face Features
4.1 Overview
In the section above, we proposed a method for obtaining multidimensional data features by decomposing
the data with multiresolution Gabor wavelets. Unfortunately, there is no immediate way to
visualize the topology of this feature space. Hence, we have to perform additional data analysis
steps to map the most important features down into a subspace that can be visualized.
The problem of visualizing multidimensional features in complex dataspaces is one of the key
issues in scientific visualization [43], [50] and is addressed in many applications, such as fluid
flow and tensor visualization [35], statistical data visualization [63] or multispectral imaging
[28]. The main point is to find orthogonal projection methods that preserve the most important
data features. One basic method that addresses this problem is the principal component analysis
(PCA). It optimizes the mapping procedure in a least-square error sense based on data statistics.
Consequently, we propose a visualization pipeline as depicted in fig. 11. This concept comprises
an embedded framework using different methods proposed by one of the authors in [25]
and [27]. Once the feature vectors g l are extracted from the decomposition, they are fed into a
subsequent processing pipeline of PCA-analysis, normalization and clustering into R,G,B color
space.
As mentioned earlier, the Gabor wavelets provide non-orthonormal expansions of the data and
the resulting feature vector is more or less strongly correlated, depending on the choice of the scale
factor and the angular resolution. The decorrelation can be accomplished by a further expansion into its principal components.
Fig. 11. Feature-based visualization pipeline employed.
This first breakdown of the number of data dimensions is controlled by thresholding the eigenvalues
associated with each eigenvector. The coefficients of the feature vector transformed by the eigenvector
matrix are decorrelated and have to be normalized. Visualization of the data is carried
out by a self-organizing neural network, as proposed in [23]. This network clusters the data and
maps them automatically onto the R,G,B color space with the constraint of coding similar data
features in terms of similar colors. The method has the advantage of reducing similar features
to a limited set of clusters, rather than visualizing a whole subspace. Hence, the results obtained
are much more expressive than an immediate mapping of the first three eigenvectors into R,G,B.
4.2 Visualizing Principal Components
Generally, dimensionality reduction is strongly connected with finding subspaces satisfying respective
optimization criteria. In statistical data analysis [20], we can find several techniques for
this task. One of the most famous methods, taking the optimization as an eigenvalue problem,
is the principal component analysis or Karhunen-Loève expansion. In general, this method aims
to find a subset of principal directions in a data set of any statistical distribution. The corresponding
basis vectors satisfy the constraint of orthonormality and the subspace defined by these vectors
diagonalizes the covariance matrix of the data. The transformation of an initial data set into
the space of principal components can be formulated as follows:
Let {g_l}, l = 1, ..., N, be a data set of dimension K and ḡ its mean vector.
The principal component analysis essentially looks for a set of orthonormal vectors e_k with the
constraints that e_i · e_j = δ_ij and that C e_k = λ_k e_k, i.e. the e_k and λ_k are the
eigenvectors and eigenvalues of the covariance matrix C, which can be
estimated as
C = (1/N) Σ_l (g_l − ḡ)(g_l − ḡ)^T.
Note that its dimension is K × K.
Hence, the set of eigenvectors diagonalizes the covariance matrix, i.e. expressed in the eigenvector
basis it reduces to diag(λ_1, ..., λ_K).
There are numerous methods for numerically handling this approach for huge covariance matrices
[20], such as singular value decomposition.
After solving this eigenvector problem, the feature vectors g_l have to be projected into the eigenspace
spanned by the e_k, and the coordinates in eigenspace are obtained as the inner products of g_l − ḡ
with the eigenvectors. Conversely, any datum g_l can be expanded as a linear combination of the
eigenvectors.
The dimensionality reduction can now be achieved by ordering the eigenvectors according to
the absolute value of their corresponding eigenvalue λ_k and by taking only the most significant
subset for the data expansion in eq. 27.
The vector of these eigenspace coordinates can be interpreted as the decorrelated feature vector.
The selected eigenvectors span a subspace that minimizes the average error, i.e. the information
lost by the reduction of the number of dimensions.
This method is very popular for any kind of signal analysis and image coding, e.g. for removing
correlation from data like face images [23].
The principles of PCA force most of the data energy to be concentrated in a few significant eigenvectors.
Therefore, once the PCA-based feature vectors are computed for each initial g_l, they
have to be normalized for further processing. Since the a priori probability of these vectors is
unknown, we assume a uniform distribution and normalize each of their components linearly to a
common range.
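A compact sketch of the PCA and normalization stage follows; keeping enough eigenvectors to retain a prescribed fraction of the total variance stands in for the eigenvalue thresholding described above, and the min-max rescaling of each retained coordinate is one plausible reading of the normalization step.

```python
import numpy as np

def pca_reduce(G, energy=0.99):
    """Rows of G are feature vectors g_l of dimension K.  Projects them onto the
    leading eigenvectors of the K x K covariance matrix and rescales each retained
    coordinate to [0, 1] (assumed uniform distribution)."""
    mean = G.mean(axis=0)
    C = np.cov(G - mean, rowvar=False)                 # covariance estimate
    lam, E = np.linalg.eigh(C)                         # eigenvalues in ascending order
    order = np.argsort(lam)[::-1]                      # reorder by decreasing lambda_k
    lam, E = lam[order], E[:, order]
    k = int(np.searchsorted(np.cumsum(lam) / lam.sum(), energy)) + 1
    alpha = (G - mean) @ E[:, :k]                      # decorrelated coordinates
    lo, hi = alpha.min(axis=0), alpha.max(axis=0)
    return (alpha - lo) / np.maximum(hi - lo, 1e-12)   # normalized feature vectors
```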
Fig. 12 shows the 6 most significant feature coordinates in eigenspace, evaluated over the image
plane (x, y), and the 3 least significant ones, as computed from the laser range data set of fig. 7.
The respective eigenvalues are indicated below the picture as well.
Fig. 12. a) Coordinates of the feature vectors of Sylvia's image encoded
in eigenspace (see also fig. 7).
One straightforward way to visualize the multidimensional feature space of the decorrelated vectors
as derived from the Gabor pyramid is to encode it into R,G,B. This is illustrated in fig. 13, where a smooth
color representation of the 3 most significant coordinates in eigenspace is depicted. Although this
presentation reveals a first sketch of similarities in the multidimensional feature space in terms
of similar colors, it does not yet provide a sophisticated feature extraction, and the visual interpretation
of the resulting images remains difficult. Hence, additional clustering algorithms are
required to group similarities in the data. This is accomplished by a topological mapping neural
network explained below.
Fig. 13. Distribution of the coordinates of the 3 most important eigenvectors
of Sylvia's image, encoded into R, G and B, respectively.
4.3 Topological Mapping
The 3D extension of the Kohonen map was introduced by the authors in [22] and employed in
different applications, such as [23] or [9]. It is basically a self-organizing network which is trained
with or without supervision. It aims at an organization of the input patterns into a topological structure
represented by its neurons, where the relations between different patterns are preserved as
much as possible.
The Kohonen map is a two-layered network. The first layer of neurons can be considered as similar
to a group of sensors picking up the data, fully connected to a second, three-dimensional layer: the
competitive layer. Figure 14 shows the topology of the network. The weights associated with the connections
are adjusted during training, where only one single neuron in the competitive layer can
be active at a time. This neuron represents the cluster to which the presented pattern belongs, in the spirit
of c-means clustering. Due to the training rules explained below, the Euclidean distance between
two neurons reacting to different input patterns can serve as a measure of the similarity
of the two patterns.
The network is trained by presenting the normalized feature vectors randomly to the input
layer of the network, whose connection weight vectors m_h of all competitive neurons h are initialized
with random values. We choose K input neurons according to the data dimension and define
a Euclidean distance d_h between the presented feature vector and m_h.
The neuron e with the minimum distance d_e = min_h d_h is then activated.
The updating of the weights m_hk associated with the neurons is only performed within a proximity
of e. This proximity N_e(t) is reduced with increasing training time t. The updating
conforms to eq. 32 and involves a time-dependent learning rate.
This rule refers directly to c-means clustering [17], where the m_h represent the centroids. The
time-dependent neighborhood can be described by rectangular regions that shrink with increasing training time.
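A sketch of this training loop for a three-dimensional competitive layer is given below; the exponential decay of the learning rate and the linearly shrinking cubic neighborhood are typical choices and should be read as assumptions rather than the authors' exact schedules.

```python
import numpy as np

def train_kohonen(features, grid=(8, 8, 8), steps=200000, lr0=0.5, rad0=4.0, seed=0):
    """3D Kohonen map: neurons on a cubic lattice, each with a weight vector m_h.
    The winner is the neuron with minimum Euclidean distance d_h; weights inside a
    shrinking rectangular neighborhood of the winner are pulled towards the input."""
    rng = np.random.default_rng(seed)
    coords = np.stack(np.meshgrid(*[np.arange(n) for n in grid], indexing="ij"),
                      axis=-1).reshape(-1, 3)            # lattice position of each neuron
    m = rng.random((len(coords), features.shape[1]))     # random initial weights m_h
    for t in range(steps):
        x = features[rng.integers(len(features))]        # randomly presented pattern
        d = np.linalg.norm(m - x, axis=1)                # distances d_h
        e = np.argmin(d)                                 # winning neuron e
        frac = t / steps
        lr = lr0 * (0.01 / lr0) ** frac                  # decaying learning rate
        rad = max(rad0 * (1.0 - frac), 1.0)              # shrinking neighborhood N_e(t)
        near = np.max(np.abs(coords - coords[e]), axis=1) <= rad
        m[near] += lr * (x - m[near])                    # c-means-like centroid update
    return m, coords
```

Labelling the three lattice axes with R, G and B, the lattice coordinates of the neuron that wins for a given surface point (scaled to [0, 1]) directly yield its color triplet.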
Fig. 14. a) 3D Kohonen feature map and the representation of its neurons
as entities in the RGB color space.
b) Class assignment to single neurons.
The network has two essential properties related to our problem:
First, it separates clusters of the presented data by mean vectors m h that are associated as weights
to the neurons. Secondly, it performs a topological ordering of the competitive neurons in the sense
that neighboring neurons in the layer represent similar clusters in multidimensional space and
thus it achieves a further dimensionality reduction.
Since the neurons in the competitive layer are ordered topologically, neighboring neurons react
to similar data vectors in the input space. The mapping can be interpreted as reduction of any K-dimensional
input pattern into 3 dimensions, preserving the topology of the data as much as pos-
sible. The resolution of this discrete 3D space is given by the number of competitive neurons,
i.e., by the clusters.
Referencing the axes of the cube with the primaries R, G, and B, each neuron of the competitive
layer represents a discrete entity in R, G, B space, i.e., it corresponds to a particular color triplet.
Thus, the similarity of the color provided by the reacting neuron refers to a neighborhood in K-dimensional
space.
Hence, the dynamics of the network are strongly related to the PCA. In this case, however,
the feature vectors are not merely projected into an orthogonal subspace; rather, a discrete subset
of centroids is computed, representing the features. This topological structure is further represented
in terms of R,G,B colors. Therefore, in our approach, the usage of the PCA for huge data
sets is limited to a rough scaling of the dimensionality and any further fine-tuning is accomplished
by the Kohonen map.
Figure 15 demonstrates the impact of clustering and visualizing the features from Sylvia's image.
In order to realize the geometric relationships, the output of the Kohonen map is rendered as a
texture onto the facial shape. The parameters were selected as 8x8x8 neurons and M=4 scales.
In fig. 15a the initial 26-dimensional feature vector g_l was fed immediately
into the network, which furnishes the R,G,B values at each surface point. The network clearly
distinguishes shape features, such as local curvature or textural consistency between the left and
right hemisphere of the face. The similarity of colors clearly appears in spatially coherent regions,
except in regions whose frequency components are spread throughout the spectrum. Regions of high, low
and strong directional curvature are separated, such as around the mouth, nose and eyes. Fig. 15b
contrasts these results with those obtained using a first rough breakdown of the dimensions from 26 to 7 by
means of the PCA. On the one hand, the regions extracted appear even more homogeneous,
since some of the fine-grain information was cut off by the PCA. On the other hand, however,
the topology preservation of the Kohonen map is not revealed as evidently as in the previous picture.
This is because the PCA encodes the variance of the features, which does not have to correspond
to coherent facial regions.
Fig. 15. Cluster analysis and visualization of features derived from Syl-
via's image:
a) Immediate subspace mapping using the Kohonen map only.
b) Cascaded mapping with PCA and Kohonen map.
In order to stress the influence of the texture information and the importance of the PCA, fig. 16
depicts results derived from a segmentation of the texture data of Sylvia's image without including
range. Figure 16a shows the b/w texture image and fig. 16b its segmentation using PCA preprocessing.
Due to local distortions in the texture, the image looks noisier than those presented
in fig. 15. The smoothing properties of the range data cannot be exploited in this case. These effects
are even more striking in fig. 16c, where the features are segmented without PCA preprocessing.
One important aspect is related to the invariance against affine transforms. In principle, rotation
and scaling can be interpreted as a shifting of the respective coefficients in our feature vector. This
is because of the properties in Fourier space, where rotation is transformed into a rotated spectrum
and scaling results in a frequency modulation. Hence, the respective basis functions in the multi-resolution
pyramid respond in different frequency channels. Due to the finite dimensionality of
the feature vector, some coefficients are pushed out of the vector, whereas others will enter at the
lower and higher frequency bounds. Because of the Euclidean metric employed for segmentation,
we have to keep the most important coefficients in our vector in order to preserve the topology
and the distances in feature space. For this reason, a PCA based preprocessing helps to extract
those coefficients and to keep the segmentation tolerant against rotation and scaling. Translation
invariance of the segmentation is achieved by the localized features. An illustration of this important
subject is given in fig. 16d - 16f, where a scaled and rotated version of the texture data is
presented and the segmentations are generalized from fig. 16b,c. Evidently, the color similarity
and thus the preservation of similar features is performed much better when using the PCA. For
instance, the high frequency regions around the eyes, nostrils and mouth appear in bluish colors
in contrast to cheeks and forehead, that come out in orange, red and green.
Fig. 16. Influence of range data and affine transforms on the segmentation
a) Texture data of Sylvia's image.
b) Segmentation with PCA.
c) Segmentation without PCA.
d) Scaled and rotated version of the image.
e) Generalization with PCA.
f) Generalization without PCA.
Besides unsupervised clustering, classification is an essential part of feature-based visualization.
In this case, the method has to match previously trained patterns. For supervised classification,
however, each neuron - and also each cluster centroid - has to be assigned to a certain class,
depending on the definition of the user. This can be done by interactive selection of training areas
and by a majority voting of each neuron stimulated by the training set (see fig. 14b). After this,
each neuron has an associated class and the network is able to classify. However, during the organization
process the goal was to find a limited set of centroids representing the data in a c-means
sense rather than to find optimal placements of the decision boundaries in a minimum error
(Bayes) sense. For this reason, the network can be once again trained with a supervised postprocessing
in order to move corresponding centroids towards the Bayes decision boundary and to
improve the classification result.
This postprocessing is well known as learning vector quantization (LVQ 1, 2 and the self-stabilizing
type 3) and can be described as follows:
For a given input pattern, let m_h and m_j be the two closest centroids. The centroids are modified
according to the LVQ update rules only if the pattern falls into a window, defined as a symmetric
area around the midplane of m_h and m_j; here d_h and d_j are the two distances of the pattern
to m_h and m_j. The threshold is calculated according to eq. 40, and the relative window size is
chosen to be about 20%.
A detailed study of LVQ and of related methods can be found in [38].
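For illustration, a single LVQ2.1-style update consistent with the window rule above might look as follows; the self-stabilizing LVQ3 variant additionally updates both centroids when both carry the correct class, which this sketch omits.

```python
import numpy as np

def lvq2_step(m, labels, x, x_class, lr=0.05, window=0.2):
    """One LVQ2.1-style step: find the two closest centroids m_h, m_j; if exactly one
    carries the correct class and x lies inside the symmetric window around their
    midplane, move the correct centroid towards x and the wrong one away from it."""
    d = np.linalg.norm(m - x, axis=1)
    h, j = np.argsort(d)[:2]                      # two nearest centroids
    s = (1.0 - window) / (1.0 + window)           # window threshold for ~20% width
    if d[h] > 0 and d[j] > 0 and min(d[h] / d[j], d[j] / d[h]) > s:
        if (labels[h] == x_class) != (labels[j] == x_class):
            win, lose = (h, j) if labels[h] == x_class else (j, h)
            m[win] += lr * (x - m[win])           # pull correct centroid closer
            m[lose] -= lr * (x - m[lose])         # push wrong centroid away
    return m
```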
5 Applications
5.1 Face Data Base
The following investigation reports how the previously introduced method performs
and generalizes on multiple range image data sets. For this purpose, a data base of 10 entities was
built up, as shown in fig. 17. It consists of both male and female range images that were clipped
to the depicted masks. Both range and color information is presented. The left 7 images were
employed to train the method, whereas the right 3 images were used to investigate the behavior
of the generalization. The Gabor decomposition was applied separately to each of the 7 training
samples, and the Kohonen map was trained by randomly selecting 1 million feature samples from
the resulting pyramids. We used a constant map size of 8x8x8 neurons, i.e. clusters. The parameters
for the decomposition were selected accordingly, and an additional overlap of 2 was added to tune
the localization of the wavelets.
Fig. 17. Initial face data base a) with and b) without texture information.
Upper rows and left: Range images used for training,
lower rows right: Range images used for generalization.
5.2 Results
In order to emphasize the influence of the PCA, figure 18a illustrates the results obtained by the
method without PCA. In this case, the features were mapped immediately from 34 dimensions
to R,G,B using the Kohonen map. Obviously, the wavelets extract similar facial features
that appear as similar colors on each face, such as the tip of the nose in red-orange colors, or the
cheeks that come up either in blue-green or violet. The mouth, mostly revealed in red-pink, and
moustache boundaries are extracted, as are eyebrows (light orange). It is interesting to verify
that the left and right hemispheres are clearly distinguished and, because of the different colors
associated with them, belong to different clusters. Some of the results reveal a more or less symmetric or
antisymmetric color map, depending on the person analyzed. At this point, our method can be
contrasted with curvature analysis or slicing methods. In particular, when the results on the training
data are compared with those obtained on the generalization data set (lower row), this technique
evidently produces robust results, since color similarities referring to anatomic regions
are rather well preserved. In general, due to the influence of the high-frequency components,
within a single face there is hardly any large coherently clustered region. The similarity of colors
in neighboring clusters is, however, preserved in regions of low spatial frequencies.
Figure 18b depicts the same investigation, but in this case with a PCA downmapping of the dimensions
from an initial 34 to 7, preserving 99% of the signal energy encoded in the eigenvectors.
The topology preservation of the Kohonen map is not as obvious as in the previous figure,
since the PCA accounts for the maximum variance in the features. One of the advantages of the
PCA is that the regions clustered by the neural network appear in most cases more homogeneous
and coherent, because some fine-grain detail information gets lost with the downscaling procedure.
In both computations, the most discontinuous regions appear around the eyes. This is due
to the discontinuities in range, which spread the local spatial frequencies throughout the spectrum.
The pupils of most of the test persons, however, are detected in similar colors.
Fig. 18. Face features as similarities of colors mapped as textures to the
range data.
a) without PCA and
b) including PCA.
Other interesting investigations can be accomplished by trying to retrieve similar regions and
landmarks in the face data base. For this purpose the Kohonen map was trained with a supervised
LVQ postprocessing on the results of fig. 18b using 100000 additional cycles. During this proce-
dure, only features within the training areas demarcated onto the face in fig. 19a were fed randomly
into the network. Four different feature types were selected, one each for tip of the nose
(blue), nostrils (yellow), pupils (green) and corner of the mouth (red). Figure 19b depicts the re-
sults, where the distribution of the trained features throughout each face is presented. It can be
stated that all regions are properly retrieved, such as the tip of the nose or the pupils. Considering
the especially small training areas, features similar to the corners of the mouth are detected mostly
around the eyes, eyebrows, and moustache, where discontinuities along with high local curvature
in range can also be located. Furthermore, green regions appear systematically around the
throat of each person. This is due to an artifact arising from the linear interpolation of the scanned
data, which usually has gaps in this region. Although both nostrils and pupils are dark regions
in the texture map, they are clearly separated in all images. The diagram in fig. 20 compares again
the results within the training areas of fig. 19. Besides the two peaks within the nostrils the overall
error is relatively small.
Fig. 19. Detection of similar features extracted from a supervised classification
postprocess.
a) Training areas demarcated on faces.
b) Detection of the similar regions.
Fig. 20. Error rate for matching the significant regions within the training
areas.
Conversely, if a robust detection of previously learned training areas is required for matching or
recognition, a procedure can be set up as indicated in fig. 21. In this simulation, the method was
trained exclusively on Sylvia's image and the DC components were neglected in the feature vec-
tor. Hence, the segmentation looks noisy, no color similarities can be recognized, and the result
is poorly suited for visual inspection. The LVQ applied subsequently was trained to distinguish
left and right eye (green and blue), tip of the nose (red), and left and right corner of the mouth
(pink and yellow). Fig. 21c shows the performance of the detection. The landmarks are retrieved
almost perfectly, without further errors, because of the highly localized features.
In order to illustrate the importance of the combined range and texture information for the performance
of the method, the results are contrasted with those achieved by using range or texture data only,
as depicted in fig. 21d and fig. 21e. Although both computations are performed with the same
parameters, the retrieval of the landmarks is clearly worse than in fig. 21c.
Note again that the texture data is recorded under normalized conditions and hence represents
facial surface properties and is not affected by specific illumination.
Fig. 21. Detection of facial feature points in Sylvia's face:
a) RGB representation of the segmentation.
b) Training areas.
c) Performance of detection of the landmarks.
d) Performance of detection of the landmarks using range data
only.
e) Performance of detection of the landmarks using texture data
only.
All computations in this paper were performed on an SGI Indigo 2 workstation with an R4400
processor and 64 MB main memory. The file-oriented implementation of the method requires
generating the appropriate filters, which took about 205 CPU seconds. Generation and normalization
of the feature vectors took 61 CPU seconds, and the PCA took 48 s. The self-organization
of the Kohonen map was computed in 157 s with 200000 cycles and 7 coefficients,
whereas the same procedure needed 472 s for the full feature vector without PCA. Hence the PCA
increases the computational performance of the method. The segmentation and classification were
done in 42 s and 83 s, respectively. Note that these figures are valid for one image.
6 Conclusion
We introduced a generic and robust method for feature-based visualization of multidimensional
data sets by decomposition with Gabor wavelets. Results have been presented that show the capabilities
of the method even for very complex data sets. Although it was primarily applied to
encode and analyze laser range data from human faces, it is not restricted to this application and can be
extended to any kind of complex data, such as 3D or 4D velocity fields. However, one of the main aspects
when encoding data in the way we propose is that there is no immediate way to perfectly reconstruct
them in the general case. Furthermore, the current scheme does not accomplish any subsampling
of the pyramid and requires extensive memory. This is a particularly serious issue
in multidimensional applications. Wavelet frame theory, however, as shown for instance in [3],
defines bounds for the design of dual function spaces for a perfect reconstruction of Gabor
frames. Since Gabor wavelets are powerful feature extractors, further research has to be conducted
to find perfect reconstruction frames within these strict limitations and to achieve more
compact coding schemes. Other alternatives have to be considered, such as nonseparable extensions
of biorthonormal wavelets preserving directional selectivity.
Another important aspect is how the method performs under noise. Due to its spectral-estimation-like
nature, we expect much more robust results than those given, for instance, by curvature
coefficients according to their frequency localization when using them for coding. For data anal-
ysis, however, the weights have to be selected much more carefully and strongly depend on the
signal characteristics. This leads immediately to higher-order spectral estimation methods,
which should be in the scope of future activities as well.
Acknowledgement
The authors wish to thank the Computer Graphics Center in Darmstadt, which provided the raw laser
range data set. Thanks also to S. Spanier-Mason for proofreading the English
manuscript. The authors are also grateful to the referees for providing very useful criticism.
--R
Academic Press
"A reference model for the visualization of multi-dimensional data,"
of the EUROGRAPHICS
"Invariant surface characteristics for 3D object recognition in range images,"
Computer Vision
"A survey of curve and surface methods in CAGD,"
Geometric Design
"Multichannel texture analysis using localized spatial filters,"
shington: IEEE
in medical imaging
"Texture analysis and classification with tree-structured wavelet transform,"
"The wavelet transform, time-frequency localization and signal analysis,"
"High confidence visual recognition of persons by a test of statistical independence,"
IEEE Trans.
"Two-dimensional spectral analysis of cortical receptive field profiles,"
"Scattered data interpolation and applications: A tutorial and survey,"
Hagen and D.
"Theory of communication,"
"Subspace methods for the visualization of multidimensional data sets"
shop
"Integrated volume rendering and data analysis in wavelet space,"
"Multiscale image texture analysis in wavelet space,"
First IEEE Conf.
of human faces - A case study for man-machine-communication
Singapore: World Scientific
"Visualization of multidimensional data sets using a neural network,"
"Visualization of large data sets"
"Reflection from layered surfaces due to subsurface scattering"
Computer Graphics
"Machine identification of human faces,"
"Wavelet-based volume morphing,"
"Visualization of vector and tensor data sets,"
Visualization, in Rosenblum
"An overview of wavelet based multiresolution analyses,"
Department of Mathematics
"The self-organizing map,"
"Texture classification by wavelet packet signatures,"
"A theory for multiresolution signal decomposition: The wavelet representation,"
Computer Graphics (special issue)
"Volumetric shape description of range data using 'blobby model',"
"Multiscale 3D edge representation of volume data by a DOGwavelet,"
on Volume
"Learning object models from appearance,"
"Scattered data modelling,"
"Visualization in scientific and engineering computation,"
"Equilibrium and interpolation solutions using wavelet bases,"
Scientific Visualization.
"Wavelet Radiosity"
"Recordering and visualizing complex shape from range data"
Tokyo: Springer
"Frequency domain volume rendering,"
"Multiresolution feature extraction and selection for texture segmentation,"
and
"A family of polynomial spline wavelet transforms,"
"Facial surface scanner,"
"Wavelets and filter banks: Theory and design,"
"A multiresolution framework for volume rendering,"
New York: John Wiley
"Visualizing structure in high-dimensional multivariate data,"
of Research and Development
--TR
--CTR
M. H. Gross , T. C. Sprenger , J. Finger, Visualizing Informationon a Sphere, Proceedings of the 1997 IEEE Symposium on Information Visualization (InfoVis '97), p.11, October 18-25, 1997
Ming Xi Tang, Visualization and Genetic Algorithms in Minimax Theory for Nonlinear Functionals, Journal of Scientific Computing, v.18 n.1, p.49-68, February
Rolf M. Koch , Markus H. Gross , Friedrich R. Carls , Daniel F. von Bren , George Fankhauser , Yoav I. H. Parish, Simulating facial surgery using finite element models, Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, p.421-428, August 1996
P. Cignoni , C. Montani , R. Scopigno , C. Rocchini, A general method for preserving attribute values on simplified meshes, Proceedings of the conference on Visualization '98, p.59-66, October 18-23, 1998, Research Triangle Park, North Carolina, United States
Markus H. Gross , Roger Gatti , Oliver Staadt, Fast Multiresolution Surface Meshing, Proceedings of the 6th conference on Visualization '95, p.135, October 29-November 03, 1995
Markus H. Gross , Oliver G. Staadt , Roger Gatti, Efficient Triangular Surface Approximations Using Wavelets and Quadtree Data Structures, IEEE Transactions on Visualization and Computer Graphics, v.2 n.2, p.130-143, June 1996
Sylvain Fischer , Filip roubek , Laurent Perrinet , Rafael Redondo , Gabriel Cristbal, Self-Invertible 2D Log-Gabor Wavelets, International Journal of Computer Vision, v.75 n.2, p.231-246, November 2007 | gabor wavelets;wavelet transform;shape;feature extraction;subspace mapping;principal components;and texture analysis;multidimensional visualization;feature-based visualization |
614303 | A Predictor-Corrector Technique for Visualizing Unsteady Flow. | We present a method for visualizing unsteady flow by displaying its vortices. The vortices are identified by using a vorticity-predictor pressure-corrector scheme that follows vortex cores. The cross-sections of a vortex at each point along the core can be represented by a Fourier series. A vortex can be faithfully reconstructed from the series as a simple quadrilateral mesh, or its reconstruction can be enhanced to indicate helical motion. The mesh can reduce the representation of the flow features by a factor of one thousand or more compared with the volumetric dataset. With this amount of reduction it is possible to implement an interactive system on a graphics workstation to permit a viewer to examine, in three dimensions, the evolution of the vortical structures in a complex, unsteady flow. | Introduction
In order to study the complex behavior of an unsteady (i.e., time-varying) fluid flow, one
might imagine being immersed within the flow but not disturbing it. One could then roam
about the flow field, free to observe its development or to measure quantities of interest. This
scenario is impossible in real life, of course. The physical presence of a human observer
would change the very flow under inspection. However, a direct numerical simulation (DNS)
of the flow produces all the relevant flow quantities that an appropriate visualization system
would need in order to let a viewer navigate through the flow. In order to develop such an
interactive system, one must (1) locate the salient structures within the three-dimensional flow
data, (2) represent the structures geometrically, and (3) display them to the viewer, preferably
at interactive frame rates of 20 updates per second or more.
What, exactly, are the important structures within an unsteady flow? Vortices are typically
considered the most important structures in flow fields. Consider the effects of vortices over a
range of spatial scales: large-scale vortices are responsible for hurricanes and tornadoes;
medium-scale vortices affect the handling characteristics of an airplane; small-scale vortices
are the fundamental building blocks of the structure of turbulent flow. Vortices control the
dynamics of the flow in the sense that if they are removed the flow becomes quiescent. As an
example, hairpin vortices are considered to be "a major sustaining flow structure involved in
the perpetuation of turbulent boundary layers" [1]. Leonard [2] emphasizes that
...it is mathematically correct and often very convenient to consider inviscid fluid
dynamics in terms of parcels of vorticity which induce motion on each other as an
alternative to pressure-velocity considerations.
One would like, therefore, to visualize a flow by locating and displaying its vortices. This
paper describes how a predictor-corrector technique can locate vortex structures in three-dimensional
flow data [3] with enough data-reduction to store and animate them on a workstation.
The predictor-corrector technique is effective at locating vortices even in turbulent flow
data. Simulating an unsteady flow may require hundreds or even thousands of time steps, each
containing many megabytes of data. The vortices themselves may occupy significant subvolumes
of the original volumetric data. A typical scientific workstation does not have adequate
memory to store more than a few frames of the original data; data reduction is absolutely
essential for the interactive display of time-varying vortices. The predictor-corrector scheme
provides a terse, one-dimensional representation of vortex tubes, which offers significant
reduction of the flow data. This benefit suggests the design of an interactive visualization system
that can re-play the development of a computed flow while allowing a viewer to explore
the vortex shapes with a graphics workstation.
The paper is organized as follows. Section 2 presents a survey of other techniques that
attempt to identify vortices. Section 3 presents our predictor-corrector scheme and discusses
some of the programming considerations that are necessary to make the scheme efficient. Section
4 describes how we calculate the cross-sections of the vortex tube and how we represent
them in a compressed fashion using Fourier analysis. In section 5 we show how the vortex
skeletons, together with an efficient representation of the cross-sections, offer substantial data-reduction
in representing features of a flow. We describe the process of reconstructing the vortex
tubes from the compressed format and report on the successful development of an interactive
graphical system based on these techniques.
2 Survey of Identification Schemes
The term "vortex" connotes a similar concept in the minds of most fluid dynamicists: a helical
pattern of flow in a localized region. There are mathematical definitions for "vorticity" and
"helicity," but vortical flow is not completely characterized by them. For example, a shear
flow exhibits vorticity at every point even though there is no vortical motion. A precise definition
for a vortex is difficult to obtain - a fact supported by the variety of efforts outlined
below.
2.1
It is surprisingly difficult to establish a definition of a vortex that is robust enough to locate all
the coherent structures that a flow physicist would consider to be vortices. Robinson [4] suggests
the following working definition for a vortex.
A vortex exists when instantaneous streamlines mapped onto a plane normal to the
vortex core exhibit a roughly circular or spiral pattern, when viewed from a reference
frame moving with the center of the vortex core.
Robinson [5] and Robinson, Kline, and Spalart [6] use the above definition to confirm that
a particular structure is, in fact, a vortex. Unfortunately, this definition requires a knowledge
of the vortex core before one can determine whether something is a vortex. The definition,
therefore, does not lend itself to a convenient algorithm for detecting vortices.
2.2 Isovalue of a Scalar Field
Is there a scalar value that can be easily derived from flow quantities such that a single isovalue
yields surfaces surrounding the vortical structures? One might imagine a scalar field that
attains a non-negative value in the interior of the vortices but attains negative values elsewhere. The
zero-valued isosurfaces would define the boundaries of the vortices. Several attempts have been made
to locate vortices as isosurfaces of scalar quantities.
Low Pressure
Robinson and his colleagues find that elongated low-pressure regions in incompressible turbulent
flows almost always indicate vortex cores. Isosurfaces of low pressure are usually effective
at capturing the shape of an individual vortex (fig. 1a), especially if the flow field contains
no solid bodies. Pressure surfaces become indistinct where vortices merge, however, and a
high-quality image can easily require thousands of triangles to create the surface. The need to
compress the representation becomes acute when visualizing time-varying data.
Eigenvalues of the Velocity Gradient
Chong, Perry, and Cantwell [7] define a vortex core as a region where the velocity-gradient
tensor has complex eigenvalues. In such a region, the rotation tensor dominates over the rate-
of-strain tensor. Soria and Cantwell [8] use this approach to study vortical structures in free-
shear flows. At points of large vorticity, the eigenvalues of the velocity-gradient matrix are
determined: a complex eigenvalue suggests the presence of a vortex. This method correctly
identifies the large vortical structures in the flow. However, the method also captures many
smaller structures without providing a way to link the smaller vortical volumes with the larger
coherent vortices of which they might be a part (fig. 1b).
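As an illustration of this criterion, the sketch below flags grid points whose velocity-gradient tensor has complex eigenvalues; central differences on a uniform grid and an (x, y, z) ordering of the array axes are assumptions about the discretization, not properties of the data used in [8].

```python
import numpy as np

def complex_eigenvalue_mask(u, v, w, dx=1.0, dy=1.0, dz=1.0):
    """True where the velocity-gradient tensor has a complex eigenvalue pair,
    i.e. where rotation dominates the rate of strain (Chong, Perry, Cantwell)."""
    rows = [np.stack(np.gradient(c, dx, dy, dz), axis=-1) for c in (u, v, w)]
    J = np.stack(rows, axis=-2)                  # J[..., i, j] = d u_i / d x_j
    eig = np.linalg.eigvals(J)                   # three eigenvalues per grid point
    return np.any(np.abs(eig.imag) > 1e-12, axis=-1)
```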
2.3 Geometry of the Vortex Core
Instead of defining the bounding surface of a vortex, some researchers have sought ways to
locate the one-dimensional core through the vortex center. Various schemes for determining
the geometry of a vortex core are described below.
Vorticity Lines
Vorticity is a vector quantity proportional to the angular velocity of a fluid particle. It is
defined as ω = ∇ × u, where u is the velocity at a given point. Vorticity lines are integral curves of vorticity.
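On a uniform grid, the vorticity field can be estimated with second-order central differences, for example as sketched below (the (x, y, z) ordering of the array axes and the uniform spacing are assumptions):

```python
import numpy as np

def vorticity(u, v, w, dx=1.0, dy=1.0, dz=1.0):
    """omega = curl(u) from the three velocity components on a uniform grid."""
    _,     du_dy, du_dz = np.gradient(u, dx, dy, dz)
    dv_dx, _,     dv_dz = np.gradient(v, dx, dy, dz)
    dw_dx, dw_dy, _     = np.gradient(w, dx, dy, dz)
    return (dw_dy - dv_dz,              # omega_x
            du_dz - dw_dx,              # omega_y
            dv_dx - du_dy)              # omega_z
```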
Moin and Kim [9] [10] use vorticity lines to visualize vortical structures in turbulent channel
flow. The resulting curves are extremely sensitive to the choice of the initial location x_0 for the
integration. As Moin and Kim point out [9],
If we choose x_0 arbitrarily, the resulting vortex line is likely to wander over the whole
flow field like a badly tangled fishing line, and it would be very difficult to identify the
organized structures (if any) through which the line may have passed.
Fig. 6 illustrates the potential for vortex lines to create a tangle [10]. To avoid such a confusing
jumble, they carefully select the initial points. However, Robinson [5] shows that even
experienced researchers can be surprisingly misled by ordinary vorticity lines. The problem
with vorticity lines in a shear flow is not just that numerical techniques of integration propagate error.
Even an errorless analytic integration fails to follow a vortex core that is not aligned in the direction of
mean shear. Jiminez points out that a vortex tube "does not have vorticity perfectly aligned
along its axis [core], nor does a given vortex line necessarily remain within it over its entire
length" [11]. In order for an integral curve through a vector field to coincide with the core, the vector
field must be aligned with the core.
Vorticity and Enstrophy
Jiminez et al. propose a scheme for tracing vortex cores that shares the spirit of our technique
[11]. They consider points of maximum enstrophy (squared magnitude of vorticity) to lie
along vortex cores. Given such a point, they integrate along the core using a two-step process.
The first step is to follow the vorticity to the next grid plane. Then, within that plane, they
inspect the nearest four grid points and select the one with the largest enstrophy. The method
marches from grid point to grid point within the volume. They applied the technique to locate
vortices within isotropic turbulence. Near the wall of a shear flow, there is a large magnitude
of vorticity even when no vortices are present. Thus the technique is not well-suited to the
task of identifying vortices in a shear flow. Instead of consulting the enstrophy, our technique
uses pressure gradients for the corrector phase. In addition, we use higher-order interpolation
in order to resolve the vortex core between grid points.
Curvature and Helicity
Yates and Chapman [12] carefully explore two definitions of vortex cores. Unfortunately, the
analyses and conclusions for both definitions are appropriate only for steady flows. By one
definition, the vortex core is the line defined by the local maxima of normalized helicity (the
dot product of the normalized velocity and vorticity). Fig. 1c shows an isosurface of constant
helicity. Notice that the surface fails to capture the "head" on the upper-right side of the hairpin
vortex. This shows that the local maxima fail to follow the core. In the other definition, a
vortex core is an integral curve that has minimum curvature. If there is a critical point on a
vortex core, then that point must be a spiral-saddle. The eigenvector belonging to the only real
eigenvalue of the spiral-saddle corresponds, locally, to an integral curve entering or leaving
the critical point. By integrating this curve, the entire vortex core may be visualized [13]. For
our particular flow data, however, we find that these curves (as calculated by FAST, the Flow Analysis
Software Toolkit [14]) can miss the vortex completely. It may be that the critical points are not sufficiently
resolved in the flow data for this technique to capture the cores; in that case the amount of data
must be more finely sampled in order to locate vortex cores with this technique, at the expense of increasing the data storage and slowing the numerical simulation. Since the technique is derived for steady flows, it may be that even with finer sampling the cores would not be detected.
Figure 1. Different schemes used to identify a vortex. Each image visualizes the flow at the same time step. From top: (a) isosurface of constant pressure; (b) isosurfaces of complex-valued eigenvalues of the velocity-gradient matrix; (c) isosurface of constant helicity (dark line indicates missing vortex head); (d) isosurfaces of constant vorticity; (e) our predictor-corrector technique with Fourier cross-sections.
User-guided Search
Bernard, Thomas, and Handler [15] use a semi-automated procedure to identify quasi-stream-
wise vortices. Their method finds local centers of rotation in user-specified regions in planes
perpendicular to the streamwise direction of a turbulent channel flow. Experienced users can
correctly find the critical vortices responsible for the maintenance of the Reynolds stress.
Their method captures the vortices that are aligned with the streamwise direction, but in free-
shear layers and transitional boundary layers, the significant spanwise vortices go undetected.
Because it depends heavily on user intervention, the process is tedious and is dependent upon
the individual skill of the user.
2.4 Vortex Shape Detection
Vortices exhibit the characteristic shape of elongated tubes. Below we describe two identification
schemes that exploit this shape-knowledge to locate vortices.
Cylinder With Maximum Vorticity
Villasenor and Vincent [16] present an algorithm for locating elongated vortices in three-dimensional
time-dependent flow fields. They start from a seed point and compute the average
length of all vorticity vectors contained in a small-radius cylinder. They repeat this step for a
large number of cylinders that emanate from the seed point. The cylinder with the maximum
average becomes a segment of the vortex tube. They use only the magnitudes (not the direc-
tions) of vorticity; as a consequence the algorithm can inadvertently capture structures that are
not vortices.
Vorticity and Vortex Stretching
The authors of [17] use vorticity |w| and vortex stretching |(w · ∇)u| / |w| in an effort to understand
the dynamics of a vortex reconnection process. They fit ellipsoids to the regions of high
vorticity. Vector field lines of vorticity and of vortex stretching emanate from the ellipsoids. In
flows with solid boundaries or a mean straining field, the regions with large vorticity magnitudes
do not necessarily correspond to vortices (fig. 1d); hence, the ellipsoids do not always
provide useful information.
2.5 Summary of Survey
Some of the above techniques share a simple property: they aim to capture vortices by consulting a
scalar field derived from certain flow quantities. Without having a canonical scalar definition of a vor-
tex, one should only treat these techniques as heuristics. The experienced flow physicist is apt to identify
vortices in a flow field based on his own knowledge of the flow characteristics, even if this
judgment is at odds with one of the above definitions.
Notice that the pressure surface in fig. 1a is smoother than the isosurfaces in figs. 1b and 1d. The
latter surfaces are based on derivatives of local flow quantities and are therefore subject to numerical
error due to differentiation. In contrast, pressure is obtained by integration which filters out noise. It
may be difficult, in general, to develop a robust technique for locating vortices if one appeals to quantities
derived through repeated differentiation.
The isosurfaces that define the boundaries of the vortices are unstructured sets of polygons. If one
wishes to archive the vortex geometry over the course of hundreds or thousands of time steps, the iso-surfaces
can require large quantities (hundreds of gigabytes) of storage. While techniques exist for
decimating isosurfaces, such decimation is not a trivial task. By contrast, the vortex cores can be represented
economically by one-dimensional curves or polylines. For vortices in the shape of elongated
tubes, skeleton curves together with a radius function provide a natural and efficient representation.
The methods in the survey all experience success in finding vortices under certain flow con-
ditions. But all of them have problems capturing vortices in unsteady shear flow and/or representing
them in the most economical way. We were led, therefore, to develop another
technique which could tolerate the complexity of a transitional flow (from laminar to turbu-
lent) and would offer substantial data reduction. For comparison, fig. 1e shows the results of
applying our predictor-corrector method with Fourier cross-sections.
3 The Predictor-corrector Method
We now present the heart of our vortex identification scheme: the velocity-predictor, pressure-correc-
tor method. The method was designed to capture elongated vortices (shaped like spaghetti)
rather than broad vortex sheets (shaped like lasagna). The method, like the techniques in the sur-
vey, relies on heuristics: if a point is in a vortex, then the point is expected to possess certain proper-
ties. Possessing those properties does not guarantee that a point is in a vortex, however. The method is
designed to locate the core of the vortex, rather than the surface bounding the entire vortex. The
method uses vector quantities for both the predictor and the corrector steps and uses scalar values as
thresholds.
The predictor-corrector method produces an ordered set of points (the skeleton) that
approximates a vortex core. Associated with each point are quantities that describe the local
characteristics of the vortex. These quantities may include the vorticity, the pressure, the
shape of the cross-section, or other quantities of interest. The method produces lines that are
similar to vorticity lines, but with an important difference. Whereas vorticity is a mathematical
function of the instantaneous velocity field, a vortex is a physical structure with coherence
over a region of space. In contrast to vorticity lines, which may wander away from the vortex
cores, our method is self-correcting: line trajectories that diverge from the vortex core reconverge
to the center.
In this section we discuss the procedure used to find an initial seed point on the vortex
skeleton. We then explain the predictor-corrector method used for growing the vortex skeleton
from the seed point. Finally, we address how to terminate the vortex skeleton.
3.1 Finding a Seed Point
Vorticity lines begin and end only at domain boundaries, but actual vortices have no such
restriction. Therefore we must examine the entire flow volume in order to find seed points
from which to grow the vortex skeletons. We consider low pressure and a large magnitude of
vorticity to indicate that a vortex is present. Low pressure in a vortex core provides a pressure
gradient that offsets the centripetal acceleration of a particle rotating about the core. Large
vorticity indicates that such rotation is probably present. These are heuristic arguments: vortical
motion is presumed to be sustained by pressure gradients and to be indicated by vorticity. It is certainly
possible to have low pressure (downstream of an obstacle, for example) or large vorticity (in a shear
flow, for example) without a vortex present. Even so, the combination of the two is a powerful indicator
of a vortex.
In our implementation, the flow field (a three-dimensional rectilinear grid) is scanned
along planes perpendicular to the streamwise direction. The scanning direction affects the
order in which vortices are located, but not the overall features of the vortices. In each plane,
the values of the pressure and the vorticity magnitude are checked against threshold values of
these two quantities. A seed point is a grid point that satisfies the two threshold values. Since
new vortex tubes can emerge at any time, we re-scan the 3D grid anew to locate seed points at
each time step. In a more steady flow, one could advect seed points from one time step as initial
guesses at the next time step. Threshold values can be chosen a priori, or they can be a
predetermined fraction of the extrema. The thresholds of pressure and vorticity-magnitude can be
fairly strict. It is not necessary to include every point of the vortex core in the set of candidate seeds; it
suffices to capture a single one. Even so, if the threshold of pressure is too low some structures
will be missed entirely. We selected thresholds of pressure and vorticity that capture the essential
structures in the flow field.
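As a concrete illustration of this scan, the sketch below assumes the pressure and the vorticity magnitude are held in 3D arrays indexed along the streamwise, wall-normal, and spanwise directions; the function and variable names (find_seed_points, p_thresh, w_thresh) are ours, not the original implementation, and the threshold values are supplied by the caller.

    import numpy as np

    def find_seed_points(pressure, vort_mag, p_thresh, w_thresh):
        """Scan streamwise planes for grid points satisfying both threshold tests.

        pressure, vort_mag : arrays of shape (nx, ny, nz), index i streamwise.
        Returns integer grid indices (i, j, k), ordered plane by plane.
        """
        seeds = []
        for i in range(pressure.shape[0]):            # one perpendicular plane at a time
            mask = (pressure[i] < p_thresh) & (vort_mag[i] > w_thresh)
            for j, k in zip(*np.nonzero(mask)):
                seeds.append((i, int(j), int(k)))
        return seeds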
We next refine the position of the seed point so that it is not constrained to lie on the grid.
The seed point moves in the plane perpendicular to the vorticity vector until it reaches the
location of the local pressure minimum. From this seed point we develop the vortex skeleton
in two parts, forward and backward, to reach the endpoints of the vortex tube.
3.2 Growing the Skeleton
The predictor-corrector algorithm is illustrated in the schematic diagrams of fig. 2. The details
for continuing the calculation from one point to the next are indicated by the captions. Steps
1-2 represent the predictor stage of the algorithm. The corrector stage is summarized by steps
3-4.
Once a seed point has been selected, the skeleton of the vortex core can be grown from the seed.
The next position of the vortex skeleton is predicted by integrating along the vorticity vector (fig. 2,
top) which is equivalent to Euler integration of a vorticity line. The predicted point typically misses the
vortex core.
Next we invoke the heuristic that centripetal acceleration within a vortex is supported by low pressure
at the core. In a plane perpendicular to the core, the pressure minimum is expected to coincide
with the point where the core pierces the plane. The predicted point must be corrected to the pressure
minimum in the plane that (1) is perpendicular to the core and (2) contains the predicted point. The
location of the nearest core point is the unknown quantity, so condition (1) can only be satisfied
approximately. We approximate the desired plane by choosing the plane perpendicular to the vorticity
vector (fig. 2, bottom).
Figure 2. Four steps of the predictor-corrector algorithm: (1) compute the vorticity at a point on the vortex core; (2) step in the vorticity direction to predict the next point; (3) compute the vorticity at the predicted point; (4) correct to the pressure minimum in the perpendicular plane.
Individually, integral curves of vorticity or of the pressure gradient are each unreliable at
capturing vortex cores. Section 2.2 points out the problems with vorticity lines. The pressure
gradient does not follow the core either; moreover, a vortex may have several distinct pressure
minima in its interior, which would require piecewise integration of the gradient in order to
connect the components of the core. Remarkably, the combination of the vorticity and the
pressure gradient provides a robust method of following the vortex core. The continuous modification
of the skeleton point lessens the sensitivity to both the initial conditions and the integration
details.
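The combination can be written compactly as below. This is only a sketch under our own naming conventions: vorticity_at and pressure_grad_at stand for interpolation routines supplied by the caller, h is a step length, and the fixed number of correction iterations is illustrative rather than taken from the original code.

    import numpy as np

    def predictor_corrector_step(x, vorticity_at, pressure_grad_at, h, n_corr=5):
        """Advance one skeleton point x (3-vector) along the vortex core."""
        w = vorticity_at(x)
        x_new = x + h * w / np.linalg.norm(w)              # predictor: follow the vorticity
        for _ in range(n_corr):                            # corrector: descend toward the
            w_hat = vorticity_at(x_new)                    # pressure minimum in the plane
            w_hat = w_hat / np.linalg.norm(w_hat)          # perpendicular to the vorticity
            g = pressure_grad_at(x_new)
            g_perp = g - np.dot(g, w_hat) * w_hat          # in-plane component of grad p
            if np.linalg.norm(g_perp) < 1e-12:
                break
            x_new = x_new - h * g_perp / np.linalg.norm(g_perp)
        return x_new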
The effectiveness of the predictor-corrector scheme is illustrated in fig. 3, in which data
from the direct numerical simulations of Singer and Joslin [18] are analyzed. The transparent
vortex tube (a portion of a hairpin vortex) is constructed with data from the full predictor-corrector
method. Its core is indicated by the darker skeleton. The lighter skeleton follows the
uncorrected integral curve of the vorticity. It is obtained by disabling the corrector phase of
the scheme. The vorticity line deviates from the core, exits the vortex tube entirely, and wanders
within the flow field. By appealing to Robinson's ideal definition of a vortex we are able
to confirm that the predictor-corrector skeleton is the one that follows the core. The velocity
fields around the skeleton are consistent with nearly-circular streamlines in Robinson's char-
acterization; those around the vorticity line are eventually not.
3.3 Terminating the Vortex Skeleton
Vorticity lines extend until they intersect a domain boundary, but real vortices typically begin
and end inside the domain. Therefore, the algorithm must always be prepared to terminate a
given vortex skeleton. A simple condition for termination occurs when the vortex cross-section (discussed in section 4) has zero area. As fig. 3 shows, the reconstructed vortex tubes taper down to their endpoints, where the cross-section vanishes.
Figure 3. Vorticity line (light) compared to predictor-corrector line (dark). Note that the vorticity line exits from the vortex tube while the predictor-corrector skeleton line follows the core.
The predictor-corrector method
is not guaranteed to terminate. On rare occasions the skeleton can enter a nearly-circular loop (fig. 4).
We have observed this undesirable phenomenon in a small fraction of the skeletons. The spirals seem
to occur in the vicinity of vortex junctures, but we have no reason to believe that the vortex core truly
enters a closed loop. There are examples of this phenomenon in other works, although those examples
do not receive any particular discussion. Figs. 5 and 6 show similar situations in other simulated flows
[19] [10] where a vorticity line enters a tight spiral. In order to guarantee termination, we exploit our
knowledge of the spatial extent of the 3D computational domain and limit the total arclength along a
skeleton to be about twice the longest grid dimension. By guaranteeing termination in this way, we
find that an average time step requires about 1400 Cray-seconds in calculating the 3D numerical
simulation of the flow and about 20 Cray-seconds in identifying the vortex cores and calculating
their cross-sections.
Figure 4. Vortex skeleton at time 194.4 located by the predictor-corrector method. Note the spiral in the center.
Figure 5. Vorticity lines in a shear layer near a wall. Note the spiral near the top. From Jiminez and Moin, JFM v. 225, p. 235. © Cambridge University Press 1991. Reprinted with the permission of Cambridge University Press.
Figure 6. Tangle of vorticity lines in a turbulent flow. Note the spiral near the bottom. From Kim and Moin, JFM v. 162, p. 343. © Cambridge University Press 1986. Reprinted with the permission of Cambridge University Press.
3.4 Filaments That Connect Vortex Tubes
Sometimes it is useful to continue the skeleton beyond the end of the vortex tube. For
instance, if a low-intensity region exists between high-intensity regions of the same vortex,
then the low-intensity region might not satisfy the criteria for a finite cross section. One would
like to see the connective filament between two strong vortices even if the connection does not
satisfy the requirements for a non-zero cross-section. The criteria for determining the cross-section
can be made more generous in order to capture the connection, but this strategy does
not solve the problem: in addition to capturing the weak connective vortex, we will also capture
unwanted low-intensity structures that may themselves possess regions that are weaker
still. Our resolution of this problem exploits the asymmetric nature of the predictor-corrector
method.
Because the predictor-corrector method follows the core of a vortex regardless of the criteria
used to define the vortex cross section, the vortex skeleton can be extended even when
the cross-sectional area of the vortex is equal to zero. The vortex of interest may either re-
intensify or dissipate; if the vortex re-intensifies then the continuation of the skeleton line will
provide a link between the two more-intense regions of the vortex. This link can be visualized
as a thread that connects the two disjoint regions. On the other hand, if the vortex dissipates,
the continuation of the skeleton line will wander through the flow field and eventually either
intercept a domain boundary or enter a new vortex. If a domain boundary is reached, then the
segments of the skeleton that lie outside the last-found vortex (having non-zero cross-section)
are discarded. Similarly a potentially connective filament is discarded if it enters a new vortex
from the side, rather than through one of the vortex endpoints.
To determine whether a new-found region of finite cross section is a continuation of the
original vortex or an entirely different vortex, we march the predictor-corrector scheme backwards
for the same number of steps taken since the previous region of nonzero cross section
was exited. Some possible scenarios are illustrated in figs. 7-9. In fig. 7, the skeleton line
leaves the first vortex tube at point p 1 and continues for n steps until it encounters the second
vortex tube at point p 2 . The predictor-corrector scheme is then marched backwards n steps
from p 2 to p 3 . The distance between points p 1 and p 3 is small relative to the distance between p 1 and p 2 (a 10-percent criterion is used); hence, the link between p 1 and p 2 is most probably
a low-intensity vortex, and we retain the connective thread between these vortex tubes.
However, in fig. 8 the vortex tube dissipates beyond point p 4 , and the continuation of its
skeleton lacks clear direction and wanders through the flow field. The line intercepts another
vortex tube at p 5 after m steps. The predictor-corrector method is marched backwards m steps
from p 5 to p 6 . Initially, the reverse integration retraces the forward integration, but halfway
between p 5 and p 6 the two lines diverge rapidly and become uncorrelated. The distance from
p 4 to p 6 is a large fraction of the distance from p 4 to p 5 , so the algorithm concludes that the
vortex tube intersected at p 5 is different from the vortex tube that ends at p 4 . The points on the
vortex skeleton line that connect the two tubes are discarded, and the vortex skeleton is terminated.
Finally, in fig. 9, the continuation of the skeleton line of the vortex tube that ends at point
p 7 intersects the side of another vortex tube (shown as a wireframe) and is immediately carried
to the pressure minimum at p 8 . The reverse integration for this case follows along the axis
Figure 7. Forward integration from p1 to p2 gives approximately the same path as reverse integration from p2 to p3. Points p1 and p2 are therefore connected by a weak vortex.
Figure 8. Forward integration from p4 to p5 differs markedly from reverse integration from p5 to p6. The two vortex tubes are not connected; the core of the vortex on the left terminates at p4.
Figure 9. Integration from p7 to p8 intersects side of vortex tube (wireframe). Reverse integration from p8 to p9 follows axis of the new vortex, away from original tube. The vortex tubes are not connected.
of the new vortex tube away from the original vortex. The point p 9 is far from p 7 ; hence, the
two vortex tubes are distinct from each other and the line connecting them is discarded.
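A sketch of this test follows; step(x, direction) stands for one predictor-corrector advance (direction = +1 forward, -1 backward along the vorticity), and both the helper name and the form of the 10-percent ratio argument are our own assumptions.

    import numpy as np

    def tubes_connected(p1, n_steps, step, ratio=0.10):
        """Return True if the filament leaving a tube at p1 plausibly connects to the
        tube reached after n_steps of forward integration."""
        x = np.asarray(p1, dtype=float)
        for _ in range(n_steps):
            x = step(x, +1)                    # forward march to p2
        p2 = x
        for _ in range(n_steps):
            x = step(x, -1)                    # reverse march back toward p1
        p3 = x
        # connected only if the reverse integration lands close to the departure point
        return np.linalg.norm(p3 - p1) <= ratio * np.linalg.norm(p2 - p1)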
3.5 Implementation Details
Optimal performance of the predictor-corrector technique requires careful attention to implementation
details. This section addresses issues that are important to the successful use of the
method. It is not exhaustive; additional details are provided by Singer and Banks [20].
Eliminating Redundant Seeds and Skeletons
Recall that seed points are chosen based on pressure and vorticity-magnitude, allowing multiple seeds
to generate a given vortex core. Sampling every grid point produces an overabundance of seed
points and hence a multitude of nearly-coincident vortex skeletons (fig. 10). These skeletons
each follow the same core, sampling it at different locations; yet one representative skeleton suf-
fices. The redundancies are eliminated when points inside a tube are excluded from the pool
of future seed points. We accomplish this by flagging any 3D grid cell in the computational
domain that lies within a spherical volume of a skeleton point. The constant term of a Fourier
representation of the cross-section's radius (see section 4.2) is taken to be the radius of the
spherical volume. A future candidate seed is ignored if it lies in a flagged cell.
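A simplified sketch of the flagging step is given below; it assumes a uniform cell spacing for brevity (the actual grid is rectilinear), and flag_cells_near_skeleton, origin, and spacing are our own names, not those of the original implementation.

    import numpy as np

    def flag_cells_near_skeleton(flags, skeleton_pts, radii, origin, spacing):
        """Mark every cell of the boolean array flags whose center lies inside the
        exclusion sphere of a skeleton point (radius = constant Fourier term)."""
        for p, r in zip(skeleton_pts, radii):
            lo = np.clip(np.floor((p - r - origin) / spacing).astype(int), 0, flags.shape)
            hi = np.clip(np.ceil((p + r - origin) / spacing).astype(int) + 1, 0, flags.shape)
            ii, jj, kk = np.mgrid[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
            centers = origin + spacing * np.stack([ii, jj, kk], axis=-1)
            inside = np.linalg.norm(centers - p, axis=-1) <= r
            flags[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] |= inside
        return flags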
Eliminating Spurious Feeders
A seed near the surface of the vortex tube can produce a "feeder" vortex skeleton that spirals
toward the vortex center. Intuitively, these seeds lie within grid cells that should have been
flagged but were missed because they lie slightly outside the spherical volumes of exclusion.
Examples of these feeders are illustrated in fig. 11. We eliminate feeders by taking advantage
of the fact that the predictor-corrector method is convergent to the vortex core. A feeder skel-
eton, begun on the surface of the tube, grows toward the core; by contrast, a skeleton growing
along the core does not exit through the surface of the tube. To validate a candidate seed p 0 ,
we integrate forward n steps to the point p n and then backward again by n steps. If we return very close to p 0 then the candidate was a "true" seed point. This is the same reverse-integration strategy that is used for establishing that a filament actually connects two vortical regions.
Figure 10. Multiple realizations of the same vortex tube from different seed points. Each seed point generates a slightly different skeleton line, although all the skeletons remain close to the vortex core.
Numerical Considerations for Interpolation
Neither the predictor nor the corrector step is likely to land precisely on a grid point; hence,
we must interpolate the pressure and vorticity within the flow field. A linear approximation of
the pressure gradient (the corrector step) will possess minima only at grid points. A three-point
quadratic interpolation can produce minima within grid cells, but a three-point interpolation
within a cell introduces bias toward one side or the other. To reduce any bias from the
interpolation, we use a four-point Lagrange interpolation (found in textbooks on numerical
computation) in each of the three coordinate directions. The high-order interpolation is justified
by the accuracy of the numerical simulation, which is spectral in the spanwise and wall-normal
directions (Fourier and Chebysheff, respectively) and fourth-order in the streamwise
direction. The interpolation scheme works quite well, although it is the most expensive step in
our implementation.
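For reference, the one-dimensional four-point Lagrange formula on a unit-spaced stencil, applied once per coordinate direction, looks as follows; the routine names are ours and a uniform grid is assumed for brevity, whereas the simulation grid is rectilinear.

    import numpy as np

    def lagrange4_weights(t):
        """Weights of 4-point Lagrange interpolation at fractional offset t in [0, 1],
        for samples placed at -1, 0, 1, 2 (unit spacing)."""
        return np.array([
            -t * (t - 1.0) * (t - 2.0) / 6.0,
            (t + 1.0) * (t - 1.0) * (t - 2.0) / 2.0,
            -(t + 1.0) * t * (t - 2.0) / 2.0,
            (t + 1.0) * t * (t - 1.0) / 6.0,
        ])

    def interpolate_scalar(field, pos):
        """Interpolate a 3D scalar field at fractional grid position pos = (i+u, j+v, k+w),
        with 1 <= index < n-2 in every direction."""
        idx = np.floor(pos).astype(int)
        u, v, w = np.asarray(pos) - idx
        wx, wy, wz = lagrange4_weights(u), lagrange4_weights(v), lagrange4_weights(w)
        block = field[idx[0]-1:idx[0]+3, idx[1]-1:idx[1]+3, idx[2]-1:idx[2]+3]
        return np.einsum('i,j,k,ijk->', wx, wy, wz, block)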
The interpolation scheme makes the predictor-corrector method at least first-order accu-
rate: skeleton points are located to within the smallest grid dimension. This ensures that, on
data sets with well-resolved vorticity and pressure, the method successfully locates vortex
cores.
The vorticity integration can be performed with a variety of methods. First, we used a fourth-order
Runge-Kutta approach. This produced satisfactory results; however, step-size optimization
was difficult to automate. Instead, we developed a technique whereby the point-to-point
distance in the vorticity integration is always equal to the smallest dimension of the local grid
cell. The new point location is found by advancing this distance in the direction of the local vorticity vector. This procedure ensures that successive points will not be more than one grid cell apart, so that if the original calculation is well resolved, then the vorticity-line calculation will also be sufficiently resolved. The procedure also reduces the chance of wasting many calculations inside a single grid cell.
Figure 11. Feeders merge with a large-scale hairpin vortex. Three points that satisfy the threshold criteria lie on the edge of the vortex tube. Their trajectories curve inward toward the core and then follow the main skeleton line.
Our implementation of the pressure-minimum correction scheme uses the method of steepest
descent to find the local pressure minimum in the plane perpendicular to the vorticity vector.
The smallest grid-cell dimension is used as a local length-scale to march along the gradient
direction.
The corrector phase can be iterated in order to converge to the skeleton, but such convergence
is not guaranteed. We therefore limit the angle that the vorticity can change during a
repeated iteration of the corrector phase, requiring that the cosine of the angle between the
predicted and corrected vorticity be at least 0.9. In case it is not, we simply quit the corrector
phase. We could choose a smaller step-size and re-try, but we have not found this to be necessary.
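The guard on the corrector iteration can be expressed as a one-line test; the cosine bound of 0.9 corresponds to an angle of roughly 26 degrees, and the function name below is our own.

    import numpy as np

    def corrector_may_continue(w_pred, w_corr, min_cos=0.9):
        """True if the vorticity at the corrected point stays within acos(min_cos)
        of the vorticity at the predicted point."""
        c = np.dot(w_pred, w_corr) / (np.linalg.norm(w_pred) * np.linalg.norm(w_corr))
        return c >= min_cos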
4 Finding the Cross-section
Having produced skeletons that follow vortex cores, we must next determine the shapes of the vortices
through which they pass. A vortex generally assumes an elongated shape which is well-approximated
locally by a cylinder. Our goal is to determine the cross-sections of the vortex tubes in planes perpendicular
to the core. Since it is unclear how to precisely define which points lie in a vortex (see section
2), it is also unclear how to determine the exact shape of a vortex tube's cross-section. Determining an
appropriate measure of the vortex cross-section has been one of the more difficult practical aspects of
this work.
A point on the vortex skeleton serves as a convenient center for a polar coordinate system
in the plane perpendicular to the skeleton line. We have chosen therefore to characterize the
cross-section by a radius function. Note that this scheme correctly captures star-shaped cross-
sections. Cross-sections with more elaborate shapes are truncated to star shapes, with discontinuities
in the radius function (fig. 12). In practice this choice does not seem to be very
restrictive, as section 4.2 indicates.
In examining the cross-section plane there are two important questions to address. First,
what determines whether a point in the plane belongs to the vortex tube? Second, how should
the shape of the tube's cross-section be represented? This section summarizes the strategies
that we found to be successful.
4.1 Criteria for Determining Membership
As the survey demonstrated, there are many heuristics for deciding whether a point is a member of a
vortical structure. Most techniques appeal to some scalar quantity derived from flow quantities: a certain
threshold of that quantity determines membership in a vortex. Since the predictor-corrector
method relies on pressure and vorticity, we wish to re-use these quantities for determining membership
in a vortex. For massive datasets there is a significant penalty for storing or calculating additional scalar
quantities.
For isolated vortices, a threshold of pressure provides an effective criterion to determine
whether a point belongs to a vortex. But when two or more vortices interact, their low-pressure
regions merge and distort the radius estimate of any single vortex. This difficulty is
resolved if the angle between the vorticity vector on the skeleton line and the vorticity vector
at any radial position is restricted. Any angle greater than 90 degrees indicates that the fluid at
the radial position is rotating in the direction opposite to that in the core. We have found that
the 90-degree restriction works well in combination with a low-pressure criterion for the vortex
edge.
For the actual computation of the radial distance, the pressure and the vorticity are sampled
along radial lines, emanating from the skeleton, lying in the perpendicular plane. We step
along each radial line until a point is reached that violates the vorticity or the pressure-thresh-
old criterion.
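The radial march can be sketched as follows, with pressure_at and vorticity_at again standing for interpolation helpers, p_edge the pressure threshold at the vortex edge, dr the radial step, and r_max a safety bound; all of these names are assumptions for illustration.

    import numpy as np

    def cross_section_radii(center, w_core, pressure_at, vorticity_at,
                            p_edge, dr, r_max, n_rays=360):
        """Sample the cross-section radius along n_rays directions in the plane
        perpendicular to the core vorticity w_core.  A ray stops where the pressure
        exceeds p_edge or the local vorticity points more than 90 degrees away
        from the core vorticity."""
        w_hat = w_core / np.linalg.norm(w_core)
        trial = np.array([1.0, 0.0, 0.0])
        if abs(np.dot(trial, w_hat)) > 0.9:                 # pick a non-parallel helper axis
            trial = np.array([0.0, 1.0, 0.0])
        e1 = np.cross(w_hat, trial)
        e1 /= np.linalg.norm(e1)
        e2 = np.cross(w_hat, e1)                            # (e1, e2) span the cut plane

        radii = np.zeros(n_rays)
        for m in range(n_rays):
            theta = 2.0 * np.pi * m / n_rays
            d = np.cos(theta) * e1 + np.sin(theta) * e2
            r = 0.0
            while r < r_max:
                x = center + (r + dr) * d
                if pressure_at(x) > p_edge or np.dot(vorticity_at(x), w_hat) < 0.0:
                    break
                r += dr
            radii[m] = r
        return radii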
Figure 12. Representation of the cross-section in polar coordinates. The star-shaped interior (gray) of a non-convex curve (black) is represented by a radius function (bottom). In general, the vortex cross-sections have continuous, periodic radius functions suitable for Fourier representation.
4.2 Representation of the Cross-section
If the radius of the cross-section were sampled at 1-degree increments, then 360 radial distances
(and a reference vector to define the 0-degree direction) would be associated with each
skeleton point. That is a great deal of data to save for each point of a time-varying set of vortex
skeletons. We have found that an average radius is sufficient to describe the cross-section
of an isolated vortex tube.
When vortices begin to interact, the cross-section becomes non-circular and so the average
radius does not provide a good description of its shape. A truncated Fourier representation of
the radial distance provides a convenient compromise between the average radius and a full
set of finely-sampled radial locations. The series is easy to compute, easy to interpret, and
allows a large range of cross-sectional shapes. In our work, we keep the constant term, the first
and second sine and cosine coefficients, the vorticity w, and a unit reference vector x that defines
the 0-degree direction in the cross-sectional plane. The cross-sectional radius is thus parametrized by r(θ) ≈ a_0 + a_1 cos θ + b_1 sin θ + a_2 cos 2θ + b_2 sin 2θ, with θ measured from the reference vector x.
In general, the magnitudes of the last two coefficients (a 2 and b 2 ) are comparatively small, indicating
that the neglected terms are not significant. That observation also validates our assumption that the
cross-section is well-represented by a continuous polar function.
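Given radii sampled at uniform angles, the five retained coefficients follow from the usual discrete Fourier sums; the sketch below uses our own function names and is not the original code.

    import numpy as np

    def fourier_cross_section(radii):
        """Fit r(theta) ~ a0 + a1 cos(theta) + b1 sin(theta) + a2 cos(2 theta) + b2 sin(2 theta)
        to radii sampled at uniform angles; returns (a0, a1, b1, a2, b2)."""
        r = np.asarray(radii, dtype=float)
        theta = 2.0 * np.pi * np.arange(r.size) / r.size
        a0 = r.mean()
        a1 = 2.0 * np.mean(r * np.cos(theta))
        b1 = 2.0 * np.mean(r * np.sin(theta))
        a2 = 2.0 * np.mean(r * np.cos(2.0 * theta))
        b2 = 2.0 * np.mean(r * np.sin(2.0 * theta))
        return a0, a1, b1, a2, b2

    def reconstruct_radius(coeffs, theta):
        """Evaluate the truncated series at angle theta, measured from the reference vector."""
        a0, a1, b1, a2, b2 = coeffs
        return (a0 + a1 * np.cos(theta) + b1 * np.sin(theta)
                   + a2 * np.cos(2.0 * theta) + b2 * np.sin(2.0 * theta))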
Fig. 13 illustrates a single cross section of a vortex extracted from direct numerical simulation
data. The shaded region is the interior of the vortex tube, sampled at 1-degree intervals.
The thin line is a circle, centered at the skeleton, showing the average radius of the vortex tube. The thick line is the truncated Fourier series representation of the vortex cross-section, providing a better approximation than the circle.
Figure 13. Comparison of different ways to represent the cross-section of a vortex tube. The shaded region is the finely-sampled radius function. The thin line is an approximating circle. The thick line is a 5-term Fourier series.
In our time-varying data a single vortex develops into 44 vortices over the course of 231 time
steps. In total there are 3,584 individual cores and 365,839 positive-area cross-sections. We calculated
the relative energy represented by the last Fourier coefficients according to the fraction E_rel = (a_2^2 + b_2^2) / a_0^2. In 87% of the cross-sections, the relative energy E_rel due to the last two coefficients accounts for less than one-tenth of the total energy.
5 Data Reduction and Reconstruction
Time-varying volumetric datasets generally consume vast amounts of storage. This section is
concerned with the problem of reducing the data size to permit an interactive examination of a
time-varying flow. The typical non-interactive avenue for producing an animation of 3D volumetric
structures is to extract isosurfaces at each time step, generate an image frame, and
record each frame to videotape or to disk. The individual datasets may take a long time to
retrieve from remote mass-storage devices and the isosurfaces may take a long time to extract,
but this pre-processing step is incurred only once to produce an animation. Replaying the animation
on a workstation presents other problems. A two-minute animation, at 30 frames per second, requires 3600 frames. A full color frame, at a resolution of only 640 × 480 pixels, requires about a megabyte. The total of 3.6 gigabytes of storage exceeds the range of current workstation memories. The animation can be compressed using MPEG, but decoding and displaying it at 30 frames per second is a challenge. Even if the animation could be replayed con-
veniently, the general strategy of extracting isosurfaces from massive remotely-stored
volumetric datasets does not promise interactive exploration of the time-varying flow in the
foreseeable future.
There are alternative techniques for compressing the volumetric data and even for rendering
images from the compressed format. Ning and Hesselink [21] report compression factors
of about 5-fold by using vector quantization. The technique improves the performance of their
volume renderer to about one minute per frame. Shen and Johnson [22] use frame-to-frame
differencing, with a fixed viewpoint, to achieve compression factors up to about 700-fold at a
rendering rate of better than one second per frame. We desire a scheme that offers both substantial data reduction to permit local storage and fast rendering to permit real-time interaction.
There are techniques that reduce the number of polygons in a surface representation of a
solid, as opposed to rendering it volumetrically. By visualizing only the polygonalized boundaries
of vortex tubes, one benefits from the fast rendering speed of the graphics hardware, as
compared with a slower volume-rendering of the vortex interiors. Hoppe [23] reduced the
polygon count of unstructured meshes by factors of 10 to 16. Turk [24] reduced the polygon
count of unstructured meshes by factors of 10 to 18. Schroeder [25] used multiple passes to
reduce the polygon count by factors of up to 10. These techniques are designed to apply to
somewhat arbitrary surface shapes. In the case of vortex tubes we exploit their elongated
cylindrical shape to achieve even more aggressive data-reduction using the Fourier series. In
addition we are able to specify, at run-time, the polygonal resolution of the reconstructed vortex
tubes. The details of reduction and reconstruction are described below.
We performed a flow simulation using Cray computers over the course of two calendar
years, using about 2000 Cray2 hours of processing time. The numerical grid grows with the
size of the evolving flow structures from an initial grid size of 301 × 121 × 41 (in the stream-
wise, wall-normal, and spanwise directions) to a final grid size of 461 × 161 × 275. Each grid
point holds 1 data-word for pressure and 3 data-words for vorticity. A Cray word is actually 8 bytes,
but 4 bytes per word would be adequate. The storage needs for each time step range from 24 mega-bytes
to 326 megabytes, assuming a 4-byte word. The entire set of 3D grids requires at least 45
gigabytes of storage. By using vortex skeletons with Fourier-series cross-sections we are able to
reduce the data significantly and then reconstruct the vortex tubes locally on a workstation.
5.1 Data Reduction
In our DNS data, a typical vortex skeleton is a polyline composed of 30 to 200 samples. The
time steps in the numerical simulation are non-uniform: the non-integer time increment is
determined by bounding the amount of integration error it introduces. The vortex tubes pictured
in fig. 7 are calculated at time step 152.8 and contain 1397 skeleton points. Each sample
in a vortex skeleton requires 60 bytes of data to represent its position, tangent, reference vec-
tor, cross-section coefficients, and velocity magnitude. Thus a reduction from 227 MB to
84 KB is achieved at this particular time step, a 3000-fold improvement over the volumetric
data size.
Fig. 14 shows the reduction factors for the vortices over a range of time steps. At the end
of the simulation the flow becomes fully turbulent and the 3D grid contains many interacting
vortices over a large sub-volume of the computational domain. Even so, the technique continues
to reduce the dataset by factors of one to three thousand. The vortex data from the entire
simulation can be reduced from the 45 GB volumetric grid to a 24 MB skeletal representation.
This is an average reduction factor of about 1800.
5.2 Faithful Reconstruction
The significant data-reduction that vortex skeletons provide does not come without cost.
There is still the matter of reconstructing polygonal tubes from the skeletons. If the tubes have
circular cross-sections, they are generalized cylinders. Bloomenthal gives a clear exposition
of how to reconstruct a generalized cylinder from a curve through its center [26]. The coordinate
system of the cross-section usually twists from one skeleton point to the next. The key
issue is how to keep the rate of rotation about the skeleton's tangent vector small. Excessive
twist is visible in the polygons that comprise the tube: they become long and thin and their
interiors approach the center of the tube (fig. 15). Our tubes are not cylinders: the additional terms
in the Fourier series produce non-circular cross-sections. But a coordinate frame that twists along the
skeleton will produce the same visible artifacts in a polygonal mesh.
Figure 14. Reduction factors achieved using vortex skeletons. Horizontal axis indicates time step in the numerical simulation of an unsteady flow. Vertical axis indicates ratio of the size of original 3D grid to the size of the skeletal representation of vortices.
Figure 15. A quadrilateral mesh connects consecutive cross-sections (each with 8 samples) in a tube. On the left, 20 degrees of twist between cross-sections causes the mesh to skew. On the right, the cross-section at the back has samples which are aligned with those at the front.
In order to reduce twisting of the coordinates, we project the coordinate bases from one
cross-section onto the next cross-section (fig. 16). Let p k be a point in the vortex skeleton with normal n k and binormal b k . The tube's cross-section lies in the plane L k defined by coordinate axes n k
and b k . The following point p k+1 has cross-section plane L k+1 . We project n k onto plane L k+1 to produce
a new normal vector n k+1 . This produces a new coordinate system that has not twisted compared
to its predecessor. The initial normal n 0 and binormal b 0 can be chosen in a variety of ways. We use
the component of (1, 1, 1) perpendicular to w as an initial choice of the normal vector n 0 , where w is the vorticity and the coordinates
tuple corresponds to the (stream, wall-normal, spanwise) directions. In the rare case that (1, 1, 1) and
w are aligned, we use (1, 0, 0) as a second choice to produce the normal vector. The new normal vector
might be different from the reference vector (which indicates the 0-degree direction) for
the Fourier representation of the cross-section. To reconstruct the cross-section, we phase-shift
the angle in the Fourier series by the angular difference between the normal and the reference
vector.
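The projection step amounts to removing from the previous normal its component along the new tangent; a minimal sketch (our own names, unit tangent assumed) is:

    import numpy as np

    def propagate_frame(n_prev, t_next):
        """Project the previous normal onto the cross-section plane at the next skeleton
        point (the plane perpendicular to the unit tangent t_next).  The binormal
        completes the right-handed frame, so the frame does not twist about the core."""
        n_new = n_prev - np.dot(n_prev, t_next) * t_next
        n_new = n_new / np.linalg.norm(n_new)
        b_new = np.cross(t_next, n_new)
        return n_new, b_new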
In general, 20 to 80 samples suffice to reconstruct a cross-section of acceptable image-
quality. We keep the number of cross-sectional samples constant along a reconstructed vortex tube so
that the tube can be represented as a quadrilateral mesh. Many graphics libraries have drawing routines
that are optimized for quadrilateral meshes.
Our original 3D grids, over 231 time steps, require at least 45 GB of storage. But in the reconstructed
vortex tubes there are only 404,428 skeleton points. A point on the polygonal mesh requires roughly 28 bytes (for position, normal, and color). If each cross-section has 20 samples, the entire polygonal-
ized, time-varying dataset requires about 220 MB of storage, which is easily within the reach of large-
memory workstations.
Figure 16. Basis vector n k at a point p k on a curve is projected onto the cross-section plane L k+1 to produce a new basis vector n k+1 .
5.3 Enhanced Reconstruction
Sometimes there is good reason for a "reconstruction" that is not faithful to the original shape
of the vortex tube. The faithful reconstruction in fig. 1e does not convey the spiraling motion
along the surface of the vortex tube. We experimented with different methods of visualizing
the velocities on the tube itself. One helpful technique is to create a texture on the surface,
drawing curves to indicate the helical flow. This visualization is enhanced dramatically when
the curves are displaced inward to produce grooves.
Fig. 17 demonstrates this technique on a single hairpin vortex. The grooves follow integral
curves of the surface-constrained velocity vectors. That is, a curve is developed on the surface
of the tube by projecting the velocity vectors onto the tube surface and integrating. The three
curves in the figure begin from initial trajectories that are shifted in phase by increments of
120 degrees. In an informal survey of a dozen colleagues, we found that none could estimate
the amount of helical motion in a faithful reconstruction (as in fig. 1e) of a vortex tube; after
all, there are no visual indications of the vortical motion. On the other hand, the same subjects
instantly identified the direction and amount of rotation in the enhanced image of fig. 17. The
model in the figure uses over 250,000 polygons to represent the vortex. This polygon count is
prohibitively large for contemporary graphic systems to display in real time. For a static visu-
alization, however, a large polygon count is reasonable in the trade-off between image quality
and rendering speed. As graphics architectures begin to deliver millions of polygons per second
[27], we expect that such enhanced reconstructions of flow features will become more
common.
Figure
17.
Enhanced reconstruction of a
hairpin vortex tube. The grooves
follow integral curves of velocity,
constrained to follow the surface of
the tube.
5.4 Interactive Time-Varying Visualization
The predictor-corrector scheme was developed in order to visualize vortical structures in a time-varying
turbulent flow. The scheme has the added benefit that it represents the vortex tubes very efficiently.
We wish to visualize and explore the flow dynamically; to that end we have developed an interactive
application called "Tracktur" [28] which allows investigation of the vortices as they evolve in a flow.
There are other systems that have been developed for similar purposes [29] [30]. Tracktur differs from
them by exploiting the data-reduction that the predictor-corrector scheme provides in order to display
vortices in an unsteady flow. In addition, Tracktur provides 3D head-tracking, stereo display, and 3D
hand-tracking to let a viewer navigate among the vortices and probe quantitative values within the
flow. The system sustains about 15 updates per second on a full-screen display of about 8000 polygons
using the Silicon Graphics Onyx with Reality Engine 2 graphics.
Our ultimate goal is to better understand how a turbulent spot develops. Since this is a complex
and dynamic process, we expected that a time-varying visualization tool like Tracktur would provide
significant support. Other researchers report modest success in applying visualization systems to study
scientific problems of interest to them [31] [32]. By using Tracktur we have discovered a backward-
tilting S-shaped vortex head (fig. 18) that had been seen experimentally in a similar flow (fig. 19) [1],
but had not been identified before in the flow data we were investigating.
6 Future Work
There are two important issues in data-reduction and reconstruction still to be addressed. First,
we would like to minimize the number of samples along a vortex skeleton. Where the vortex
skeleton has high curvature or where the cross-section changes shape quickly, many samples
are required to produce an accurate reconstruction. But most vortex tubes have long, straight
portions with nearly-circular cross-sections of nearly-constant radius. This characteristic
should permit us to represent the vortex tube with fewer samples along its skeleton.
Figure 18. S-shaped vortex head at time 184.6 displayed in the Tracktur system. The white stripes on the flat plate mark units in the computational domain.
The second issue concerns interpolation. In reviewing the development of a vortical flow,
a scientist may be especially interested in narrowing the interval of animation to only a few of
the original time steps. It would be helpful to generate in-between frames from the given data.
One could interpolate the original 3D grids to extract interpolated vortex skeletons, but that
would require a great deal of data communication and computation. Interpolating between the
skeletal representations, on the other hand, could be done in memory. Unfortunately, it is difficult
to interpolate vortex tubes as they appear, branch, merge, and disappear over time. Other
researchers have addressed the issue of matching corresponding isosurfaces in unsteady flows
[33]. Matching and interpolating the skeletal representation remains as future work. Concerning
the enhanced vortex reconstruction, it may be possible to animate the spiral grooves by
advecting the displacement coordinates according to the flow velocities. Max, Crawfis, and
Williams have used a similar technique to visualize wind velocities [34].
7 Conclusions
The innovative use of a two-step predictor-corrector algorithm has been introduced to identify
vortices in flow-field data. Unlike other approaches, our method is able to self-correct toward
the vortex core even in a turbulent shear layer. The principle of using the vorticity vector field
to predict the location of the next point and the gradient of the scalar pressure field to correct
this position distinguishes this method from others. The theoretical justification for the technique
is that vortices are generally characterized by large magnitudes of vorticity and low
pressures in their core. The presence of these two characteristics in a cross-section defines the
shape of the vortex interior.
Figure 19. S-shaped vortex head in an experimental shear flow over a flat plate. Top: schematic diagram of the profile, showing the induced velocity and the edge of the boundary layer. From Acarlar and Smith, JFM v. 175, p. 71. © Cambridge University Press 1987. Reprinted with the permission of Cambridge University Press. Bottom left: dye injected into the flow develops into an upright head. Image courtesy of C. R. Smith. Bottom right: intensity gradients of the image at left produce a bas-relief image.
This paper discusses a number of novel approaches that we have developed to deal with
matters such as eliminating redundant vortices, eliminating feeders, and representing the
cross-section of a vortex tube. Sample extractions of vortices from various flow fields illustrate
the different aspects of the technique.
The vortex skeletons are an economical way to represent vortical structures within a flow,
offering data-reduction on the order of more than a thousand-fold even in a complex flow.
This presents an opportunity to store hundreds of frames of vortex geometry in workstation
memory. As a proof of concept, we implemented a system that lets a user interactively explore
an evolving turbulent spot. Where interactivity is not important, a vortex tube can be
enhanced during reconstruction by modelling grooves in the surface in order to help display
the dynamics of vortical flow in a static image.
Acknowledgments
The images in fig. 1a-d were rendered on a Silicon Graphics Indigo workstation using the
FAST visualization system. The images in figs. 2, 4, and 7-11 were rendered on a Silicon
Graphics Indigo 2 using the Explorer visualization system. Figs. 1e and 18 were produced
using Tracktur. The image in fig. 17 was rendered on an Intel Paragon using PGL (Parallel
Graphics Library) [35].
We thank Gordon Erlebacher for his helpful insights regarding vortex identification
schemes. We thank Greg Turk and the reviewers for their suggested improvements to this
paper.
References
"A study of hairpin vortices in a laminar boundary layer. Part 2. Hairpin vortices generated by fluid injection,"
"Vortex Methods for Flow Simulation,"
"Vortex Tubes in Turbulent Flows:
"Coherent motions in the turbulent boundary layer,"
"A review of vortex structures and associated coherent motions in turbulent boundary layers,"
"A review of quasi-coherent structures in a numerically simulated boundary layer,"
"A general classification of three-dimensional flow fields,"
"Identification and classification of topological structures in free shear flows,"
"The structure of the vorticity field in turbulent channel flow. Part 1. Analysis of instantaneous fields and statistical correlations,"
"The structure of the vorticity field in turbulent channel flow. Part 2. Study of ensemble-averaged fields,"
"Intense vorticity in isotropic turbulence,"
"Streamlines, Vorticity Lines, and Vortices,"
"A Tool for Visualizing the Topology of Three-Dimensional Vector Fields,"
"FAST: A Multi-processed Environment for Visualization of Computational Fluid Dynamics,"
"Vortex dynamics and the production of Reynolds stress,"
"An algorithm for space recognition and time tracking of vorticity tubes in turbulence,"
"Emergence of coherent patterns of vortex stretching during reconnection: A scattering paradigm,"
"Metamorphosis of a Hairpin Vortex into a Young Turbulent Spot,"
"The Minimal Flow Unit in Near-wall Turbulence,"
"A Predictor-Corrector Scheme for Vortex
"Fast Volume Rendering of Compressed Data,"
"Differential Volume Rendering: A Fast Volume Visualization Technique for Flow Animation,"
"Mesh Optimization,"
"Re-tiling Polygonal Surfaces,"
"Decimation of Triangle Meshes,"
"Calculation of Reference Frames Along a Space Curve,"
"PixelFlow: High-Speed Rendering Using Image Composition,"
"Tracking a Turbulent Spot in an Immersive Environment,"
"Visualization of Time-Dependent Flow Fields,"
"Visualization of Turbulent Flow with Particles,"
"Case Study: Tokamak Plasma Turbulence Visualization,"
"Case Study: Visualizing Classical Problems in CFD,"
"Visualizing Features and Tracking Their Evolution,"
"Visualizing Wind Velocities by Advecting Cloud Textures,"
"Parallel Polygon Rendering for Message-Passing Architectures,"
Keywords: vortex identification; data reduction; vortex visualization; feature extraction; numerical flow animation; vortex core; numerical flow visualization
Quaternion Frame Approach to Streamline Visualization

Abstract: Curves in space are difficult to perceive and analyze, especially when they form dense sets as in typical 3D flow and volume deformation applications. We propose a technique that exposes essential properties of space curves by attaching an appropriate moving coordinate frame to each point, reexpressing that moving frame as a unit quaternion, and supporting interaction with the resulting quaternion field. The original curves in three-space are associated with piecewise continuous four-vector quaternion fields, which map into new curves lying in the unit three-sphere in four-space. Since four-space clusters of curves with similar moving frames occur independently of the curves' original proximity in three-space, a powerful analysis tool results. We treat two separate moving-frame formalisms, the Frenet frame and the parallel-transport frame, and compare their properties. We describe several flexible approaches for interacting with and exploiting the properties of the four-dimensional quaternion fields.

1 Introduction
We introduce techniques and tools for visualizing
streamline data that are based on the differential
geometry of 3D space curves. Intrinsic
properties of space curves give rise to scalar fields
over the curves such as the curvature and torsion.
A moving coordinate frame on a curve is a tensor
field that is equivalent to a quaternion field; either
may be understood as the solution to a set of differential
equations driven by the intrinsic scalar
fields.
Our fundamental thesis is that quaternion
frame coordinates are useful for exposing the similarities
and differences of sets of streamlines.
Good analytic and visual measures for revealing
similarities of curve shapes are rare. Because of
the existence of a uniform distance measure in the
quaternion space that we use, orientation similarities
in the evolution of flow fields appear automatically
in meaningful spatial groups. Identification
of these similarities is useful for applications
such as finding repeating patterns and
related curve shapes, both on single curves and
within large collections of curves. Conversely, if
a large set of nearly-identical curves contains a
small number of significant curves that differ from
their neighbors due to subtle changes in their
frame orientations, our method will distinguish
them.
We study two distinct moving coordinate
frames that may be assigned to curves in three-
space. One is the classic Frenet frame, also
called the Frenet-Serret frame (see, e.g., [9, 10]),
which is defined locally by the tangent, normal
and binormal at each point of each curve; the
other is the parallel-transport frame (see, e.g.,
Bishop [4]), which retains the tangent vector,
but uses a nonlocal approach borrowed from the
parallel-transport methods of differential geometry
to compute the frame components in the plane
perpendicular to the curve. All such frames can
be recast into quaternion frame coordinates.
Orientation spaces and their relationship to
quaternions are described in Altmann [2]; an
interesting approach to the visualization of the
properties of quaternions was recently given by
Hart, Francis, and Kauffman [19]. Systematic approaches
for representing clusters of orientations
in 3D spaces of angles have been suggested, for
example, by Alpern et al. [1]. Among previous
approaches to visualizing the geometry of space
curves, we note the work of Gray [7, 10], which exploits
the curvature and torsion scalar fields on a
curve for visualization purposes; this method extends
naturally to higher-dimensional manifolds
with well-defined local curvature. We will give
some examples of the application of curvature and
torsion fields for completeness here, but will not
pursue this approach in detail.
The use of quaternion frames in a 4D display
was proposed as a visualization technique
for stream manifold characteristics in Hanson and
Ma [17]. The current article is based on the
concepts of the latter work, and includes additional
results on the comparative properties of
the Frenet and parallel-transport frames, as well
as further work on interactive methods.
2 The Differential Geometry of Space
Curves
Dense families of space curves can be generated
by many applications, ranging from time-dependent
particle flow fields, to static streamlines
generated by integrating a volume vector
field, to deformations of a solid coordinate
grid. Our fundamental approach singles out space
curves, although variations could be used to treat
individual point frames (see [1]), stream surfaces
(see [20]), and orientation differences (which
are themselves orientation fields) as well. Thus
we begin with the properties of a curve ~ x(t) in
3D space parameterized by the unnormalized arc
length t. If ~ x(t) is once-differentiable, then the
tangent vector at any point is
$\vec{T}(t) = \frac{d\vec{x}(t)/dt}{\left\| d\vec{x}(t)/dt \right\|}.$
The standard arc-length differential is typically expressed as
$\frac{ds}{dt} = \left\| \frac{d\vec{x}(t)}{dt} \right\| = \left( \frac{d\vec{x}(t)}{dt} \cdot \frac{d\vec{x}(t)}{dt} \right)^{1/2}.$
In practice, we never have smooth curves in numerical
applications, but only piecewise linear
curves that are presumed to be approximations
to differentiable curves; thus we might typically
take, for a curve given by the set of points $\{\vec{x}_i\}$, the finite difference
$\vec{x}\,'(t_i) \approx \vec{x}_{i+1} - \vec{x}_{i-1},$
or any corresponding formula with additional
sampling points and desirable symmetries. We
use a five-point formula to get a smoother result;
one could also produce finer intermediate states
by spline interpolation.
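The text does not specify which five-point formula is used; the following Python sketch (an illustration only, with the standard fourth-order central-difference stencil assumed and simple differences at the endpoints) estimates unit tangents for a sampled curve.

    import numpy as np

    def unit_tangents(points):
        # points: (n, 3) array of curve samples x_i; returns (n, 3) unit tangents.
        # Interior points use the five-point stencil
        #   x'(t_i) ~ (-x[i+2] + 8 x[i+1] - 8 x[i-1] + x[i-2]) / 12   (an assumed choice)
        x = np.asarray(points, dtype=float)
        n = len(x)
        if n < 2:
            return np.zeros_like(x)
        d = np.zeros_like(x)
        for i in range(n):
            if 2 <= i <= n - 3:
                d[i] = (-x[i + 2] + 8 * x[i + 1] - 8 * x[i - 1] + x[i - 2]) / 12.0
            elif i == 0:
                d[i] = x[1] - x[0]
            elif i == n - 1:
                d[i] = x[-1] - x[-2]
            else:  # second and next-to-last samples: plain central difference
                d[i] = x[i + 1] - x[i - 1]
        norms = np.linalg.norm(d, axis=1, keepdims=True)
        return d / np.where(norms > 0, norms, 1.0)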
If the curve is locally straight, i.e., $\vec{x}\,''(t) = 0$, then there is no locally-determinable coordinate frame component in the plane normal to $\vec{T}$; a non-local definition must be used to decide on the remainder of the frame once $\vec{T}$ is determined. Below, we formulate our two alternate
coordinate frames, one of which, the Frenet
frame, is completely local, but is indeterminable
where the curve is locally straight, and the other
of which, the parallel transport frame, is defined
everywhere but depends on a numerical integration
over the whole curve.
2.1 Frenet Frames
The Frenet frame (see, e.g., [9, 10]) is defined
as follows: If ~ x(t) is any thrice-differentiable space
curve, its tangent, binormal, and normal vectors
at a point on the curve are given by
$\vec{T}(t) = \frac{\vec{x}\,'(t)}{\|\vec{x}\,'(t)\|}, \qquad \vec{B}(t) = \frac{\vec{x}\,'(t) \times \vec{x}\,''(t)}{\|\vec{x}\,'(t) \times \vec{x}\,''(t)\|}, \qquad \vec{N}(t) = \vec{B}(t) \times \vec{T}(t). \qquad (1)$
We illustrate this standard frame configuration in Figure 1. When the second derivative vanishes
on some interval, the Frenet frame is temporarily
undefined, as illustrated in Fig. 2. Attempts to
work around this problem involve various heuristics
[24].
The Frenet frame obeys the following differential equation in the parameter t,
$\frac{d}{dt}\begin{bmatrix} \vec{T}(t) \\ \vec{N}(t) \\ \vec{B}(t) \end{bmatrix} = v(t)\begin{bmatrix} 0 & \kappa(t) & 0 \\ -\kappa(t) & 0 & \tau(t) \\ 0 & -\tau(t) & 0 \end{bmatrix}\begin{bmatrix} \vec{T}(t) \\ \vec{N}(t) \\ \vec{B}(t) \end{bmatrix}, \qquad (2)$
where $v(t) = \|\vec{x}\,'(t)\|$ is the scalar magnitude of
the curve derivative, $\kappa(t)$ is the scalar curvature,
and $\tau(t)$ is the torsion. These quantities can in
principle be calculated in terms of the parameterized
or numerical local values of $\vec{x}(t)$ and its first
three derivatives as follows:
$\kappa(t) = \frac{\|\vec{x}\,'(t) \times \vec{x}\,''(t)\|}{\|\vec{x}\,'(t)\|^{3}}, \qquad \tau(t) = \frac{\left(\vec{x}\,'(t) \times \vec{x}\,''(t)\right) \cdot \vec{x}\,'''(t)}{\|\vec{x}\,'(t) \times \vec{x}\,''(t)\|^{2}}. \qquad (3)$
If we are given a non-vanishing curvature and a
torsion as smooth functions of t, we can theoretically
integrate the system of equations to find
the unique numerical values of the corresponding
space curve ~ x(t) (up to a rigid motion).
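A sketch of this reconstruction in Python (forward-Euler stepping with per-step re-orthonormalization and an initial frame aligned with the coordinate axes are implementation choices; the sign conventions follow Eq. (2) as given above):

    import numpy as np

    def integrate_frenet(kappa, tau, v=1.0, dt=1e-3, steps=10000):
        # Rebuild a curve (up to rigid motion) from curvature kappa(t) and
        # torsion tau(t) by Euler-stepping the Frenet-Serret equations (2).
        x = np.zeros(3)
        T = np.array([1.0, 0.0, 0.0])
        N = np.array([0.0, 1.0, 0.0])
        B = np.array([0.0, 0.0, 1.0])
        pts = [x.copy()]
        for i in range(steps):
            t = i * dt
            k, ta = kappa(t), tau(t)
            dT = v * k * N
            dN = v * (-k * T + ta * B)
            dB = -v * ta * N
            T, N, B = T + dt * dT, N + dt * dN, B + dt * dB
            T /= np.linalg.norm(T)            # re-orthonormalize to limit drift
            N -= np.dot(N, T) * T
            N /= np.linalg.norm(N)
            B = np.cross(T, N)
            x = x + dt * v * T
            pts.append(x.copy())
        return np.array(pts)

    # constant curvature and torsion reconstruct a helix:
    helix = integrate_frenet(lambda t: 1.0, lambda t: 0.3)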
2.2 Parallel Transport Frames
Bishop [4] noted that, while the Frenet frame
has the advantage of consistent local computability
at all points on a curve except those with vanishing
second derivative, there is another natural
frame, the parallel transport frame, that is well-defined
everywhere; the distinguishing feature of
the parallel transport frame is that it is essentially
the solution to a differential equation, and thus
depends on the initial conditions and is subject
to numerical error for long curves. Operational
methods of defining such frames have been previously
noted (see, e.g., [5]) but the underlying
mathematical basis was not elaborated.
Geometrically, the parallel transport frame derives
its name from the fact that it corresponds
to the notion of moving a vector around a curved
manifold in such a way that it remains as parallel
to itself as possible. Its mathematical properties
derive from the observation that, while ~ T(t) for a
given curve model is unique, we may choose any
[Figure 1: The triad of orthogonal axes forming the Frenet frame for a curve with non-vanishing curvature.]
convenient arbitrary basis $(\vec{N}_1(t), \vec{N}_2(t))$ for the
remainder of the frame, so long as it is in the
plane perpendicular to $\vec{T}(t)$ at each point. If the
derivatives of $(\vec{N}_1(t), \vec{N}_2(t))$ depend only on $\vec{T}(t)$
and not each other, we can make $(\vec{N}_1(t), \vec{N}_2(t))$
vary smoothly throughout the path regardless of
the curvature. We may therefore choose the alternative
frame equations
$\frac{d}{dt}\begin{bmatrix} \vec{T}(t) \\ \vec{N}_1(t) \\ \vec{N}_2(t) \end{bmatrix} = v(t)\begin{bmatrix} 0 & k_1(t) & k_2(t) \\ -k_1(t) & 0 & 0 \\ -k_2(t) & 0 & 0 \end{bmatrix}\begin{bmatrix} \vec{T}(t) \\ \vec{N}_1(t) \\ \vec{N}_2(t) \end{bmatrix}, \qquad (4)$
illustrated in Fig. 3 for a curve with vanishing
curvature on a segment. One can show that [4]
$\kappa(t) = \left(k_1^2 + k_2^2\right)^{1/2}, \qquad \theta(t) = \arctan\frac{k_2}{k_1}, \qquad \tau(t) = \frac{d\theta}{dt}, \qquad (5)$
so that $k_1$ and $k_2$ effectively correspond to a
Cartesian coordinate system for the polar coordinates
$\kappa,\ \theta = \int \tau\, dt$. A fundamental
[Figure 2: The triad of orthogonal axes forming the Frenet frame for a curve with vanishing curvature on an interval; the frame is undefined on the interval.]
ambiguity in the parallel transport frame compared
to the Frenet frame thus arises from the
arbitrary choice of an integration constant for $\theta$,
which disappears from $\tau$ due to the differentiation.
A numerical method for computing the parallel
transport frame with the desired properties
is the following: Given a frame at $\vec{x}_{i-1}$ and
two neighboring tangents $\vec{T}_{i-1}$ and $\vec{T}_i$ and
their unit vectors $\hat{T}_{i-1}$ and $\hat{T}_i$, find the angle $\theta$
between them and the perpendicular
to the plane of the tangents given by
$\vec{V} = \hat{T}_{i-1} \times \hat{T}_i$;
finally, rotate the frame at $\vec{x}_{i-1}$
by $\theta$ about $\hat{V}$ to get the frame at point $\vec{x}_i$. Either
3D vector rotation or rotation by quaternion
multiplication can be used to effect the rotation.
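A minimal sketch of this propagation step in Python (Rodrigues' rotation formula is used here; the quaternion route mentioned above works equally well):

    import numpy as np

    def rotate_about(v, axis, angle):
        # Rotate vector v by `angle` about the unit vector `axis` (Rodrigues' formula).
        return (v * np.cos(angle)
                + np.cross(axis, v) * np.sin(angle)
                + axis * np.dot(axis, v) * (1.0 - np.cos(angle)))

    def parallel_transport_normals(tangents, normal0):
        # Propagate an initial normal along unit tangents T_0, T_1, ... by rotating
        # it about V = T_{i-1} x T_i through the angle between successive tangents;
        # straight stretches leave the normal unchanged.
        N = np.asarray(normal0, dtype=float)
        normals = [N]
        for Tprev, T in zip(tangents[:-1], tangents[1:]):
            V = np.cross(Tprev, T)
            s = np.linalg.norm(V)
            c = np.clip(np.dot(Tprev, T), -1.0, 1.0)
            if s > 1e-12:
                N = rotate_about(N, V / s, np.arctan2(s, c))
            normals.append(N)
        return normals          # the second normal is np.cross(T_i, normals[i])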
Just as for the Frenet frame, one can begin
with a curve ~ x(t) and an initial frame, or a pair
of functions and an initial frame, or
a frame over the entire curve, and then integrate
where needed to compute the missing variables.
It is also worthwhile noting that $(k_1(t), k_2(t))$
form a two-dimensional Cartesian vector field at
[Figure 3: The parallel-transport curve frame for the curve of Fig. 2 [4]. This frame, unlike the Frenet frame in Fig. 2, is continuous along the "roof peak" where the curvature vanishes.]
each point of the curve, and thus allow a natural
alternate characterization to Gray's $(\kappa, \tau)$ curve
properties [7, 10].
3 Theory of Quaternion Frames
It is awkward to represent moving frames visually
in high-density data because a frame consists
of three 3D vectors, or nine components, yet it has
only three independent degrees of freedom. Some
approaches to representing these degrees of freedom
in a three-dimensional space were suggested
by Alpern et al. [1]. We propose instead to systematically
exploit the representation of 3D orientation
frames in four-dimensions using equivalent
unit quaternions that correspond, in turn,
to points on the three-sphere (see, e.g., [25]). A
collection of oriented frames such as those of a
crystal lattice can thus be represented by mapping
their orientations to a point set in the 4D
quaternion space. The moving frame of a 3D
space curve can be transformed into a path in
quaternion space corresponding pointwise to the
3D space curve.
The quaternion representation of rotations re-expressing
a moving frame of a 3D space curve is
an elegant unit four-vector field over the curve;
the resulting quaternion frames can be displayed
as curves in their own right, or can be used in
combination with other methods to enrich the display
of each 3D curve, e.g., by assigning a coded
display color representing a quaternion component.
Properties. A quaternion frame is a unit-length
four-vector $q = (q_0, q_1, q_2, q_3)$ that
corresponds to exactly one 3D coordinate frame
and is characterized by the following properties:
ffl Unit Norm. If we define the inner product
of two quaternions as
$q \cdot p = q_0 p_0 + q_1 p_1 + q_2 p_2 + q_3 p_3,$
then the components of a unit quaternion
obey the constraint
$q \cdot q = (q_0)^2 + (q_1)^2 + (q_2)^2 + (q_3)^2 = 1,$
and therefore lie on $S^3$, the three-sphere,
which we will typically represent as embedded
in four-dimensional Euclidean space $R^4$.
ffl Multiplication rule. The quaternion product
of two quaternions q and p, which we
write as $q \star p$, takes the form
$[q \star p]_0 = q_0 p_0 - q_1 p_1 - q_2 p_2 - q_3 p_3$
$[q \star p]_1 = q_0 p_1 + q_1 p_0 + q_2 p_3 - q_3 p_2$
$[q \star p]_2 = q_0 p_2 + q_2 p_0 + q_3 p_1 - q_1 p_3$
$[q \star p]_3 = q_0 p_3 + q_3 p_0 + q_1 p_2 - q_2 p_1.$
This rule is isomorphic to multiplication in
the group SU(2), the double covering of the
ordinary 3D rotation group SO(3). If two
quaternions a and b are transformed by multiplying
them by the same quaternion q, their
inner product $a \cdot b$ transforms as
$(q \star a) \cdot (q \star b) = (q \cdot q)\,(a \cdot b),$
and so is invariant if q is a unit quaternion.
ffl Mapping to 3D rotations. Every possible
3D rotation R (a $3 \times 3$ orthogonal matrix)
can be constructed from either of two related
quaternions, $q$ and $-q$,
using the quadratic relationship
$R(q) = \begin{bmatrix} q_0^2+q_1^2-q_2^2-q_3^2 & 2(q_1 q_2 - q_0 q_3) & 2(q_1 q_3 + q_0 q_2) \\ 2(q_1 q_2 + q_0 q_3) & q_0^2-q_1^2+q_2^2-q_3^2 & 2(q_2 q_3 - q_0 q_1) \\ 2(q_1 q_3 - q_0 q_2) & 2(q_2 q_3 + q_0 q_1) & q_0^2-q_1^2-q_2^2+q_3^2 \end{bmatrix}, \qquad (6)$
where $R(+q) = R(-q)$.
ffl Rotation Correspondence. When we
substitute $q = \left(\cos\tfrac{\theta}{2},\ \hat{n}\,\sin\tfrac{\theta}{2}\right)$,
with $\hat{n}$ a unit 3-vector lying on
the 2-sphere $S^2$, $R(\theta, \hat{n})$ becomes the standard
matrix for a rotation by $\theta$ in the plane
perpendicular to $\hat{n}$; the quadratic form ensures
that the two distinct unit quaternions
$q$ and $-q$ in $S^3$ correspond to the same SO(3)
rotation.
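The multiplication rule and the quadratic map of Eq. (6) translate directly into code; a brief Python sketch (component order (q0, q1, q2, q3) as above):

    import numpy as np

    def quat_mult(q, p):
        # Quaternion product q * p.
        q0, q1, q2, q3 = q
        p0, p1, p2, p3 = p
        return np.array([q0*p0 - q1*p1 - q2*p2 - q3*p3,
                         q0*p1 + q1*p0 + q2*p3 - q3*p2,
                         q0*p2 + q2*p0 + q3*p1 - q1*p3,
                         q0*p3 + q3*p0 + q1*p2 - q2*p1])

    def quat_to_matrix(q):
        # 3x3 rotation matrix of Eq. (6); q and -q give the same matrix.
        q0, q1, q2, q3 = np.asarray(q, dtype=float) / np.linalg.norm(q)
        return np.array([
            [1 - 2*(q2*q2 + q3*q3), 2*(q1*q2 - q0*q3),     2*(q1*q3 + q0*q2)],
            [2*(q1*q2 + q0*q3),     1 - 2*(q1*q1 + q3*q3), 2*(q2*q3 - q0*q1)],
            [2*(q1*q3 - q0*q2),     2*(q2*q3 + q0*q1),     1 - 2*(q1*q1 + q2*q2)]])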
3.1 Quaternion Frenet Frames
All 3D coordinate frames can be expressed in
the form of quaternions using Eq. (6). If we
assume the columns of Eq. (6) are the vectors
$(\vec{T}(t), \vec{N}(t), \vec{B}(t))$, respectively, one can show from Eq. (2)
that $q'(t)$ takes the form (see [12])
$\frac{d}{dt}\begin{bmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{bmatrix} = \frac{v(t)}{2}\begin{bmatrix} 0 & -\tau & 0 & -\kappa \\ \tau & 0 & \kappa & 0 \\ 0 & -\kappa & 0 & \tau \\ \kappa & 0 & -\tau & 0 \end{bmatrix}\begin{bmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{bmatrix}, \qquad (7)$
equivalently $q'(t) = \tfrac{1}{2}\, v(t)\, q(t) \star (0, \tau, 0, \kappa)$.
This equation has the following key properties:
ffl The matrix on the right hand side is antisymmetric,
so that $q(t) \cdot q'(t) = 0$ by construction.
Thus all unit quaternions remain unit
quaternions as they evolve by this equation.
ffl The number of equations has been reduced
from nine coupled equations with six orthonormality
constraints to four coupled
equations incorporating a single constraint
that keeps the solution vector confined to the
3-sphere.
Differentiating the columns of Eq. (6), one can verify
that these equations explicitly reproduce Eq. (2),
where we have applied Eq. (7) to get the right-hand
terms.
Just as the Frenet equations may be integrated
to generate a unique moving frame with its space
curve for non-vanishing -(t), we may integrate
the much simpler quaternion equations (7).
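A sketch of such an integration in Python (forward Euler with renormalization onto the 3-sphere; the placement of kappa and tau among the four components follows the frame-axis ordering used in Eq. (7) above and should be adjusted if a different column ordering of Eq. (6) is chosen):

    import numpy as np

    def integrate_quaternion_frenet(kappa, tau, q_start, v=1.0, dt=1e-3, steps=10000):
        # Four coupled equations plus a single unit-length constraint, instead of
        # the nine equations with six orthonormality constraints of the 3x3 frame.
        q = np.array(q_start, dtype=float)
        out = [q.copy()]
        for i in range(steps):
            t = i * dt
            k, ta = kappa(t), tau(t)
            q0, q1, q2, q3 = q
            dq = 0.5 * v * np.array([-ta * q1 - k * q3,
                                      ta * q0 + k * q2,
                                     -k * q1 + ta * q3,
                                      k * q0 - ta * q2])
            q = q + dt * dq
            q /= np.linalg.norm(q)          # stay on the unit 3-sphere
            out.append(q.copy())
        return np.array(out)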
3.2 Quaternion Parallel Transport
Frames
Similarly, a parallel-transport frame system
given by Eq. (4) with $(\vec{T}(t), \vec{N}_1(t), \vec{N}_2(t))$ (in that
order) corresponding to the columns of Eq. (6) is
completely equivalent to the following parallel-transport
quaternion frame equation for $q'(t)$:
$\frac{d}{dt}\begin{bmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{bmatrix} = \frac{v(t)}{2}\begin{bmatrix} 0 & 0 & k_2 & -k_1 \\ 0 & 0 & k_1 & k_2 \\ -k_2 & -k_1 & 0 & 0 \\ k_1 & -k_2 & 0 & 0 \end{bmatrix}\begin{bmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{bmatrix},$
where antisymmetry again guarantees that the
quaternions remain constrained to the unit 3-sphere.
The correspondence to Eq. (4) is verified
in the same way as for the Frenet case.
4 Assigning Smooth Quaternion
Frames
Given a particular curve, we are next faced
with the task of assigning quaternion values to
whatever moving frame sequence we have chosen.
4.1 Assigning Quaternions to Frenet
Frames
The Frenet frame equations are pathological,
for example, when the curve is perfectly straight
for some distance or when the curvature vanishes
momentarily. Thus, real numerical data for space
curves will frequently exhibit behaviors that make
the assignment of a smooth Frenet frame difficult,
unstable, or impossible. In addition, since any
given 3 × 3 orthogonal matrix corresponds to two
quaternions that differ in sign, methods of deriving
a quaternion from a Frenet frame are intrinsically
ambiguous. Therefore, we prescribe the
following procedure for assigning smooth quaternion
Frenet frames to points on a space curve:
ffl Select a numerical approach to computing
the tangent ~
T at a given curve point ~ x; this
typically depends on the chosen curve model
and the number of points one wishes to sample
ffl Compute the remaining numerical derivatives
at a given point and use those to compute
the Frenet frame according to Eq. (1). If
any critical quantities vanish, tag the frame
as undefined (or as needing a heuristic fix).
ffl Check the dot product of the previous binormal
~ B(t) with the current value; if it is near
zero, choose a correction procedure to handle
this singular point. Among the correction
procedures we have considered are (1) simply
jump discontinuously to the next frame to indicate
the presence of a point with very small
curvature; (2) create an interpolating set of
points and perform a geodesic interpolation
[25]; or (3) deform the curve slightly before
and after the singular point to "ease in" with
a gradual rotation of the frame or apply an
interpolation heuristic (see, e.g., [24]). Creating
a jump in the frame assignment is our
default choice, since it does not introduce any
new information.
ffl Apply a suitable algorithm such as that of
Shoemake [25] to compute a candidate for
the quaternion corresponding to the Frenet
frame.
ffl If the 3 × 3 Frenet frame is smoothly changing,
make one last check on the 4D inner
product of the quaternion frame with its own
previous value: if there is a sign change,
choose the opposite sign to keep the quaternion
smoothly changing (this will have no effect
on the corresponding 3 × 3 Frenet frame).
If this inner product is near zero instead of
±1, you have detected a radical change in the
Frenet frame which should have been noticed
in the previous tests.
ffl If the space curves of the data are too
coarsely sampled to give the desired smoothness
in the quaternion frames, but are still
close enough to give consistent qualitative
behavior, one may choose to smooth out the
intervening frames using the desired level of
recursive slerping [23, 25] to get smoothly
splined intermediate quaternion frames.
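The last two steps of this procedure — extracting a quaternion from a 3x3 frame and enforcing sign continuity — can be sketched as follows in Python (the extraction is the usual Shoemake-style construction, and the frame matrices are assumed to hold the frame vectors as columns, as in Eq. (6)):

    import numpy as np

    def matrix_to_quat(R):
        # One of the two quaternions (q0, q1, q2, q3) of a rotation matrix R.
        t = np.trace(R)
        if t > 0:
            s = 0.5 / np.sqrt(t + 1.0)
            return np.array([0.25 / s,
                             (R[2, 1] - R[1, 2]) * s,
                             (R[0, 2] - R[2, 0]) * s,
                             (R[1, 0] - R[0, 1]) * s])
        i = int(np.argmax(np.diag(R)))
        j, k = (i + 1) % 3, (i + 2) % 3
        s = 2.0 * np.sqrt(1.0 + R[i, i] - R[j, j] - R[k, k])
        q = np.empty(4)
        q[0] = (R[k, j] - R[j, k]) / s
        q[i + 1] = 0.25 * s
        q[j + 1] = (R[j, i] + R[i, j]) / s
        q[k + 1] = (R[k, i] + R[i, k]) / s
        return q

    def consistent_quats(frames):
        # Flip signs so successive quaternions have non-negative 4D inner product;
        # an inner product near zero would signal the radical frame change noted above.
        quats = [matrix_to_quat(frames[0])]
        for R in frames[1:]:
            q = matrix_to_quat(R)
            if np.dot(q, quats[-1]) < 0.0:
                q = -q
            quats.append(q)
        return quats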
In Fig. 4, we plot an example of a torus knot,
a smooth space curve with everywhere nonzero
curvature, together with its associated Frenet
frames, its quaternion frame values, and the path
of its quaternion frame field projected from four-
space. Fig. 5 plots the same information, but
this time for a curve with a discontinuous frame
that flips too quickly at a zero-curvature point.
This space curve has two planar parts drawn as
though on separate pages of a partly-open book
and meeting smoothly on the "crack" between
pages. We see the obvious jump in the Frenet and
quaternion frame graphs at the meeting point; if
the two curves are joined by a long straight line,
the Frenet frame is ambiguous and is essentially
undefined in this segment. Rather than invent
an interpolation, we generally prefer to use the
parallel transport method described next.
4.2 Assigning Quaternions to Parallel
Transport Frames
In order to determine the quaternion frames of
an individual curve using the parallel transport
method, we follow a similar, but distinct, procedure
ffl Select a numerical approach to assigning a
tangent at a given curve point as usual.
ffl Assign an initial reference orientation to each
curve in the plane perpendicular to the initial
tangent direction. The entire set of frames
will be displaced from the origin in quaternion
space by the corresponding value of this
initial orientation matrix, but the shape of
the entire curve will be the same regardless of
the initial choice. This choice is intrinsically
ambiguous and application dependent. How-
ever, one appealing strategy is to base the
initial frame on the first well-defined Frenet
frame, and then proceed from there using the
parallel-transport frame evolution; this guarantees
that identical curves have the same
parallel-transport frames.
ffl Compute the angle between successive tan-
gents, and rotate the frame by this angle in
the plane of the two tangents to get the next
frame value.
ffl If the curve is straight, the algorithm automatically
makes no changes.
ffl Compute a candidate quaternion representation
for the frame, applying consistency conditions
as needed.
Note that the initial reference orientation and all
discrete rotations can be represented directly in
terms of quaternions, and thus quaternion multiplication
can be used directly to apply frame
rotations. Local consistency is then automatic.
An example is provided in Fig. 6, which shows
the parallel transport analog of Fig. 4 for a torus
knot. Fig. 7 is the parallel transport analog of
the pathological case in Fig. 5, but this time the
frame is continuous when the curvature vanishes.
Figure
4: (a) Projected image of a 3D (3,5) torus knot. (b) Selected Frenet frame components displayed
along the knot. (c) The corresponding smooth quaternion frame components. (d) The path of the
quaternion frame components in the three-sphere projected from four-space. Color scales indicate the
0-th component of the curve's four-vector frame (upper left graph in (c)).
5 Examples
We next present some typical examples of
streamline data represented using the basic geometric
properties we have described. Each data
set is rendered in the following alternative modes:
(1) as a 3D Euclidean space picture, pseudocol-
ored by curvature value; (2) as a 3D Euclidean
space picture, pseudocolored by torsion value; (3)
as a four-vector quaternion Frenet frame field
plotted in the three-sphere; (4) as a four-vector
quaternion parallel transport frame field plotted
in the three-sphere.
Fig. 8. A complicated set of streamlines derived
from twisting a solid elastic Euclidean
space as part of the process of tying a topological
knot.
Fig. 9. An AVS-generated streamline data
set; the flow is obstructed somewhere in the
center, causing sudden jumps of the streamlines
in certain regions.
While our focus in this paper is specifically on
the frames of space curves, we remark that collections
of frames of isolated points, frames on
stream surfaces [20], and volumetric frame fields
Figure
5: (a) Projected image of a pathological curve segment. (b) Selected Frenet frame components,
showing a sudden change of the normal. (c) The quaternion frame components, showing discontinuity
in values. (d) The discontinuous path of the quaternion frame components in the three-sphere. Color
scales indicate the 0-th component of the curve's four-vector frame (upper left graph in (c)).
could also be represented using a similar mapping
into quaternion space.
6 Visualization Methods
Once we have calculated the quaternion
frames, the curvature, and the torsion for a point
on the curve, we have a family of tensor and scalar
quantities that we may exploit to expose the intrinsic
properties of a single curve. Furthermore,
and probably of greater interest, we also have the
ability to make visual comparisons of the similarities
and differences among families of neighboring
space curves.
The moving frame field of a set of streamlines
is potentially a rich source of detailed information
about the data. However, the 9-component frame
is unsuitable for direct superposition on dense
data due to the high clutter resulting when its
three orthogonal 3-vectors are displayed; direct
use of the frame is only practical at very sparse
intervals, which prevents the viewer from grasping
important structural details and changes at a
glance. Displays based on 3D angular coordinates
are potentially useful, but lack metric uniformity
[1].
The 4-vector quaternion frame is potentially
a more informative and flexible basis for frame
visualizations; below, we discuss several alterna-
Figure 6: (a) Projected image of a 3D (3,5) torus knot. (b) Selected parallel-transport frame components
displayed along knot. (c) The corresponding smooth quaternion frame components. (d) The path of
the quaternion frame components in the three-sphere projected from four-space. Color scales indicate
the 0-th component of the curve's four-vector frame (upper left graph in (c)).
tive approaches to the exploitation of quaternion
frames for data consisting of families of smooth
curves.
6.1 Direct Three-Sphere Plot of Quaternion
Frame Fields
We now repeat the crucial observation: For
each 3D space curve, the moving quaternion
frames define completely new 4D space curves lying
on the unit three-sphere embedded in 4D Euclidean
space.
These curves can have entirely different geometry
from the original space curve, since distinct
points on the curve correspond to distinct ori-
entations. Families of space curves with exactly
the same shape will map to the same quaternion
curve, while curves that fall away from their
neighbors will stand out distinctly in the three-
sphere plot. Regions of vanishing curvature will
show up as discontinuous gaps in the otherwise
continuous quaternion Frenet frame field curves,
but will be well-behaved in the quaternion parallel
transport frame fields. Straight 3D lines will of
course map to single points in quaternion space,
which may require special attention in the display.
Figures 4d and 5d present elementary examples
of the three-sphere plot for the Frenet frame,
Figure
7: (a) Projected image of a pathological curve segment. (b) Selected parallel transport frame
components, showing smooth change of the normal. (c) The quaternion frame components, showing
continuity in values. (d) The continuous path of the quaternion frame components in the three-sphere.
Color scales indicate the 0-th component of the curve's four-vector frame (upper left graph in (c)).
while Figures 6d and 7d illustrate the parallel
transport frame. Figures 8c,d and 9c,d present
more realistic examples.
The quaternion frame curves displayed in these
plots are 2D projections of two overlaid 3D solid
balls corresponding to the "front" and "back"
hemispheres of S 3 . The 3-sphere is projected from
4D to 3D along the 0-th axis, so the "front" ball
has points with $0 \le q_0 \le +1$, and the "back"
ball has points with $-1 \le q_0 < 0$. The 0-th component $q_0$
of the frame at each point can be displayed
as shades of gray or pseudocolor. In the default
view projected along the q 0 -axis, points that are
projected from 4D to the 3D origin are in fact
identity frames, since unit length of q requires
$q_0 = \pm 1$ at these points. In Fig. 10, we
show a sequence of views of the same quaternion
curves from different 4D viewpoints using parallel
projection; Fig. 11 shows the additional contrast
in structure sizes resulting from a 4D perspective
projection.
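The two displays can be sketched as follows in Python (the 4D eye distance in the perspective case is an illustrative choice):

    import numpy as np

    def parallel_project(quats):
        # Drop q0: each unit quaternion lands in a solid ball of radius 1; the sign
        # of q0 says whether the point belongs to the "front" or "back" copy.
        q = np.asarray(quats, dtype=float)
        return q[:, 1:4], np.sign(q[:, 0])

    def perspective_project(quats, eye=2.0):
        # Simple 4D perspective projection along the q0 axis from an eye point at
        # (eye, 0, 0, 0); structures nearer the 4D eye appear larger.
        q = np.asarray(quats, dtype=float)
        return q[:, 1:4] / (eye - q[:, 0])[:, None]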
6.2 Scalar Geometric Fields
Gray [7, 10] has advocated the use of curvature
and torsion-based color mapping to emphasize
the geometric properties of single curves such
as the torus knot. Since this information is trivial
to obtain simultaneously with the Frenet frame,
we also offer the alternative of encoding the curvature
and torsion as scalar fields on a volumetric
space populated either sparsely or densely with
streamlines; examples are shown in Figures 8a,b
and 9a,b.
6.3 Similarity Measures for Quaternion
Frames
Quaternion frames carry with them a natural
geometry that may be exploited to compute
meaningful similarity measures. Rather than use
the Euclidean distance in four-dimensional Euclidean
space R 4 , one may use the magnitude of
the four-vector scalar product of unit quaternions
or the corresponding angle,
which is the angular difference between the two
4D unit vectors and a natural measure of great-
circle arc-length on S 3 . Choosing this as a distance
measure results in a quantity that is invariant
under 4D rotations, invariant under 3D rotations
represented by quaternion multiplication,
and is also insensitive to the sign ambiguity in
the quaternion representation for a given frame.
Thus it may be used as a quantitative measure of
the similarity of any two 3D frames. This is a natural
way to compare either successive frames on
a single streamline or pairs of frames on different
streamlines.
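Applied pointwise to the frames of two streamlines, the measure reads, in a short Python sketch:

    import numpy as np

    def frame_similarity(qa, qb):
        # Great-circle distance on the 3-sphere between corresponding unit quaternions
        # of two streamlines ((n, 4) arrays); insensitive to the q versus -q ambiguity.
        dots = np.abs(np.sum(np.asarray(qa) * np.asarray(qb), axis=1))
        return np.arccos(np.clip(dots, 0.0, 1.0))      # 0 means identical frames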
6.4 Probing Quaternion Frames with 4D
Light
We next explore techniques developed in other
contexts for dealing with 4D objects (see [14],
[15], and [16]). In our previous work on 4D geometry
and lighting, the critical element was the
observation that 4D light can be used as a probe
of geometric structure provided we can find a
way (such as thickening curves or surfaces until
they become true 3-manifolds) to define a unique
4D normal vector that has a well-defined scalar
product with the 4D light; when that objective
is achieved, we can interactively employ a moving
4D light and a generalization of the standard
illumination equations to produce images that selectively
expose new structural details.
Given a quaternion field, we may simply select
a 4D unit vector L to represent a "light direc-
tion" and employ a standard lighting model, such
as an intensity proportional to the 4D dot product $q(t) \cdot L$, to select individual components
of the quaternion fields for display using pseudo-color
coding for the intensity.
Fig. 12 shows a streamline data set rendered
by computing a pseudo-color index at each point
using the 4D lighting formula and varying the directions
of the 4-vector L.
6.5 True 4D Illumination
The quaternion curves in 4D may also be displayed
in an entirely different mode by thickening
them to form 3-manifolds using the method
of Hanson and Heng [15, 16] and replacing q(t) in
the 4D lighting formula and its specular analogs
by the 4D normal vector for each volume element
or vertex. The massive expense of volume rendering
the resulting solid tubes comprising the
4D projection to 3D can be avoided by extending
the "bear-hair" algorithm to 4D curves [3, 14, 21]
and rendering the tubes in the limit of vanishing
radius.
7 Interactive Interfaces
We next describe a variety of specific interactive
techniques that we have examined as tools
for exploring quaternion fields.
7.1 4D Light Orientation Control
Direct manipulation of 3D orientation using a
2D mouse is typically handled using a rolling ball
[11] or virtual sphere [6] method to give the user
a feeling of physical control. This philosophy extends
well to 4D orientation control (see [8, 13]),
giving a practical approach to interacting with
the visualization approaches of sections 6.4 and
6.5.
A 3D unit vector has only two degrees of free-
dom, and so is determined by picking a point
within a unit circle to determine the direction
uniquely up to the sign of its view-direction com-
ponent. The analogous control system for 4D
lighting is based on a similar observation: since
the 4D normal vector has only 3 independent degrees
of freedom, choosing an interior point in
a solid sphere determines the vector uniquely up
to the sign of its component in the unseen 4th
dimension (the "4D view-direction component").
Fig. 12 shows an example with a series of snap-shots
of this interactive interface at work. An
additional information display shows the components
of the 4D light vector at any particular moment.
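A sketch of the underlying control mapping in Python (placing the unseen component first, to match the (q0, q1, q2, q3) ordering, and toggling its sign through a separate flag are assumed interface details):

    import numpy as np

    def light_from_ball_point(p, positive_w=True):
        # Map a picked point p inside the unit solid ball to a 4D unit light vector:
        # the three picked coordinates fix three degrees of freedom, and the fourth
        # ("4D view-direction") component is determined only up to its sign.
        p = np.asarray(p, dtype=float)
        r2 = float(np.dot(p, p))
        if r2 > 1.0:                    # clamp picks outside the ball onto its surface
            p, r2 = p / np.sqrt(r2), 1.0
        w = np.sqrt(max(0.0, 1.0 - r2))
        return np.concatenate(([w if positive_w else -w], p))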
7.2 4D Viewing and Three-Sphere Projection
Control
Actually displaying quaternion field data
mapped to the 3-sphere requires us to choose
a particular projection from 4D to 3D and a
method for displaying the features of the stream-
lines. In order to expose all possible relevant
structures, the user interface must allow the
viewer to freely manipulate the 4D projection
parameters. This control is easily and inexpensively
provided using the 4D rolling ball interface
[8, 13]. A special version of our "MeshView" 4D
viewing utility [22] has been adapted to support
real-time interaction with quaternion frame structures.
Figures 10 and 11 show snapshots from this
interactive interface for 4D rotations using parallel
and polar 4D projections, respectively.
The simplest viewing strategy plots wide lines
that may be viewed in stereo or using motion par-
allax. A more expensive viewing strategy requires
projecting a line or solid from the 4D quaternion
space and reconstructing an ideal tube in real
time for each projected streamline. The parallel
transport techniques introduced in this paper are
in fact extremely relevant to this task, and may
be applied to the tubing problem as well (see, e.g.,
[5, 18]).
7.3 3D Rotations of Quaternion Displays
Using the 3D rolling ball interface, we can generate
quaternion representations of 3D rotations
of the form $q = \left(\cos\tfrac{\theta}{2},\ \hat{n}\,\sin\tfrac{\theta}{2}\right)$ and transform the
quaternion display by quaternion multiplication,
i.e., by changing each point $p$ to $q \star p$.
This effectively displaces the 3D identity frame in
quaternion space from $(1, 0, 0, 0)$ to q. This may
be useful when trying to compare curves whose
properties differ by a rigid 3D rotation (a common
occurrence in the parallel-transport frame
due to the arbitrariness of the initial condition).
Other refinements might include selecting and
rotating single streamlines in the quaternion field
display to make interactive comparisons with
other streamlines differing only by rigid rotations.
One might also use automated tools to select rotationally
similar structures based on minimizing
the 4D scalar product between quaternion field
points as a measure of similarity.
7.4 Exploiting or Ignoring Double Points
The unique feature of quaternion representations
of orientation frames is that they are dou-
bled. If we have a single curve, it does not matter
which of the two points in S 3 is chosen as a starting
point, since the others follow by continuously
integrating small transformations. A collection
of points with a uniform orientation as an initial
condition similarly will evolve in tandem and
normally requires only a single choice to see the
pattern.
However, it is possible for a frame to rotate a
full $2\pi$ radians back to its initial orientation, and
be on the opposite side of S 3 , or for a collection of
streamlines to have a wide range of starting orientations
that preclude a locally consistent method
for choosing a particular quaternion q over its
"neighbor" $-q$. We then have several alternatives:
ffl Include a reflected copy of every quaternion
field in the display. This doubles the data
density, but ensures that no two frame fields
that are similar will appear diametrically op-
posite; the metric properties of similar curves
will be easy to detect. In addition, 4D rotations
will do no damage to the continuity of
fields that are rotated to the outer surface
and pass from the northern to the southern
hyperhemisphere. If 4D depth is depicted by
a color code, for example, a point that rotates
up to the surface of the displayed solid
ball will smoothly pass to the surface and
then pass back towards the center while its
color changes from positive to negative depth
coding.
ffl Keep only one copy, effectively replacing q
by $-q$ if it is not in the default viewing hy-
perhemisphere. This has the effect that each
data point is unique, but that curve frames
very near diametrically opposite points on
the S 2 surface of the solid ball representing
the north hyperhemisphere will be close in
orientation but far away in the projection.
In addition, when 4D rotations are applied,
curves that reach the S 2 surface of the solid
ball will jump to the diametrically opposite
surface instead of passing smoothly "around"
the edge to the southern hyperhemisphere.
7.5 Reciprocal Similarities and Difference
One of the most interesting properties of the
quaternion frame method is the appearance of
clusters of similar frame fields in the 3-sphere dis-
play. Two reciprocal tools for exploring these
properties immediately suggest themselves. In
Fig. 13, we illustrate the effect of grabbing a cluster
of streamlines that are spatially close in 3D
space and then highlighting their counterparts
in the 4D quaternion field space, thus allowing
the separate study of their moving frame proper-
ties. This technique distinguishes curves that are
similar in 3D space but have drastically different
frame characteristics.
Fig. 14, in contrast, shows the result of selecting
a cluster of curves with similar frame-
field properties and then highlighting the original
streamlines back in the 3D space display. This
method assists in the location of similar curves
that could not be easily singled out in the original
densely populated spatial display. We are
examining a variety of alternative approaches to
the design of such tools.
8 Conclusion
In this paper, we have introduced a visualization
method for distinguishing characteristic features
of streamline-like volume data by assigning
to each streamline a quaternion frame field derived
from its moving Frenet or parallel-transport
frame; curvature and torsion scalar fields may be
incorporated as well. The quaternion frame is
a four-vector field that is a piecewise smoothly
varying map from each original space curve to a
new curve in the three-sphere embedded in four-dimensional
Euclidean space. This four-vector
field can be probed interactively using a variety
of techniques, including 4D lighting, 4D view con-
trol, and interaction with selected portions of the
data in tandem 3D streamline and 4D quaternion
field displays.
Acknowledgments
This work was supported in part by NSF grant
IRI-91-06389. We thank Brian Kaplan for his assistance
with the vector field data set. We are
indebted to Bruce Solomon for bringing reference
[4] to our attention, and to the referees for a number
of helpful suggestions.
--R
Orientation maps: Techniques for visualizing rotations (a consumer's guide).
Illumination in diverse codi- mensions
There is more than one way to frame a curve.
Calculation of reference frames along a space curve.
A study in interactive 3-d rotation using 2-d control devices
Mathematicians gather to play the numbers game.
Virtual reality performance for virtual geometry.
A Treatise on the Differential Geometry of Curves and Surfaces.
Modern Differential Geometry of Curves and Surfaces.
The rolling ball.
Quaternion Frenet frames.
Rotations for n-dimensional graphics
Interactive visualization methods for four di- mensions
Visualizing the fourth dimension using geometry and light.
Illuminating the fourth dimension.
Visualizing flow with quaternion frames.
Parallel transport approach to curve framing.
Visualizing quaternion rotation.
Constructing stream surfaces in steady 3d vector fields.
Rendering fur with three dimensional textures.
A portable 4D geometry viewer written in OpenGL/Motif
Using geometric constructions to interpolate orientation with quaternions.
Splines as embeddings for generalized cylinders.
"light"
--TR
--CTR
Andrew J. Hanson, Constrained optimal framings of curves and surfaces using quaternion Gauss maps, Proceedings of the conference on Visualization '98, p.375-382, October 18-23, 1998, Research Triangle Park, North Carolina, United States
Andrew J. Hanson, Visualizing quaternions, ACM SIGGRAPH 2005 Courses, July 31-August | frenet frame;quaternion;orientation frame |
614306 | Visualization of Geometric Algorithms. | Abstract: This paper investigates the visualization of geometric algorithms. We discuss how limiting the domain makes it possible to create a system that enables others to use it easily. Knowledge about the domain can be very helpful in building a system which automates large parts of the user's task. A system can be designed to isolate the user from any concern about how graphics is done. The application need only specify what happens and need not be concerned with how to make it happen on the screen. We develop a conceptual model and a framework for experimenting with it. We also present a system, GASP, which implements this model. GASP allows quick generation of three-dimensional geometric algorithm visualizations, even for highly complex algorithms. It also provides a visual debugging facility for geometric computing. We show the utility of GASP by presenting a variety of examples. | Introduction
The visualization of mathematical concepts goes
back to the early days of graphics hardware [21],
[2], and continues to the present [18], [16], [15],
[19]. These videos use graphics and motion to
explain geometric ideas in three dimensions and
higher. They have been widely accepted as the
necessary companions to the traditional medium
of journal publication [32], [33]. Similar gains in
exposition are found in the algorithm animation
work that has become popular in recent years [1],
[8], [5], [6], [27], [7], [24], [23], [22]. The limiting
force has been the difficulty of generating the
graphics for such animations.
The main principle guiding our work is that
algorithm designers want to visualize their algorithms
but are limited by current tools. In partic-
ular, visualizations would be less rare if the effort
to create them were smaller. In the past, visualizations
have been produced by developing sophisticated
software for a particular situation but there
Ayellet Tal is with the Department of Applied Mathematics
and Computer Science at the Weizmann Institute of Science,
Rehovot, Israel. E-mail: [email protected]. This
work was done at Princeton University.
David Dobkin is with the Department of Computer Science
at Princeton University. E-mail: [email protected].
has been little movement towards more widely usable
systems.
By limiting our domain we are able to create
such a system that enables others to use it easily.
We have chosen the domain of computational geometry
to build a system that greatly facilitates
the visualization of algorithms regardless of their
complexity. The visual nature of geometry makes
it one of the areas of computer science that can
benefit greatly from visualization. Even the simple
task of imagining in the mind a three-dimensional
geometric construction can be hard. In many cases
the dynamics of the algorithm must be understood
to grasp the algorithm, and even a simple animation
can assist the geometer.
We describe in this paper our system, GASP
(Geometric Animation System, Princeton). We
present the conceptual model that underlies the
development and implementation of our system,
and we demonstrate its utility in a series of snap-shots
taken from a videotape [30].
Three major objectives set GASP apart from
other animation systems (e.g., Balsa [8], Balsa-II
[5], [6], Tango [27], and Zeus [7]).
ffl GASP allows the very quick creation of three
dimensional algorithm visualizations. A typical
animation can be produced in a matter
of days or even hours. In particular, GASP
allows the fast prototyping of algorithm animations
ffl Even highly complex geometric algorithms can
be animated with ease. This is an important
point, because it is our view that complicated
algorithms are those that gain the most from
visualization. To create an animation, it is
sufficient to write a few dozen lines of code.
ffl Providing a visual debugging facility for geometric
computing is one of the major goals of
the GASP project. Geometric algorithms can
be very complex and hard to implement. Typical
geometric code is often heavily pointer-based
and thus standard debuggers are notoriously
inadequate for it. In addition, running
geometric code is plagued by problems of robustness
and degeneracies.
There are many ways in which the system can be
used. First, it can be used simply as an illustration
tool for geometric constructions. Second, stand-alone
videotapes to accompany talks and classes
can be created by GASP. Third, GASP can ease
the task of debugging. Fourth, GASP can significantly
enhance the study of algorithms by allowing
students to interact and experiment with the
animations. Fifth, GASP enables users to create
animations to attach to their documents.
Computational geometers describe configurations
of geometric objects either through ASCII
text as generated by symbolic computing tools
(e.g., Mathematica [34]) or through hand drawn
figures created with a graphics editor. Our system
offers an alternative to this by allowing the geometer
to feed ASCII data into a simple program and
get a three-dimensional dynamic (as well as static)
visualization of objects.
Often, the dynamics of the algorithm must be
understood. Animations can assist the geometer
and be a powerful adjunct to a technical paper.
With GASP, generating an animation requires no
knowledge of computer graphics. The interaction
with the system is tuned to the user's area of ex-
pertise, i.e., geometry.
Until recently, most researchers have been reluctant
to implement, let alone visualize, their algo-
rithms. In large part, this has been due to the
difficulty in using graphics systems added to the
difficulty of implementing geometric algorithms.
This combination made it a major effort to animate
even the simplest geometric algorithm. Our
system can ease some of the unique hardships of
coding and debugging geometric algorithms. The
inherent difficulty in checking a geometric object
(e.g., listing vertices, edges, and faces of a poly-
hedron) in a debugger can be eliminated once it
becomes possible to view the object. In practice,
a simple feature such as being able to visualize
a geometric object right before a bug causes the
program to crash is an invaluable debugging tool.
Visualization can have a great impact in educa-
tion. Watching and interacting with an algorithm
can enhance the understanding, give insight into
geometry, and explain the intuition behind the al-
gorithm. The environment in which the animation
runs is designed to be simple and effective. The
viewer is able to observe, interact, and experiment
with the animation.
An important consideration in the design of
GASP is the support of enclosures of animations
in online documents. GASP movies can be converted
into MPEG movies which can be included
in Mosaic documents. The reader of such a document
can click on the icon and see the animation.
For the viewer it takes no more work to view an
animation and the effect is better.
In the next section we describe the conceptual
model upon which GASP was built. In Sections
III and IV we present the specification of the sys-
tem. We focus on the ways the system meets the
needs of both the geometer and the viewer. In Section
V we describe, through examples, how our
system has been used in various scenarios. Section
VI discusses some implementation issues. We
summarize and mention open problems in Section
VII. This paper is an enhanced version of [31].
II. Conceptual Model
Previous algorithm animation systems (e.g., [5],
[6], [7], [27]) have dealt with the general case and
thus have attempted to solve many problems at
once. They have not made any assumptions about
the type of objects and the kind of operations that
make up the building blocks of the animations.
As a result, no knowledge could be used in the
creation of the animation. For example, suppose
a user wants to animate a sorting algorithm. First,
the user needs to decide how the elements should
look - rectangles, cubes or maybe cylinders, and
generate them. Then, the user has to design and
implement the animation of the operations that
make up the algorithm, in this case, the compare
and swap operations. There are many possible
ways to do it.
An animation system for a restricted domain
can be vastly superior to a general-case system.
Knowledge about the entities and the operations
in this domain can be very helpful in building an
animation system which produces animations significantly
more easily. Appropriate ways to visualize
the entities and to animate the operations
can be embedded in the system. Thus, large parts
of the user's task can be automated. In this case,
a system can be designed to isolate the user from
any concern about how graphics is done.
One of the major departures of our work from
previous work is the elimination of the animator.
We define a conceptual model which allows us to
do this. The main principle behind our model
is that programmers should be freed from having
to design and implement the visual aspects
of the animation, and can concentrate solely on
the contents of the animation. This is important
not only because the job of implementing an animation
is time-consuming, but also because it involves
graphics design, an area the user is not usually
familiar with.
The ability to automate the process of generating
animations is very useful for most users.
However, some might find it too restrictive and
would like to be able to change it. We therefore
define a hierarchy of users. While previous
systems identified two types of clients, end-users
and client-programmers, we identify three distinct
user types for any such system, end-users, naive-
programmers, and advanced-programmers.
1. As before, end-users want to experiment with
an algorithm to understand its functioning.
End-users should be able to run the application
(i.e., see the animation) as an interactive
experience. That is, it should be possible to
play the animation at slow or fast speed, to
run it backwards, to pause and alter the objects
being considered, and to run the animation
on an input of the user's choosing among
other things.
2. Naive application programmers want a system
which makes generic animation of algorithms
as easy as possible. The naive-
programmer is not concerned with the presentation
aspects of the animation and can
choose to be isolated from any decisions of
a graphical nature. Typically, the naive programmer
needs the animation for one of three
purposes. The animation aids in the debugging
process, it helps for exploring research
ideas, or it serves as a prototype animation
and will be refined later on.
3. Advanced programmers want, in addition, to
be able to easily modify and extend various visualization
aspects of the animation. A major
concern when generating animations automatically
is that the outcome might be different
from what is most useful to the user and might
not fit the programmer's taste. It is therefore
necessary to provide a way to modify the ani-
mation. The advanced programmer should be
able to change the style of the animation without
having to implement additional code.
To understand these levels, one can draw an
analogy with document processing systems. The
end-user need not know how the animation was
produced. By analogy, the reader of a document
does not care about how it was created. The second
user is the application writer. The application
writer is analogous to the text preparer who
typically uses the default style settings of a sys-
tem. The creator of the document is concerned
more with the text the paper includes and less
with the visual aspects such as the selection of
margins, spacings, and fonts. Similarly, the application
writer is concerned with the contents of
the animation rather than with its visual aspects.
Finally, there are times when the creator of the animation
does want to change the viewing aspects
(e.g., colors) of the animation. By analogy, there
are times when the writer of a document would
like to change fonts and margins. Systems such as
LaTeX offer this kind of flexibility. The user can
change many defaults by creating personal style
files of LaTeX. To do it, the user needs additional
knowledge. This is the third type of user, the advanced
programmer.
We define a similar interface for animating algo-
rithms. The interface we propose in response to
these needs consists of library calls for the naive-
programmer and external ASCII style files for the
advanced programmer. The idea of using style files
in not new in computer graphics (e.g., see [3]). Its
use in animation systems, however, is novel. The
naive-programmer writes short snippets of C code
to define the structure of the animation. The animation
system knows how to generate an appropriate
animation from this C code. The advanced
programmer can edit an ASCII style file to control
the visual aspects of the animation. The animation
is still generated automatically by the animation
system. But, a different animation will be
created, if the style file is modified. Editing the
style file allows experimentation with various animations
for a given algorithm.
Thus, any animation has four components:
ffl The algorithm animation system.
ffl The algorithm implementation.
ffl Hooks to the animation system within the algorithm
implementation.
ffl Style files.
The programmer need never be concerned with
the algorithm animation system. The algorithm
implementation is something that would have to
be done anyways. Creating the hooks is the main
task of the animator. The use of style files is optional
We believe that the model we suggest is general
enough and can be applied to other constrained
domains. Only the types of objects and supported
operations should be replaced. The structure of
the system and the user interfaces should not be
changed.
III. GASP's Language
GASP is an algorithm animation system for the
domain of computational geometry, which implements
the conceptual model presented in the previous
section.
Recall that there are two types of programmers:
naive-programmers and advanced-programmers.
Naive-programmer are concerned only with the
contents of the animation, whereas advanced-
programmers care also about the visual aspects of
the animation. Naive-programmers need to write
brief snippets of C code to define the structure of
the animation. The code includes only manipulations
of objects and modifications of data struc-
tures. This code contains calls to GASP's library.
Style files are used by advanced-programmers to
change from default aspects of the animation to
other options. In this section we introduce the
programmer interface that GASP provides.
A. Naive-Programmer Interface
GASP's library is a set of building blocks that
enable us to write animations with minimal ef-
fort. All we need to do is write short snippets of C
code, and GASP makes sure that they are powerful
enough to generate an animation. To do this,
we follow two principles: First, the programmer
does not need to have any knowledge of computer
graphics. Second, we distinguish between what is
being animated and how it is animated. The application
specifies what happens (The What) and
need not be concerned with how to make it happen
on the screen (The How). For example, the
creation of a polyhedron is different from the way
it is made to appear through the animation. It
can be created by fading into the scene, by traveling
into its location, etc. The code includes only
the what, but not any of the visualization issues,
such as the way each operation animates, the look
of the objects, or the colors.
The scenes of interest to us are built out of geometric
objects and displays of data structures.
Typical geometric objects are lines, points, poly-
gons, spheres, cylinders and polyhedra. Typical
data structures include lists and trees of various
forms. The operations applied to these objects
depend upon their types. A standard animation
in the domain of computational geometry is built
out of these building blocks.
The parameter data required by GASP is part
of the algorithm being animated. To make its use
easy, GASP requires only very simple data types:
integers, floats, chars, arrays of integers or floats,
and strings. We avoid using more complex data
structures (e.g., a more complex data structure
which represents a polyhedron) in order to keep
the start-up time minimal.
GASP's library contains four classes of operations
- operations on objects, atomic units, mo-
tion, and undo.
A.1 Operations on Objects
GASP's objects include three-dimensional geometric
objects, two dimensional geometric objects,
combinatorial objects, views, text, and titles. Objects
can be created, removed, and modified. They
can also be copied, grouped, and ungrouped.
We use the Create XXX function to create an
object of the type XXX. Each Create function has
different parameters, which are suitable to the object
being created. For example, to create a line
GASP expects the two end-points of the line as
parameters whereas to create a polyhedron GASP
needs the number of vertices of the polyhedron,
the number of faces, the specification of the ver-
tices, and that of the faces. Each Create function
has its own default way to animate. A polyhedron
fades into the scene, a point blinks in order to attract
attention, and a tree is created level after
level starting from the root.
We use Remove object to remove an object from
the scene. An object is removed in the reverse
fashion to the way it is being created. For ex-
ample, a polyhedron fades out from the scene, a
point blinks, and a tree is removed level after level,
starting from the leaves and working its way to the
root.
Each object is related to one or more modification
functions which are appropriate for this ob-
ject. For example, we can add faces to a polyhe-
dron, but naturally there is no equivalent operation
for atomic objects such as spheres.
Copy object creates an exact copy of the object.
This function is very useful when displaying an algorithm
in multiple views. We can create multiple
copies of the objects, and manipulate them in distinct
ways in the various views.
The Group function creates an object which contains
an ordered list of child objects. Grouping allows
us to isolate effects (e.g., motion) to a specific
set of objects. The reverse function, Ungroup, is
also available.
Three-dimensional geometric objects: Typical
objects embedded in three-space include spheres,
cylinders, cubes, cones, planes, lines, points,
line-sets, point-set, sweep-lines, polyhedra, and
meshes. A mesh represents a three-dimensional
shape formed by constructing faces from given vertices
Meshes and polyhedra are unique objects, being
non-atomic. Six special functions on meshes are
supported by GASP: Split mesh removes vertices
from a mesh, together with their related cones of
faces. This operation is animated by first creating
new meshes - a cone for every removed vertex.
The new cones travel away from the initial mesh,
creating black holes in it. Each cone travels in the
direction of the vector which is the difference between
the vertex (which created the cone) and the
center of the split mesh. Attach mesh to mesh attaches
a few meshes to each other. This operation
is visualized in the reversed way to the split oper-
ation. The meshes travel towards a chosen mesh,
until they meet. Bind mesh to mesh is similar to
the attach operation, with one difference. At the
end of the binding process, we get a single object.
Add faces adds new faces to a mesh and is dis-
played, by default, by fading in. Remove faces is
the opposite operation. Add vertices adds new
vertices to a given mesh.
Two-dimensional geometric objects: Typical objects
embedded in two-dimensions include cir-
cles, rectangles, elliptic arcs, lines, points, line-
sets, point-set, sweep-lines, polygons, and splined-
polygons.
Here again, GASP supports a few types of displays
for each object, one of them is the default.
For example, a polygon can be filled or not, two-dimensional
objects can be highlighted by using
related three-dimensional objects (e.g., cylinders
can highlight edges and spheres can highlight ver-
tices), etc.
Combinatorial objects: Combinatorial objects
include lists and trees of various types (binary or
not, red-black trees etc.
Unlike geometric objects, combinatorial objects
do not have an evident visual representation. A
tree can either be presented in two dimensions or
in three. GASP can layout a tree in both ways,
but will usually prefer the novel three-dimensional
style in which the nodes which belong to the same
level of the tree reside on a single cycle; the radius
of the cycles increase as the level of the tree
increases. The creation of a tree is visualized, by
default, by fading in the nodes, level after level,
starting at the root. Similarly, lists can be displayed
in two dimensions (e.g., using rectangles)
or in three dimensions (e.g., using cubes).
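A sketch of such a layout in Python (the linear radius growth and the flat placement of the circles are illustrative choices, not GASP's actual parameters):

    import math

    def layout_tree_3d(levels, base_radius=1.0):
        # levels: list of lists of node labels, one list per tree level (root first).
        # Nodes of level d are spread evenly on a circle of radius base_radius * d,
        # so the root sits at the origin and the circles grow with the depth.
        positions = {}
        for depth, nodes in enumerate(levels):
            r = base_radius * depth
            for k, node in enumerate(nodes):
                angle = 2.0 * math.pi * k / max(len(nodes), 1)
                positions[node] = (r * math.cos(angle), r * math.sin(angle), 0.0)
        return positions

    # e.g. a small binary tree:
    coords = layout_tree_3d([["a"], ["b", "c"], ["d", "e", "f", "g"]])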
In addition to the creation and deletion of trees
and lists, GASP supports the addition of nodes,
and the removal of nodes and subtrees.
Views: A view is more than a window used for
rendering. Built on top of Inventor's Examiner-
viewer [28], a view contains a camera and a light
model. It also contains buttons and thumbwheels
that allows use of the mouse to modify the camera
placement in the scene.
Text and titles: Text objects and title objects
define text strings to be rendered to the screen.
We can annotate our graphics with text. Titles
ease the creation of videotapes.
By default, we use two-dimensional text and
titles, though three-dimensional is supported as
well. Text appears on the screen as one unit, while
titles show up line by line. The default fonts and
font sizes vary.
A.2 Atomic Units
Every logical phase of an algorithm can involve
several operations, which should be animated con-
currently. We can isolate phases of the algorithm
by grouping primitives into logical phases,
called atomic units. We use the Begin atomic
- End atomic phrase to enclose the operations
which belong to the same logical phase, and GASP
executes their animation as a single unit.
For example, if adding a new face to a polyhe-
dron, creating a new plane, and rotating a third
object constitute one logical unit, these operations
are animated as one unit. GASP would concurrently
fade in the new faces of the polyhedron,
fade in the plane, and rotate the cylinder. The
code that generates this animation is:
Add-faces("Poly",
Create-plane("Plane", point1, point2,
Some properties of atomic units: Like any other
object in the system, atomic units are named. Using
names (rather than IDs) not only makes the
interaction between the programmer and the system
more natural, but also allows the end-user to
follow the unfolding of the algorithm by listing the
names of the algorithm's atomic units (assuming that
appropriate names have been used). Atomic units
can be nested. To nest atomic units within each
other, we use the Start late or the Finish early
functions. Start late declares when, within the
nesting atomic unit, GASP should start animating
the current unit. Finish early declares when,
within the nesting unit, GASP should terminate
the animation of the current atomic unit. Finally,
each atomic unit can be accompanied with text
and voice which elucidate the events happening
during this unit. Since an atomic unit represents
a logical phase of the algorithm, this is the appropriate
unit to attach explanations to.
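As an illustration of the nesting mechanism, the following sketch combines these calls; the exact argument conventions (unit names, the fractions passed to Start late and Finish early) are assumptions based on the description above, not GASP's documented signatures:

Begin-atomic("outer");
    Rotate-world(axis, angle);
    Begin-atomic("inner");
        Start-late(0.3);      /* assumed: start 30% into the nesting unit  */
        Finish-early(0.8);    /* assumed: finish at 80% of the nesting unit */
        Add-faces("Poly", face-no, faces);
    End-atomic();
End-atomic();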
A.3 Motion
Smooth motion is a major component of any an-
imation. Motion can be applied to either a single
object (or a set of objects, after grouping them),
or to the camera. When the camera moves, the
whole scene changes.
GASP supports five types of motion. We use
the Rotate obj or the Rotate world primitives
in order to rotate an object or the camera respectively
in terms of an axis and an angle. We
use Scale world or Scale obj to scale an object
in x, y, and z. We use Translate world or Translate obj to move an object in x, y, and z.
We use LinearPath world or LinearPath obj
to float an object on a linear path. We use
Path world or Path obj to float an object on a
B'ezier curve. For the last two operations, we need
only specify the positions through which the object
moves and GASP calculates the exact path
through which an object floats.
All the motion primitives are visualized
smoothly. For example, a rotation is done gradu-
ally, until the desired angle is achieved.
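For concreteness, a hypothetical fragment using two of these primitives follows; the argument lists are guesses for illustration only:

float axis[3]    = { 0.0f, 1.0f, 0.0f };
float keys[3][3] = { {0,0,0}, {1,2,0}, {3,1,1} };

Rotate-world(axis, 90.0);    /* spin the whole scene around the Y axis      */
Path-obj("Ball", 3, keys);   /* GASP computes the B'ezier path through keys */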
A.4 Undo
We use the undo operation to play the animation
backwards. The undo operation takes as a
parameter the number of atomic units to be re-
versed. GASP knows how to reverse visually each
primitive within an atomic unit.
B. Advanced-Programmer Interface
Each operation supported by GASP generates
a piece of animation which demonstrates the specific
operation in a suitable way. If a programmer
wants freedom to accommodate personal taste, the
parameters of the animation can be modified by
editing a "Style File". The animation is still generated
automatically by the system but a different
animation will be generated if the style file is mod-
ified. The style file affects the animation, not the
implementation.
A large number of parameters can be changed in
the style file. Those parameters can be set either
globally, for the whole animation, or for each
atomic unit separately. We describe some of these
parameters here.
B.1 Visualizing Primitives
Each primitive supported by GASP can be animated
in several ways, one of which is the default
that GASP chooses. However, a parameter can
be set in the style file to change from a default
visualization to an optional one.
For instance, objects can be created in various
ways: by fading in, by scaling up to their full size,
by traveling into the scene, by blinking, by growing
adding one feature after the other (e.g., a tree
grows level after level, a mesh grows by adding the
faces one at a time), or by appearing at once at
the scene. A reasonable subset of the above visualizations
is allowed for each of the objects. We
can choose our favorite option by editing one line
in the style file.
B.2 Visualizing Objects
Objects can be rendered in various fashions. For
example, numerous ways exist to present meshes.
A mesh can be flat, smooth, or wire-framed. The
edges of a mesh can be displayed or not. The same
is true for its vertices. A mesh can be opaque or
transparent to some degree. Different normals defined
for the faces of the mesh influence the colors
of the faces. We can modify all those parameters.
Special attention is given to the issue of col-
ors. "Color is the most sophisticated and complex
of the visible language components" [20]. GASP
chooses colors for the objects and for the features
it creates. GASP maintains palettes of pre-selected
colors, and picks colors which are appropriate
for the device they are presented on (i.e.,
screen or video). This is especially important for
inexperienced users.
Colors are assigned to objects (or other features
such as faces of a polyhedron) on the basis of
their creation time. That is, every logical phase of
the algorithm is associated with an unused color,
and the objects created during that phase get this
color. This scheme allows us to group related elements
and to make it clear to the observer how the
algorithm progresses from phase to phase. Those
colors can be changed in the style file.
B.3 Visualizing Motion
The parameters for the motion operations can
be also altered in the style file. We can change the
axis of the rotation and its angle, the amount of
translation or scale, the number of key-frames of
a path, etc.
B.4 Miscellaneous
There are many other important parameters for
any animation. For instance, we are able to specify
in the style file whether the animation is running
on the screen or on the video. This is so, because
colors look very different on both devices. If we
want colors that look good on a video, we must
use less saturated colors. GASP knows how to
generate an appropriate set of colors.
As another example, we can add one line to the
style file which tells GASP to stop after every
frame and execute a given script file. We found
this option to be very useful. We could record a
movie frame by frame by writing a suitable script
file. We could generate MPEG movies from GASP
movies, by providing yet another script file that
converted each frame.
Style File Example
The following is part of the style file for an animation
which will be discussed in a later section.
The style file determines the following aspects of
the animation. The background color is light gray.
The colors to be chosen by GASP are colors which
fit the creation of a video (rather than the screen).
Each atomic unit spans 30 frames, that is, the operations within an atomic unit are divided into 30 increments of change. If the scene needs to be
scaled, the objects will become 0.82 of their original
size. Rotation of the world is done 20 degrees
around the Y axis. The atomic unit pluck is executed
over 100 frames, instead of over 30. The
colors of the faces to be added in the atomic unit
add faces are green.
begin-global-style
end-global-style
begin-unit-style pluck
end-unit-style
begin-unit-style add-faces
end-unit-style
Note that the syntax of the style file is eminently
simple.
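For concreteness, a hypothetical rendering of the entries just described might look as follows; the keyword names are invented for illustration and are not GASP's actual syntax:

begin-global-style
    background-color    light-gray
    color-device        video
    frames-per-unit     30
    scale-factor        0.82
    rotation-axis       Y
    rotation-angle      20
end-global-style

begin-unit-style pluck
    frames              100
end-unit-style

begin-unit-style add-faces
    face-color          green
end-unit-style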
IV. GASP's Environment
The interactive environment is a primary part of
the GASP system. It allows researchers, program-
mers, and students to explore the behavior of their
geometric algorithms. It is designed to be simple
and effective, and to allow the viewer to observe,
interact, and experiment with the animation.
The GASP environment, illustrated in Fig. 1,
consists of a Control Panel through which the student
controls the execution of the animation, several
windows where the algorithm runs, called the
Algorithm Windows, along with a Text Window
which explains the algorithm.
Fig. 1. GASP's Environment
A. The Control Panel
The control panel, at the upper left of Fig. 1,
lets us explore the animation at our own pace. It
uses the VCR metaphor, to make the interaction
intuitive, familiar, and easy.
We might want to stop the animation at various
points of its execution. Sometimes we would
like to fast-forward through the easy parts and
single-step through the hard ones to facilitate our
understanding. We may want to "rewind" the algorithm
in order to observe the confusing parts
of the algorithm multiple times. We may need to
PAUSE at any time to suspend the execution of
the algorithm or to EJECT the movie. GASP's
environment allows us to do all these.
B. The Algorithm Window
We observe the algorithm in the algorithm windows
(at the bottom of Fig. 1). Algorithm
windows use Inventor's Examiner-Viewer [28] and
thus are decorated with thumbwheels and push
buttons.
Thumbwheels let us rotate and scale the scene.
We use the left thumbwheel for a screen X rota-
tion. We use the bottom thumbwheel for a screen
Y rotation. We use the right thumbwheel for dolly
(in and out of screen). We use the zoom slider on
the bottom to change the camera height (orthographic view) or the height angle (perspective view).
The push buttons at the right-hand side of the
algorithm window do the following operations. We
click the help button to display a help card for the
viewer. We push the home button to reset the
camera to a "home" position. We push the set
home button to set a new home position. We click
the view all button to reposition the camera so that
all objects become visible. The seek button makes
the camera animate to the center of the selected
object.
The left-hand side push buttons give us information
about the algorithm and the animation. The
ls button lists the objects currently appearing on
the screen. The obj button prints a description of
a chosen object. For example, when a polyhedron
is picked, its vertices and faces are printed out.
The lu button lists the atomic units. The xf button
prints the current transformation of either a
selected object or the global transformation. The
lpr button creates a snapshot file of the screen.
Using the lpr option, we can create pictures to annotate
our papers.
C. The Text Window
We can read about the algorithm and the animation
in the text window (at the upper right of Fig.
1). The text window lets the client-programmer
accompany the animation running on the screen
with verbal explanations. Text can elucidate the
events and direct the viewer's attention to specific
details. Every atomic unit is associated with
a piece of text which explains the events occurring
during this unit. When the current atomic
unit changes, the text in the window changes ac-
cordingly. Voice is also supported by GASP. The
viewer can listen to the explanations that appear
in the text window.
V. GASP in Action
In this section we describe different scenarios for
which we produced animations to accompany geometric
papers. Excerpts from the animations are
given in a videotape [30]. For each case we present
the problem of study, the goal in creating the animation
and the animation itself.
A. Building and Using Polyhedral Hierarchies
This algorithm, which is based on [11], [12],
builds an advanced data structure for a polyhedron
and uses it for intersecting a polyhedron and
a plane. The main component of the algorithm is
a preprocessing method for convex polyhedra in
3D which creates a linear-size data structure for
the polyhedron called its Hierarchical Representa-
tion. Using hierarchical representations, polyhedra
can be searched (i.e., tested for intersection
with planes) and merged (i.e., tested for pairwise
intersection) in logarithmic time. The basic geometric
primitive used in constructing the hierarchical
representation is called the Pluck: Given a
polyhedron,
, we build a polyhedron, P 1
, by removing
vertices in V (P 0
). The cones of
faces attached to the vertices are also removed.
This leaves holes in the polyhedron P 0
. These
holes are retriangulated in a convex fashion. Repetition
of plucking on the polyhedron P 1
creates a
new polyhedron, P 2
. The sequence
. P n
forms the hierarchical representation.
There were two goals for creating the animation
([13]). First, we wanted to create a video
that explains the data structure and the algorithm
for educational reasons. Second, since the algorithm
for detecting plane-polyhedral intersection
had not been implemented before, we wanted the
animation as an aid in debugging the implementation
The animation explains how the hierarchy is
constructed and then how it is used. For the first
of these we explain a single pluck and then show
how the hierarchy progresses from level to level.
First, we show a single pluck. The animation
begins by rotating the polyhedron to identify it to
the user (Fig. 2). Next we highlight a vertex and
lift its cone of faces by moving them away from
the polyhedron (Fig. 3). Then, we add the new
triangulation to the hole created (Fig. 4). Finally,
we remove the triangulation and reattach the cone,
to explain that plucking is reversible.
This is done in our system by the following piece
of C code, which is up to the creator of the animation
to write.
explain-pluck(int poly-vert-no,
              float (*poly-vertices)[3],
              int poly-face-no,
              long *poly-faces,
              char *poly-names[],
              int vert-no, int *vertices,
              int face-no, long *faces)
{
    /* create and rotate the polyhedron */
    Create-polyhedron("P0",
        poly-vert-no, poly-face-no,
        poly-vertices, poly-faces);
    Rotate-world(...);

    /* remove vertices and cones */
    Split-polyhedron("P0", poly-names,
        vert-no, vertices);

    /* add new faces */
    Add-faces("P0", face-no, faces);

    /* undo plucking */
    Undo(...);
}
Each of the operations described above is a single
GASP primitive. Create polyhedron fades in
the given polyhedron. Rotate world makes the
scene spin. Split polyhedron highlights the vertex
and splits the polyhedron as described above.
Add faces fades in the new faces. Undo removes
the triangulation and brings the cone back to the
polyhedron.
Fig. 2. The Polyhedron
Fig. 3. Removing the Cone of Faces
Fig. 4. Retriangulating the Polyhedron
Notice that the code does not include the graphics. Coloring, fading, traveling, speed, etc. are
not mentioned in the code. In the related style
file these operations are controlled. This allows
the user to experiment with the animation without
modifying and recompiling the code.
After explaining a single pluck, the next step is
to show the pluck of an independent set of vertices.
This is no more difficult than a single pluck and is
achieved by the following code.
char *atomic1-name,
char *atomic2-name,
char *atomic3-name,
char *poly-name,
int vert-no, int *vertices,
int face-no, long *faces,
char *new-polys-names[])
poly-name, vert-no, vertices);
Finish-early(0.5);
for
Here again we use the style file to choose speeds
at which cones move out, faces fade in, the scene
spins, etc. We also use the style file to choose a
next color that contrasts the new faces with those
that are preserved.
We found GASP to be very helpful in implementing
the algorithm for detecting plane-
polyhedron intersections. Bugs we were not aware
of showed up in the animation (e.g., we got non-convex
polyhedra as part of the hierarchical rep-
resentation). We also found GASP's environment
to be very useful. When debugging the algorithm,
it is necessary to watch earlier stages of the animation
(the construction process) which set state
variables that are needed by later stages. The control
panel of GASP allows us to fast-forward over
these initial fragments to get to the section of in-
terest. Single-stepping through the section under
consideration and rewinding are also highly valuable
tools.
B. Objects that Cannot be Taken Apart with Two
Hands
This animation is based on [26]. This paper
shows a configuration of six tetrahedra that cannot
be taken apart by translation with two hands (Fig.
5). Then, it presents a configuration of thirty
objects that cannot be taken apart by applying
an isometry to any proper subset (Fig. 6). The
ASCII data of the configurations was produced by
using Mathematica.
The purpose of the animation is to illustrate the
use of GASP as an illustration tool for geometric
configurations. It took us far less than a day
to generate that animation. The increased understanding
from a moving animation is significant.
The animation has two parts. Each one of them
shows one of the configurations described above.
Each part begins by fading each object which belongs
to the configuration, in turn, into the scene.
The colors of the objects vary. After all the objects
appear in the scene, the scene rotates so that
the configuration as a whole can be examined.
The animation is produced by the following code. In the code below, except
for get polyhedron, the other functions belong
to GASP. The function get polyhedron reads
the ASCII data for each object from a file.
Create polyhedron is responsible for fading in a
single object. Rotate world causes the scene to
spin.
hands(int object-no)
{
    float (*points)[3];
    long *indices;
    int nmax, fmax, i;
    char *atomic-name, *object-name;

    for (i = 0; i < object-no; i++) {
        /* read and fade in object i */
        get-polyhedron(&points, &indices, &nmax, &fmax, i);
        Create-polyhedron(object-name, nmax, fmax,
            points, indices);
    }
    Rotate-world(...);
}
Fig. 5. Objects that Cannot be Taken Apart with Two
Hands Using Translation
Fig. 6. Objects that Cannot be Taken Apart with Two
Hands Using Isometries
C. Line Segment Intersections
This example, which is based on [9], is a short
clip from an animation ([29]) which shows a line
segment intersection algorithm in action and illustrates
its most important features. The goal
is to use the animation as an aid in explaining a
highly complex algorithm. The viewer of the animation
can not only control the execution of the
animation but can also choose the input by editing
an ASCII file containing the initial line segments.
This example also illustrates the use of GASP in
creating two-dimensional animations. In a matter
of days we generated the animation.
The animation runs in three phases. The first
phase presents the initial line segments and the
visibility map that needs to be built (Fig. 7). The
second phase demonstrates that the visibility map
is being constructed by operating in a sweepline
fashion, scanning the segments from left to right,
and maintaining the visibility map of the region
swept along the way (Fig. 8). Finally, a third
pass through the algorithm is made, demonstrating
that the cross section along the sweepline is
maintained in a lazy fashion, meaning that the
nodes of the tree representing the cross section
might correspond to segments stranded past the
sweepline (Fig. 9).
In the first pass of the animation, red line segments
fade into the scene. While they fade out,
a green visibility map fades in on top of them,
to illustrate the correlation between the segments
and the map. Yellow points, representing the "in-
teresting" events of the algorithm, then blink. At
that point, the scene is cleared and the second pass
through the algorithm begins.
During the second pass the viewer can watch as
the sweep-line advances by rolling to its new position
(the gray line in Fig. 8). The animation also
demonstrates how the map is built - new subsegments
fade in in blue, and then change their color
to green to become a part of the already-built visibility
map.
The third pass adds more information about the
process of constructing the map by showing how
the red-black tree which is maintained by the
algorithm changes. The animation also presents
the "walks" on the map (marked in yellow in Fig.
9).
Fig. 7. The Visibility Map
Fig. 8. Building the Visibility Map
Fig. 9. Maintaining the Cross Section
There are only eleven GASP calls necessary for the creation of this animation, and they are:
Begin atomic, End atomic, Rotate world,
Scale world, Create point, Create line,
Create Sweepline, Modify Sweepline,
Create tree, Add node to tree,
Remove object.
D. Heapsort
Though GASP was originally meant to facilitate
animations that involve three-dimensional geometric
computation, we found that the interface
we provide actually facilitates the animation of
any algorithm that involves the display of three
dimensional geometry, among them many of the
algorithms in [25]. To show the added power of
the system, we chose to animate heapsort.
Heapsort is an efficient sorting algorithm that is
defined from the basic operations on heaps. The
idea is to build a heap containing the elements to
be sorted and then remove them all in order.
In the animation, each element is represented
as a cylinder whose height is proportional to its
value. The elements first appear in an array
and then it is demonstrated how the array can be
looked upon as a tree. From this point, the animation
shows two views of the heap - one as an array
and the other as a tree displayed in three dimensions
(Fig. 10). The next step of the animation
is to build a heap out of the tree in a bottom up
fashion (Fig. 11). Whenever two elements switch
positions, they switch in both views. After the
heap is built, the first and the last element switch
and the heap is rearranged. At the end, when
the array is sorted, the colors of the elements are
"sorted" as well (Fig. 12).
VI. Implementation
GASP is written in C and runs under UNIX on a
Silicon Graphics Iris. It is built on top of Inventor
[28] and Motif/Xt [14].
GASP consists of two processes which communicate
with each other through messages, as shown
in Fig. 13. Process 1 includes the collection of
procedures which make up the programmer interface. Process 2 is responsible for executing the
animation and handling the viewer's input.
The application's code initiates calls to procedures
which belong to Process 1. Process 1 prepares
one or more messages containing the type
Fig. 10. Two Views of the Array
Fig. 11. Building the Heap
Fig. 12. The Sorted Array
Fig. 13. GASP's Architecture
of the operation required and the relevant information
for that operation and sends it to Process
2. Upon receiving the message, Process 2 updates
its internal data structure or executes the anima-
tion, and sends an acknowledgement to Process 1.
The acknowledgement includes internal IDs of the
objects (if necessary). Process 1, which is waiting
for that message, updates the hash table of objects
and returns to the application's code.
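The hand-shake can be summarized by the following schematic sketch of the Process 1 side; the struct fields, helper functions and names are assumptions for illustration, not GASP's actual code:

/* Illustrative sketch only -- fields, names and IPC calls are assumed. */
typedef struct {
    int  op_type;                /* which GASP operation is requested */
    char name[64];               /* user-visible object name          */
} GaspMessage;

typedef struct {
    int object_id;               /* internal ID assigned by Process 2 */
} GaspAck;

/* Hypothetical IPC and bookkeeping helpers. */
extern void send_to_process2(const GaspMessage *msg);
extern void wait_for_ack(GaspAck *ack);
extern void hash_table_store(const char *name, int id);

/* Process 1 side of the hand-shake: send the request, block for the
   acknowledgement, record the ID, then return to the application. */
int request_operation(const GaspMessage *msg)
{
    GaspAck ack;
    send_to_process2(msg);
    wait_for_ack(&ack);
    hash_table_store(msg->name, ack.object_id);
    return ack.object_id;
}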
This hand-shaking approach has a few advan-
tages. First, it enables the user to visualize the
scene at the time when the calls to the system's
functions occur and thus facilitates debugging.
Since rendering is done within an event mainloop,
it is otherwise difficult to return to the application
after each call. Second, compilation becomes very
quick since the 'heavy' code is in the process the
application does not link to. Finally, the user's
code cannot corrupt GASP's code and vice versa.
This is an important point, because one of the major
goals of GASP is to ease debugging. During
debugging, it is always a problem to figure out
whose bug it is - the application's or the system's.
Process 2, which is responsible for the graphics,
works in an event mainloop. We use Inventor's
Timer-Sensor to update the graphics. This sensor
goes off at regular intervals. Every time it goes
off, Process 2 checks which direction the animation
is running. If it is running forwards, it checks
whether there is still work to do updating the animation
(if yes, it does it) or it is at the point when
further instructions from Process 1 are needed. In
the latter case, it checks to see whether there is
a message sent by Process 1. It keeps accepting
messages, updating its internal data structure,
and confirming the acceptance of messages until
it gets an END ATOMIC message. At that point,
Process 2 starts executing all the commands specified
for the atomic unit. It informs the first process
upon termination. If the animation is running
backwards, it updates the animation according to
the phase it is in.
VII. Conclusions
GASP has been built as an animation system for
computational geometry. Geometric algorithms
can be highly complex, hard to implement and
debug, and difficult to grasp. The visual nature
of geometry makes animations extremely helpful.
Researchers can use the system as an aid in exploring
new ideas; programmers can use it as a
debugging tool; students can enhance their understanding
of the studied algorithm and get some
intuition into the way it operates.
GASP is a demonstration of a concept. Picking
a small domain makes it possible to create
an animation system that enables others to use
it easily. In a well-defined domain, we can use
knowledge about the kinds of objects and operations
that need to be visualized. In this case,
it becomes practical to hide the graphics system
from the user and to automate the creation of the
animation. All the user needs to specify is the logical
operations that need to be visualized (i.e., the
what), but not how to do it (i.e., the how).
We also recognize that any algorithm animation
system has various types of users with differing
needs. The naive programmer would like to produce
a "quick-and-dirty" animation to check out
ideas or for debugging purposes. The naive programmer
need not have any knowledge of computer
graphics. The code includes only manipulations
of objects and modifications of data struc-
tures. The algorithm animation system makes
heuristic guesses for the way the animation should
appear. The advanced programmer would like to
have a say in the way the animation looks. The
advanced programmer experiments with the animation
by editing an ASCII style file, without ever
modifying or compiling the code. The end-user
would like to experiment with a finished anima-
tion. An algorithm animation system should serve
these varying levels of user-types by providing distinct
interfaces. GASP supports these levels.
Limiting the domain and providing multiple
suitable interfaces make it possible to create an
algorithm animation system that allows users to
quickly create animations. With GASP, a typical
animation can be generated in a very short time.
This is true even for highly complex geometric al-
gorithms. This is important because complex algorithms
are those that benefit the most from being
visualized.
We have shown several animations of geometric
algorithms. The system is now at the stage where
other people are starting to use it. In fact, three
[4], [10], [30] out of the eight segments of animations
which appeared in the Third Annual Video
Review of Computational Geometry were created
by GASP. Two of them were created by the geometers
who made movies describing their newly
discovered algorithms. They took less than a week
to produce. We consider it to be a very short time
for a first use of a system. The system is now
available for ftp.
In the future, GASP can be expanded to support
four-dimensional space. This can be an invaluable
tool for research and education. We would
like to experiment with GASP in an actual class-
room. We believe that animations can be used
as a central part of teaching computational geom-
etry, both for demonstrating algorithms, and for
accompanying programming assignments. Finally,
many intriguing possibilities exist in making an
electronic book out of GASP. A user will then be
able to sit on the network, capture an animation,
and experiment with the algorithm.
We believe that reducing the effort involved in
creating animations will increase their prolifera-
tion. We hope that GASP is a first step in the
creation of animation systems for constrained do-
mains. Visualization can apply to many focused
enough domains such as topology, databases, and
networks.
Acknowledgements
We would like to thank Bernard Chazelle for numerous
discussions and great advice.
This work was supported in part by the National
Science Foundation under Grant Number
CCR93-01254, by The Geometry Center, University
of Minnesota, an STC funded by NSF, DOE,
and Minnesota Technology, Inc., and by DIMACS,
an STC funded by NSF.
--R
Sorting out sorting (video).
Complex Function Graphs
Graphical style: Towards high quality illustration.
Almost optimal polyhedral separators (video).
Algorithm Animation.
Exploring algorithms using Balsa-II
Zeus: A system for algorithm animation and multi-view editing
Techniques for algorithm animation.
An optimal algorithm for intersecting line segments in the plane.
Computing the rectangle discrepancy (video).
Fast detection of polyhedral intersections.
Determining the separation of preprocessed polyhedra - a unified approach
OSF/Motif - Programmer's Reference
Discrete groups and visualization of three-dimensional manifolds
Not Knot (video).
LaTeX: A Document Preparation System. User's Guide and Reference Manual.
The sudanese mobius band (video).
Outside in (video).
Graphics Design for Electronic Documents and User Interfaces.
Turning a Sphere Inside Out (video).
A library for visualizing combinatorial structures.
A principles taxonomy of software visualization.
Robust Algorithms in a Program Library for Geometric Computation.
Objects that cannot be taken apart with two hands.
TANGO: A framework and system for algorithm animation.
An object-oriented 3D graphics toolkit
The New-Jersey line-segment saw massacre (video)
Computing Optimal Geometries.
Computational Crystal Growers Workshop.
--TR
--CTR
David P. Dobkin , Emden R. Gansner , E. Koutsofios , S. C. North, A path router for graph drawing, Proceedings of the fourteenth annual symposium on Computational geometry, p.415-416, June 07-10, 1998, Minneapolis, Minnesota, United States
Maria Shneerson , Ayellet Tal, GASP-IIa geometric algorithm animation system for an electronic classroom, Proceedings of the fourteenth annual symposium on Computational geometry, p.405-406, June 07-10, 1998, Minneapolis, Minnesota, United States
David P. Dobkin , Ayellet Tal, Small representation of line arrangements, Proceedings of the seventeenth annual symposium on Computational geometry, p.319-320, June 2001, Medford, Massachusetts, United States
Patricia Crossno , David Rogers, Visual Debugging, IEEE Computer Graphics and Applications, v.22 n.6, p.6-10, November 2002
Alejo Hausner , David P. Dobkin, GAWAIN: visualizing geometric algorithms with Web-based animation, Proceedings of the fourteenth annual symposium on Computational geometry, p.411-412, June 07-10, 1998, Minneapolis, Minnesota, United States
Maria Shneerson , Ayellet Tal, GASP-II: a geometric algorithm animation system for an electronic classroom, Proceedings of the thirteenth annual symposium on Computational geometry, p.379-381, June 04-06, 1997, Nice, France
Jrgen Dllner , Klaus Hinrichs , Hermann Spiegel, An interactive environment for visualizing and animating algorithms, Proceedings of the thirteenth annual symposium on Computational geometry, p.409-411, June 04-06, 1997, Nice, France
Maria Shneerson , Ayellet Tal, Visualization of geometric algorithms in an electronic classroom, Proceedings of the 8th conference on Visualization '97, p.455-ff., October 18-24, 1997, Phoenix, Arizona, United States
Patricia Crossno , Edward Angel, Visual debugging of visualization software: a case study for particle systems, Proceedings of the conference on Visualization '99: celebrating ten years, p.417-420, October 1999, San Francisco, California, United States
Yoram Moses , Zvi Polunsky , Ayellet Tal , Leonid Ulitsky, Algorithm Visualization For Distributed Environments, Proceedings of the 1998 IEEE Symposium on Information Visualization, p.71-78, October 19-20, 1998, North Carolina
Gill Barequet , Daniel Shapiro , Ayellet Tal, History consideration in reconstructing polyhedral surfaces from parallel slices, Proceedings of the 7th conference on Visualization '96, p.149-ff., October 28-29, 1996, San Francisco, California, United States
James E. Baker , Isabel F. Cruz , Giuseppe Liotta , Roberto Tamassia, Algorithm animation over the World Wide Web, Proceedings of the workshop on Advanced visual interfaces, May 27-29, 1996, Gubbio, Italy
David P. Dobkin , Ayellet Tal, Efficient and small representation of line arrangements with applications, Proceedings of the seventeenth annual symposium on Computational geometry, p.293-301, June 2001, Medford, Massachusetts, United States
Camil Demetrescu , Irene Finocchi , Giuseppe F. Italiano , Stefan Nher, Visualization in algorithm engineering: tools and techniques, Experimental algorithmics: from algorithm design to robust and efficient software, Springer-Verlag New York, Inc., New York, NY, 2002 | algorithm animation;three-dimensional geometric algorithms;computational geometry |
614325 | Volume-Preserving Free-Form Solids. | AbstractSome important trends in geometric modeling are the reliance on solid models rather than surface-based models and the enhancement of the expressive power of models, by using free-form objects in addition to the usual geometric primitives and by incorporating physical principles. An additional trend is the emphasis on interactive performance. In this paper we integrate all of these requirements in a single geometric primitive by endowing the tri-variate tensor product free-form solid with several important physical properties, including volume and internal deformation energy. Volume preservation is of benefit in several application areas of geometric modeling, including computer animation, industrial design and mechanical engineering. However, previous physics-based methods, which usually have used some forms of "energy," have neglected the issue of volume (or area) preservation. We present a novel method for modeling an object composed of several tensor-product solids while preserving the desired volume of each primitive and ensuring high-order continuity constraints between the primitives. The method utilizes the Uzawa algorithm for non-linear optimization, with objective functions based on deformation energy or least squares. We show how the algorithm can be used in an interactive environment by relaxing exactness requirements while the user interactively manipulates free-form solid primitives. On current workstations, the algorithm runs in real-time for tri-quadratic volumes and close to real-time for tri-cubic volumes. | Introduction
Modern geometric modeling emphasizes solid
models rather than surface-based models, usage
of free-form objects in addition to the usual geometric
primitives, incorporation of physical prin-
ciples, and interactive performance. In this paper
we integrate these four issues in a single setting
by endowing the tri-variate tensor product B'ezier
free-form solid with physical properties.
1.1 Background
The common approach to representing and manipulating
free-form objects is by using a boundary
representation (Brep), with parametric surfaces
for the boundary. Adjacencies between
neighboring surface patches are stored explicitly.
Using a Brep, it is inherently difficult to model
physical attributes associated with the object.
Such attributes are easier to consider when using
parametric free-form solids instead of surfaces.
The difference between the two is the dimension
of the parameter space (two for surfaces and three
for solids.)
Some previous systems have used free-form
solids (e.g. [Farouki85].) However, parametric volumes
are usually not used in the way that surfaces
are used, for direct object design, but rather for
design of separate deformation entities used for
modification of existing objects. This can be explained
by the fact that if only the boundary of
the object is of interest, there is no need to use
free-form solids, which enable control over what
happens 'inside' the object.
Free-form deformations (FFD) were introduced
in [Sederberg86] as a technique for defining a
smooth deformation on a space including the objects
embedded within that space, regardless of
their geometric representation. FFD utilizes a
tri-variate tensor-product parametric B'ezier solid
defined by a lattice of control points. The defining
parameter space is the unit cube. To deform
an object point, its local coordinates inside
the unit cube are computed. Then the image of
the point under the deformation is computed using
the B'ezier control points and basis functions.
Naturally, other basis functions (such as NURBS)
could be used as well [Griessmair89].
[Sederberg86] suggested a user interface based on control point manipulation, with which it is rather difficult and tedious to obtain a desired deformation. Direct manipulation of object points instead
of control point manipulation was suggested
in [Borrel91, Hsu92]. The user directly moves
an object point, and the system automatically
computes the control point configuration yielding
the desired point displacement constraints. [Rappoport94] extends this method to approximate
('probabilistic') point constraints with a non-isotropic
shape parameter. [Joy91] gave methods
to manipulate a group of control points in
a single operation. A more general type of extension
to FFD was presented in [Coquillart91], who
defined an arbitrary volume and used numerical
routines to compute local coordinates within this
volume. Neither of the above methods attaches
any physical meaning of the deformation. Simple
constrained deformations were described in [Borrel94].
Physics-based modeling is a successful research
area in geometric modeling. Several papers
[Terzopoulos94, Welch92, Kallay93, Moreton92,
Celniker91, Greiner93] presented surface design
schemes based on minimization of an energy functional
subject to linear point constraints such as
location and tangent vectors. We are not aware of
any work using similar ideas for free-form solids.
Other applications of physics-based modeling are
in reconstruction and tracking [Fang92], motion
control [Shapiro88], and modeling of flexible and
rigid objects [Barzel88].
The only relevant reference we are aware of for volume preservation is [Aumann92], which gives an algorithm that approximates a surface of revolution by a surface which is not a surface of revolution while trying to preserve the original volume. Free-form solids are not discussed, and it seems that the algorithm is not suited for them at all. (An earlier reference to an unpublished report about volume-preserving deformations exists, but such deformations cannot be everywhere locally satisfied with polynomial fields except for the simple case of pure shears.) Methods for computing the area or volume enclosed by curves and surface patches were given in [Elber94, Liu87].
1.2 Proposed Approach
We use free-form solids as design primitives. In
the context of solid model design in general and
specifically of free-form solids, one of the most basic
physical properties of a space cell is its volume
size. A major drawback of current user interaction
techniques when applied to free-form solid
design is that the user has no way of controlling
the contained volume size. Currently, solid design
(as opposed to using volumes for free-form
deformations) is not much more than design of
the surfaces bounding the volume, each of them
independently.
We present a novel method for modeling an
object composed of several tensor-product solids
while preserving the desired volume of each primitive
and ensuring high-order continuity constraints
(and any linear constraints in the control
points) between the primitives. The method
utilizes the Uzawa algorithm for non-linear opti-
mization, with an objective function based on deformation
energy or least squares (LSQ).
The algorithm is very useful for several appli-
cations. For example, hierarchical FFDs were
used by [Chadwick89] for computer animation of
muscles. A similar effect could be achieved by
a combination of point displacement constraints
and smooth modification of desired volume size.
The algorithm is useful in industrial design, where
basic functional requirements are automatically
obeyed without imposing limitation on the creativity
of the designer. When the object material
is known, volume preservation means weight
preservation, hence is attractive for mechanical
engineering applications when the engineer designs
a part or an assembly. The preservation of
volume of each element of the objects enables us
to keep required proportions between volumes and
weights of object parts. Obviously, simple scaling
of the object in order to achieve a desired volume
is not possible, due to the presence of point location
and continuity constraints.
Our algorithm uses B'ezier solids of arbitrary
orders as the underlying mathematical definition
of a free-form solid primitive. A B'ezier solid of
known orders is completely specified by its control
points. The input to our algorithm consists
of a desired object form (a set of primitives defined
by their control points configurations), desired
primitives volume sizes and a set of linear
constraints on the control points implied by continuity
requirements between the primitives or imposed
directly by the user. The control points configurations
can either be given directly by the user
through control point manipulation, or computed
from point displacement constraints specified by
direct solid manipulation as in [Borrel91, Hsu92].
The algorithm computes a control point configuration
closest to the given one (in a deformation
energy minimization or least square sense) such
that the deformed primitives contain volume of
the given sizes and obeying the linear constraints.
The algorithm does not automatically guarantee
that the boundaries do not self-intersect.
Note that it is the global volume of a given free-form
cell that is being preserved, not the volume
of an object embedded within the cell or of local
sub-cells. This approach was introduced in
the finite element method for rubber type ma-
terials, but here we avoid the complexity of the
penalty approach [Bercovier81] and use a duality
argument to deal with the constraint, based on
the Uzawa algorithm for non-linear programming
[Arrow58, Ciarlet88].
Special measures were taken in order to endow
the algorithm with real-time performance on current
workstations. We utilize the fact that the
volume size actually depends only on the boundary
surfaces of the deformed primitive, hence volume
size computation can be done with a subset
of the control points. The inside points are of no
interest to the user as well for the object's geome-
try, but are required for physical computations on
the object, such as tear strength or deformation
energy. The inside control points are computed
from the outside points using a 3-D variant of the
Coons surface formula when energy computation
is required. This does not prevent them in general
from crossing the parametric boundary, but intersection
is not caused for most modeled objects.
In an interactive setting, the algorithm relaxes
its accuracy requirements during object manipu-
lation, computing an accurate solution only when
real-time performance is no longer essential. This
technique gives the user a feeling that volume is
preserved during interaction.
Although in this work we limit the method description
to B'ezier solids, it can easily be adapted for most of the other common definitions of free-form solids, for example NURBS. The only restriction on the mathematical definition of the solid that we have is that it should be defined as a
linear combination of the control points.
The paper is organized as follows. Section 2
gives necessary mathematical notations. Section 3
formalizes the mathematical problem involved.
Section 4 explains in detail how to compute the
size of the volume enclosed by a tensor product
B'ezier solid and the partial derivatives of the volume
size function. Section 5 explains how to represent
continuity constraints. Section 6 explains
how to compute the energy required for a change
of a tensor product B'ezier solid from one control
point configuration to another, and the energy
derivative. Section 7 presents the numerical algorithm
used to solve the mathematical problem,
and Section 8 describes our implementation and
results.
2 Notations
We introduce here the formal mathematical notations
used during the rest of the work. A tensor
product B'ezier solid is defined using a set of control points P_{ijk} \in R^3. The image of a parametric point (u, v, w) in the unit cube is

F(u, v, w) = \sum_{i=0}^{n_u} \sum_{j=0}^{n_v} \sum_{k=0}^{n_w} B_i^{n_u}(u) B_j^{n_v}(v) B_k^{n_w}(w) P_{ijk},     (1)

where B_i^n(t) is the Bernstein polynomial defined by

B_i^n(t) = \binom{n}{i} t^i (1 - t)^{n-i}.

Denote the x, y and z coordinates of a control point by P^x_{ijk}, P^y_{ijk} and P^z_{ijk} respectively. Denote the volume of the solid primitive defined by a set P of control points by Volume(P), and denote by \partial Volume(P)/\partial P the vector whose components are the partial derivatives

\partial Volume(P)/\partial P^x_{abc},  \partial Volume(P)/\partial P^y_{abc},  \partial Volume(P)/\partial P^z_{abc}

for every triplet abc, 0 \le a \le n_u, 0 \le b \le n_v, 0 \le c \le n_w.

Denote the energy of a transformation from a B'ezier solid defined by a configuration Q of control points to one defined by a configuration P of control points by Energy(P - Q), and denote by \partial Energy(P - Q)/\partial P the vector whose components are the partial derivatives

\partial Energy(P - Q)/\partial P^x_{abc},  \partial Energy(P - Q)/\partial P^y_{abc},  \partial Energy(P - Q)/\partial P^z_{abc}.

Denote by \bar{P} the column vector of all the control points from all the B'ezier solids in the system.
3 Problem Statement
The general problem we handle is finding a control
point configuration that satisfies the constraints
(linear and volume) and which results in an object
as close as possible to the given one. The change
of an object can be represented in two ways. The
simpler is as the sum of squares of distances between
the original control point positions and the
new ones. The second is as the energy required
to get from the original object to the new one. In
this section we formalize this problem as a set of
mathematicalrequirements that the target control
points configuration should satisfy.
We denote by Dist(P, Q) the distance between two objects resulting from control point locations P and Q, which can stand for

Dist(P, Q) = Energy(P - Q)   when using an energy approach, or

Dist(P, Q) = \sum_{ijk} \| P_{ijk} - Q_{ijk} \|^2   when using a LSQ approach.
In case objects are modeled directly, the original
objects are usually close to the desired final ones,
in which case the the distance measure should be
LSQ since we want the resulting object control
points to be close to the original ones so that the
shape of the object will incur a minimal change.
With physics-based modeling, we use as the
original object the element in an initial state and
we deform it by applying linear constraints and
minimizing the energy. The resulting object then
simulates the behavior of an elastic material with
internal pressure. Initial control point configurations
and the specification of constraints can be
obtained by any method, including direct control
point manipulation and direct manipulation
of points and vectors inside or on the object.
The resulting constrained minimization problem (M) is: given a control point configuration Q = (Q_1, ..., Q_n) (each Q_i representing a single trivariate primitive), a set of corresponding volume sizes V_1, ..., V_n and a matrix C representing linear constraints on the control points, find a new control point configuration P = (P_1, ..., P_n) such that the following holds:

• P is the solution of \min_{P'} Dist(P', Q) subject to the two constraints below;

• for each i, Volume(P_i) = V_i;

• C \bar{P} = 0.
The desired volumes V i could be the initial volume
sizes or any other number. For example,
smooth variation of the desired volumes can be
used for dilating the object during animations.
4 The Volume Function
Our volume preservation algorithm requires the
computations of V olume(P ) and of @V olume(P )
@P .
Below we show how to analytically compute the
exact volume size of a tensor product B'ezier solid.
We show that the computation of the volume size
can be represented as a scalar product of two vec-
tors: one whose components are the multiplication
of the coordinates of the solid's control points, and
a second one whose components are based on the
B'ezier basis functions and therefore can be computed
off-line just once for each combination of
orders of basis functions.
4.1 Computing the Volume
The size of the volume specified by a three-dimensional function F(u, v, w) defined over the unit cube is

\int_0^1 \int_0^1 \int_0^1 J_F \, du \, dv \, dw,

where J_F is the determinant of the Jacobian matrix of F. In our case F is given by Equation 1. For example, the entry in the first row and column of the Jacobian matrix is

\partial F^x / \partial u = \sum_{ijk} \frac{d}{du} B_i^{n_u}(u) \, B_j^{n_v}(v) \, B_k^{n_w}(w) \, P^x_{ijk}.

The derivative of a Bernstein polynomial of order n can be expressed by the scaled difference of two Bernstein polynomials of order n - 1:

\frac{d}{du} B_i^n(u) = n \left( B_{i-1}^{n-1}(u) - B_i^{n-1}(u) \right),     (2)

with the convention that B_b^a(u) = 0 when b < 0 or b > a. Denote

\bar{u}_{ijk} = \frac{d}{du} B_i^{n_u}(u) \, B_j^{n_v}(v) \, B_k^{n_w}(w),

and similarly \bar{v}_{ijk} and \bar{w}_{ijk} for the derivatives with respect to v and w. The determinant J_F can be written as:

J_F = \det \begin{pmatrix}
\sum_{ijk} \bar{u}_{ijk} P^x_{ijk} & \sum_{ijk} \bar{v}_{ijk} P^x_{ijk} & \sum_{ijk} \bar{w}_{ijk} P^x_{ijk} \\
\sum_{ijk} \bar{u}_{ijk} P^y_{ijk} & \sum_{ijk} \bar{v}_{ijk} P^y_{ijk} & \sum_{ijk} \bar{w}_{ijk} P^y_{ijk} \\
\sum_{ijk} \bar{u}_{ijk} P^z_{ijk} & \sum_{ijk} \bar{v}_{ijk} P^z_{ijk} & \sum_{ijk} \bar{w}_{ijk} P^z_{ijk}
\end{pmatrix};     (3)

expanding, we obtain the usual six signed triple products of sums, each of the form

\left( \sum_{ijk} \bar{u}_{ijk} P^x_{ijk} \right) \left( \sum_{ijk} \bar{v}_{ijk} P^y_{ijk} \right) \left( \sum_{ijk} \bar{w}_{ijk} P^z_{ijk} \right).

Since the determinant is a multilinear operator and due to the structure of the summations, we can write:

J_F = \sum_{ijk} \sum_{lmn} \sum_{opq} \det \begin{pmatrix} \bar{u}_{ijk} & \bar{v}_{ijk} & \bar{w}_{ijk} \\ \bar{u}_{lmn} & \bar{v}_{lmn} & \bar{w}_{lmn} \\ \bar{u}_{opq} & \bar{v}_{opq} & \bar{w}_{opq} \end{pmatrix} P^x_{ijk} P^y_{lmn} P^z_{opq}.     (4)

Let I = ijklmnopq be a new index notation, in the range 0 \le i, l, o \le n_u, 0 \le j, m, p \le n_v, 0 \le k, n, q \le n_w. Denote the determinant in Equation 4 by det_I(u, v, w), and denote

c_I = \int_0^1 \int_0^1 \int_0^1 det_I(u, v, w) \, du \, dv \, dw.

Since the integral is a linear operator, the volume can be written as:

Volume(P) = \sum_I c_I \, P^x_{ijk} P^y_{lmn} P^z_{opq}.     (5)

Let p be a column vector indexed by I containing all terms of the form P^x_{ijk} P^y_{lmn} P^z_{opq}, and let c be a column vector of the same size whose components are the c_I's. Then Equation 5 can be expressed as the scalar product of p and c:

Volume(P) = p \cdot c.     (6)
The vector c depends only on the orders of the
B'ezier basis functions, hence can be computed
once and for all for every practical order combination
(the number of all practically useful order
combinations is small.) Computing the elements
of c via symbolic integration is very complicated
even for relatively small B'ezier orders, therefore
we compute them using Gauss numerical integration
[Press88], which gives an exact result since
the integrated functions are polynomials. A component c_I is computed as

c_I = \sum_r \sum_s \sum_t \omega_r \omega_s \omega_t \, det_I(x_r, x_s, x_t),

where the \omega_r are the Gauss weights corresponding to the sample points x_r in the unit interval. The number of sample points on each dimension
is determined according to the order of
the basis function in that dimension.
The description above was simplified for ease of
explanation. Actually, volume size depends only
on the boundary surfaces (Stokes' formula [Gibson44]). In B'ezier volumes the boundary surfaces are not influenced at all by the 'inner' control points, which can be completely neglected during the computation of the volume size. In fact, when computing the elements of the c vector we find that for any index ijklmnopq containing coordinates of an inside point the value c_{ijklmnopq} is
zero. In practice, then, to accelerate the volume
computation we let the index I run only on values
of ijklmnopq which define 'outer' control points.
4.2 Computing the Volume Derivative

The volume preservation algorithm requires the computation of the vector \partial Volume(P)/\partial P whose components are of the form \partial Volume(P)/\partial P^r_{abc}, where r is x, y or z. For example,

\partial Volume(P) / \partial P^x_{abc} = \frac{\partial}{\partial P^x_{abc}} \sum_I c_I \, P^x_{ijk} P^y_{lmn} P^z_{opq}.

Since for every ijk \ne abc the partial derivative vanishes, we get

\partial Volume(P) / \partial P^x_{abc} = \sum_{lmn} \sum_{opq} c_{abc\,lmn\,opq} \, P^y_{lmn} P^z_{opq}.
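As an implementation sketch, the volume and its partial derivatives can be evaluated directly from the precomputed nonzero coefficients c_I; the sparse data layout and function names below are assumptions, not the paper's actual code:

/* Sparse representation of the nonzero coefficients c_I, where
   I = (ijk, lmn, opq) indexes three boundary control points. */
typedef struct {
    int    ijk, lmn, opq;    /* flattened control-point indices */
    double c;                /* precomputed coefficient c_I     */
} VolumeTerm;

/* Volume(P) = sum_I c_I * Px[ijk] * Py[lmn] * Pz[opq]  (Equation 5). */
double volume(const VolumeTerm *t, int nterms,
              const double *Px, const double *Py, const double *Pz)
{
    double v = 0.0;
    int i;
    for (i = 0; i < nterms; i++)
        v += t[i].c * Px[t[i].ijk] * Py[t[i].lmn] * Pz[t[i].opq];
    return v;
}

/* dVolume/dPx[abc]: keep only the terms with ijk == abc and drop
   their Px factor. */
double dvolume_dPx(const VolumeTerm *t, int nterms, int abc,
                   const double *Py, const double *Pz)
{
    double d = 0.0;
    int i;
    for (i = 0; i < nterms; i++)
        if (t[i].ijk == abc)
            d += t[i].c * Py[t[i].lmn] * Pz[t[i].opq];
    return d;
}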
5 The Constraints
In this section we explain the different linear constraints
imposed on the control point configuration
required in order to achieve desired geometric
or physical results.
5.1 Continuity Constraints
Continuity constraints between primitives in an
object are essential for any object design. Continuity
of order k (C^k) between two adjacent volumes F and G, each defined on [0, 1]^3 and meeting along the u direction, is achieved when the k-th u-derivatives agree on the common face:

\partial^k F / \partial u^k (1, v, w) = \partial^k G / \partial u^k (0, v, w)   for every (v, w) \in [0, 1]^2.

In our case, for two adjacent primitives defined by control point configurations P and Q, and since the derivative is a linear operator, we get

\sum_{ijl} \frac{d^k B_i^{n_u}}{du^k}(1) \, B_j^{n_v}(v) B_l^{n_w}(w) \, P_{ijl} = \sum_{ijl} \frac{d^k B_i^{n_u}}{du^k}(0) \, B_j^{n_v}(v) B_l^{n_w}(w) \, Q_{ijl}

for every (v, w). For this to hold for each v and w, a necessary and sufficient condition is that for every j = 0, ..., n_v and l = 0, ..., n_w,

\sum_i \frac{d^k B_i^{n_u}}{du^k}(1) \, P_{ijl} = \sum_i \frac{d^k B_i^{n_u}}{du^k}(0) \, Q_{ijl},

thus getting a set of n_v n_w linear equations in the control points. The derivative of a Bernstein polynomial of order n was given in Equation 2; by induction, d^k B_i^{n_u}/du^k (0) \ne 0 only for i \le k, and d^k B_i^{n_u}/du^k (1) \ne 0 only for i \ge n_u - k, so only k + 1 of the i's contribute at each end. Therefore, C^k continuity conditions between adjacent B'ezier volumes are expressed as a set of n_v n_w linear equations on k + 1 layers of control points of each volume from the adjacent border.

For the most common cases the conditions are (assuming, for simplicity, that both volumes have the same order n_u in the u direction):

C^0:  P_{n_u, j, l} = Q_{0, j, l},

C^1:  n_u (P_{n_u, j, l} - P_{n_u - 1, j, l}) = n_u (Q_{1, j, l} - Q_{0, j, l}),

for every j and l.
Another kind of continuity constraint between
elements is geometric continuity, which is more
general than parametric continuity. Geometric
continuity yields non-linear constraints which are
difficult to express and solve, and therefore we do
not use them in this work. For more details on
geometric constraints see [Bercovier93].
5.2 Other Constraints
The following types of constraints can easily be
handled in addition to continuity constraints:
• Fixing a point at a given location, resulting in equations such as P^r_{ijk} = c, where c is a constant.
• Attaching two points together, resulting in equations such as P^r_{ijk} = Q^r_{lmn}.
• Preserving a given distance between points.
5.3 Summary

A general linear equation on the variables P_i is expressed as

\sum_i c_i P_i = c_0,

or in vector representation as c^T \bar{P} = c_0. If we denote by C the matrix whose rows are the coefficients c of the linear equations (with the constants moved to the left-hand side) and by \bar{P} the column vector of all the control points, the constraints are achieved when C \bar{P} = 0.
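As an illustration of how such a constraint matrix can be assembled, the following sketch fills one row of C for a single C^0 condition on one coordinate; the index-mapping functions are assumptions for illustration only:

/* Fill one row of the constraint matrix C for the C0 condition
   P[nu][j][l] - Q[0][j][l] = 0 on a single coordinate.  "row" has one
   column per entry of the global control-point vector; index_P and
   index_Q are assumed to map an (i, j, l) triple of each primitive to
   its column in that vector. */
void c0_constraint_row(double *row, int ncols,
                       int nu, int j, int l,
                       int (*index_P)(int, int, int),
                       int (*index_Q)(int, int, int))
{
    int k;
    for (k = 0; k < ncols; k++)
        row[k] = 0.0;
    row[index_P(nu, j, l)] =  1.0;   /* +P on the shared face (u = 1) */
    row[index_Q(0,  j, l)] = -1.0;   /* -Q on the shared face (u = 0) */
}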
6 The Energy Function
Energy computation for a deformation of a B'ezier
primitive from one control point configuration to
another is required by our algorithm. Here we
show that it can be computed using a matrix
whose elements depend only on the order of B'ezier
basis functions.
6.1 Computing the Energy
The energy of a deformation of a unit cube specified by a 3-D vector function F = (F_1, F_2, F_3) is usually described [Terzopoulos94] as an integral over the unit cube of a quadratic form in the first partial derivatives of F, of the type

E(F) = \int_0^1 \int_0^1 \int_0^1 \frac{1}{2} \sum_{i,j} \left( \beta \left( \frac{\partial F_i}{\partial u_j} \right)^2 + \alpha \, \frac{\partial F_i}{\partial u_j} \frac{\partial F_j}{\partial u_i} \right) du \, dv \, dw,     (7)

with \alpha and \beta being material property constants (here u_1, u_2, u_3 stand for u, v, w). We can therefore write E(F) as a weighted sum of integrals of two kinds: squared derivatives,

\int_0^1 \int_0^1 \int_0^1 \left( \frac{\partial F_i}{\partial u_j} \right)^2 du \, dv \, dw,

and products of mixed derivatives,

\int_0^1 \int_0^1 \int_0^1 \frac{\partial F_i}{\partial u_j} \frac{\partial F_j}{\partial u_i} \, du \, dv \, dw.
In our case the deformation is of a body defined by one B'ezier control point configuration to a body defined by another one. Hence the deformation is defined as a tri-variate B'ezier function with the differences between the control points of the two configurations serving as its control point lattice. Using P for these new 'control points', we can write

F(u, v, w) = \sum_{ijk} B_i^{n_u}(u) B_j^{n_v}(v) B_k^{n_w}(w) P_{ijk},

and consequently, using the notation of Section 4,

\partial F^x / \partial u = \sum_{ijk} \bar{u}_{ijk} P^x_{ijk}.

We have

\int_0^1 \int_0^1 \int_0^1 \left( \frac{\partial F^x}{\partial u} \right)^2 du \, dv \, dw = \sum_{ijk} \sum_{lmn} P^x_{ijk} P^x_{lmn} \left( \int_0^1 \int_0^1 \int_0^1 \bar{u}_{ijk} \bar{u}_{lmn} \, du \, dv \, dw \right).

Let Du be a matrix indexed by ijk and lmn, defined by

Du_{ijk, lmn} = \int_0^1 \int_0^1 \int_0^1 \bar{u}_{ijk} \bar{u}_{lmn} \, du \, dv \, dw,

and define Dv and Dw similarly. Let Px be a column vector with components P^x_{ijk}, and similarly define Py and Pz. Then we have

\int_0^1 \int_0^1 \int_0^1 \left( \frac{\partial F^x}{\partial u} \right)^2 du \, dv \, dw = Px^T \, Du \, Px.

Denote by Duv the matrix of the mixed derivatives given by

Duv_{ijk, lmn} = \int_0^1 \int_0^1 \int_0^1 \bar{u}_{ijk} \bar{v}_{lmn} \, du \, dv \, dw,

and define Duw and Dvw similarly (note that Dvu = Duv^T). We have, for example,

\int_0^1 \int_0^1 \int_0^1 \frac{\partial F^y}{\partial u} \frac{\partial F^x}{\partial v} \, du \, dv \, dw = Py^T \, Duv \, Px.

The elements of these matrices can also be computed numerically. Substituting the matrices into Equation 7, the energy becomes a quadratic form in the control points. Finally, let D be the matrix assembled from the blocks Du, Dv, Dw, Duv, Duw and Dvw with the weights \alpha and \beta, and let P be a column vector concatenating Px, Py and Pz. Then

Energy(P) = P^T D P.

All the elements of D depend only on the orders of the B'ezier basis functions, hence can be computed exactly once and for all for every practical order combination, using Gauss quadrature.
6.2 Computing the Energy Derivative

To minimize the deformation energy the algorithm requires the computation of the vector \partial Energy(P)/\partial P whose components are of the form \partial Energy(P)/\partial P^r_{abc}, where r is x, y or z. Since D is symmetric, it is easy to see that, for example,

\partial Energy(P) / \partial P^x_{abc} = 2 \sum_{r, lmn} D_{(x, abc), (r, lmn)} \, P^r_{lmn},

that is, twice the entry of D P corresponding to P^x_{abc}, since for every r ijk \ne x abc the partial derivative of P^r_{ijk} with respect to P^x_{abc} vanishes.
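Given a precomputed D, both the energy and its gradient reduce to a single pass over the matrix; a minimal C sketch, assuming dense row-major storage of D:

/* Energy(P) = P^T D P for the concatenated vector P = (Px, Py, Pz),
   and dEnergy/dP = 2 D P when D is symmetric.  D is stored densely,
   row-major, with dimension n x n. */
double energy(const double *D, const double *P, double *grad, int n)
{
    double e = 0.0;
    int i, j;
    for (i = 0; i < n; i++) {
        double row = 0.0;
        for (j = 0; j < n; j++)
            row += D[i * n + j] * P[j];
        grad[i] = 2.0 * row;     /* gradient component       */
        e += P[i] * row;         /* accumulate P^T D P       */
    }
    return e;
}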
7 The Uzawa-Based Volume
Preservation Algorithm
In this section we explain in detail the algorithm
we use for solving the problem as defined in Section
3.
7.1 Lagrangian Multiplier Method
To convert the constrained minimization problem min_P Dist(P, Q), subject to the constraints Volume(P_i) = V_i and C \bar{P} = 0, into an unconstrained min-max problem, we define a new functional L called the Lagrangian associated with the problem (M) by

L(P, \lambda, \gamma) = Dist(P, Q) + \sum_{i=1}^{n} \lambda_i \left( Volume(P_i) - V_i \right) + \gamma^T C \bar{P},

where \gamma is a vector whose size is the number of linear constraints. The vector \lambda = (\lambda_1, ..., \lambda_n) is called the Lagrange multipliers vector, \lambda_i is called the Lagrange multiplier for the constraint Volume(P_i) = V_i, and \gamma_j is called the Lagrange multiplier for the constraint C_j \bar{P} = 0 (C_j stands for row j of C).

As explained in [Ciarlet88], the constrained minimization problem (M) can be reformulated as finding a solution to the unconstrained min-max problem (S) defined by

(S):   \max_{\lambda, \gamma} \min_{P} L(P, \lambda, \gamma).

A necessary condition for a triplet (P, \lambda, \gamma) to be a solution of (S) is the vanishing of the partial derivatives

\partial L / \partial P = 0,   \partial L / \partial \lambda = 0,   \partial L / \partial \gamma = 0,

which means that for each i = 1, ..., n,

\partial Dist(P, Q) / \partial P_i + \lambda_i \, \partial Volume(P_i) / \partial P_i + (C_{j_i})^T \gamma = 0,     (8)

Volume(P_i) = V_i,   and   C \bar{P} = 0,     (9)

(C_{j_i} denotes the columns of C that multiply the points of P_i in \bar{P}).
7.2 Solution Method
The volume derivative expression is non-linear,
hence the usual direct methods (such as LDL T
and Gauss elimination) cannot be used to solve
(S). We use a version of the Uzawa method tailored
to our problem [Ciarlet88]. Uzawa's method
is an iterative method allowing one to solve an
inequality constrained minimization problem by
replacing it with a sequence of unconstrained minimization
problems. Since we do not have inequality
constraints we can use a simpler version.
Given the problem (M), the iteration starts with arbitrary values for \lambda^0 \in R^n and \gamma^0 (we start with 0 for both), and with an initial value P^0, for which we use Q. These initial guesses are especially suitable in an interactive setting, where it is expected that Q will not change much after the constraints are satisfied. A sequence of triplets (P^k, \lambda^k, \gamma^k) is defined by means of the following iterations: P^{k+1} is the solution of Equation 8 with \lambda = \lambda^k and \gamma = \gamma^k, and then

\lambda_i^{k+1} = \lambda_i^k + \rho_1 \left( Volume(P_i^{k+1}) - V_i \right),     (10)

\gamma^{k+1} = \gamma^k + \rho_2 \, C \bar{P}^{k+1}.     (11)
The algorithm runs until the constraints are satisfied
or the number of iterations exceeds a given
limit.
Pseudo-code for the algorithm is shown in Figure 1. The initial values for \lambda and \gamma are set in lines 1 and 2. Line 3 computes the current volumes v_i and line 4 initializes the loop counter k. The main ('outer') loop of the algorithm is performed in lines 5-10. The loop iterates while the constraints are not satisfied, stopping after the limit on the number of iterations has been reached. In each iteration the system in Equation 8 is solved (line 6) and the current values of \lambda and \gamma are updated using the tuning parameters \rho_1 and \rho_2 respectively (lines 7-8). Line 11 returns P as the answer.
The choice of the tuning parameters \rho_1 and \rho_2 as used in Equation 10 and Equation 11 is the most difficult practical issue when using Uzawa's method. Each type of problem has its own best range of values for \rho_i. In our case we found that the best values differ between an energy distance function and a least squares distance function. In general, the larger the \rho's, the faster the convergence becomes, but the risk of non-convergence due to overstepping the convergence point increases.
Pseudo-code for one way of solving the inner
problem is shown in Figure 2. The inner problem
is solving Equation 8 for P with the given λ and γ. It is a non-linear problem; Figure 2 shows how to solve it using successive approximation on P. We iteratively compute new values for P based on Equation 8 until the distance between two successive iterations is small enough (less than a tolerance ε_dist). There are several other
possible techniques for solving a set of non-linear
equations which can be used here as well.
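The successive-approximation loop of Figure 2 has the familiar fixed-point structure; the toy program below illustrates it on a single scalar equation (x = cos x standing in for Equation 8), stopping when two successive iterates differ by less than a tolerance:

/* Minimal illustration of successive approximation: re-evaluate the
 * fixed-point form of the equation until two successive iterates are
 * closer than eps_dist.  Not the system's code.                      */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double eps_dist = 1e-8;
    double x = 0.0, x_new;
    int k = 0;

    do {
        x_new = cos(x);              /* one successive-approximation step */
        if (fabs(x_new - x) < eps_dist)
            break;
        x = x_new;
        k++;
    } while (k < 1000);

    printf("fixed point after %d iterations: x = %.10f\n", k, x_new);
    return 0;
}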
Usually when solving physics-based problems
by Lagrange multipliers methods the additional
variables added as multipliers have physical mean-
ing. In our case one can interpret λ as an inner hydrostatic pressure to keep the volume at a given value. We are looking for the value of that
pressure: the Uzawa outer step can be seen as
augmenting or diminishing the hydrostatic pressure
until convergence. This tuning is done with the parameter ρ1. This observation relates our
method to so-called mixed finite element methods
for the Stokes problem [Hughes87]. In our case
we have constant pressure for each small volume
element.
The algorithm was implemented in C under Unix
using SGI/GL for graphics and Motif for the user
interface. The interface lets the user work with a
number of B'ezier primitives, the order of each selectable
by the user. In the initial state the primitives
are displayed as unit cubes (cubes whose
volume is 1.) Control points on each primitive
can be selected and manipulated in 3-D. We did
not implement direct manipulation of boundary
surface points since it is immaterial to the problem
being tackled. The primitives as whole can
be selected as well and manipulated.
Constraints are inserted via a Motif-based user
interface where the type of the constraint is set
and then through direct point manipulation the
points or surfaces involved are chosen.
There are two methods for object design. In the
first method, volume preservation can be turned
off during interaction and performed only when
arriving at a desired configuration. In the second
method, it can be turned on during the whole interaction
process. The first option is necessary
since for high orders the performance is not fully
interactive.
Due to the complexity of computations in the
interactive stage we cannot satisfy volume and linear
constraints simultaneously, so the user has to
choose which one is preferred.
There are three sets of parameters to the algo-
rithm: parameters that influence volume preservation
during interaction while the user drags the
mouse, parameters that are for solving volume
constraint when leaving the mouse, and parameters
for global computation when solving all the
constraints. Typically, for the interaction mode
the iteration limits are lower and the convergence
tolerances are larger than for the final mode, for
the global computation the tolerances usually do
not increase but the iteration limits are larger and the ρ_i used are smaller.
Different sets of parameters do not cause divergence
of the algorithm, since during interaction
the current configuration is very close to a solution
satisfying the volume constraint, and the
algorithm needs fewer iterations to reach a solu-
tion. The parameter sets can be tuned using a
dialog box.
The user can manipulate a scale widget that
defines the desired volume for a chosen primitive.
The volume preservation algorithm is performed
repeatedly while the scale is dragged.
Tri-quadratic free-form volume design is fully
interactive. For a typical movement of a single
control point, to reach a final volume tolerance
of 10^-4 and a final distance tolerance of 10^-3 requires about 15 outer iterations, each of them with 1-2 inner iterations. This takes about 3 seconds on Silicon Graphics workstations with a MIPS R-4000 processor. During interaction it is enough to set both tolerances to 10^-2, in which case the
solution is completed in real-time.
For a tri-cubic free-form volume, to reach the
same tolerances requires about 25-30 outer itera-
tions, each of them with one inner iteration. This
takes about 15 seconds. When both tolerances
are set to 10^-2 during interaction, the solver takes
about 3 seconds, hence tri-cubic interaction could
be done in real-time using a faster processor.
The running times above are of course dependent on the number of linear constraints and on how far the current configuration is from their solution. Two example objects were designed using the system: the amphora is modeled from a single primitive, and the phone was modeled from three tri-cubic primitives with continuity conditions between them. Its parts were designed by volume modifications to create the right proportions between them while keeping the desired shape constraints and continuity.
9 Conclusion
We presented an approach for modeling with free-form
solid primitives while preserving the volume
contained within each primitive and satisfying
continuity constraints between the primitives.
Careful tuning allows our Uzawa-based non-linear
optimization algorithm to be fully interactive for
tri-quadratic volume elements and almost interactive
for tri-cubic elements. The algorithm possesses
several possible applications in computer
animation, industrial design and mechanical en-
gineering, broadening the scope of physics-based
geometric modeling.
Acknowledgments
Daniel Youlus participated in an early part of this
work. I thank Naftali Tishby for a fruitful comment
and the reviewers for their detailed comments.
--R
Studies in Linear and Nonlinear Programming
Two algorithms for volume-preserving approximations of surfaces of revolution
A modeling system based on dynamic constraints
A finite element procedure for non-linear incompressible elasticity
Minimization, constraints and composite B'ezier curves.
Deformation of n-dimensional objects
Simple constrained deformations for geometric modeling and interactive design.
Deformable curve and surface finite elements
Layered construction for deformable animated characters
Introduction to Numerical Linear Algebra and Optimization
sculpturing tool for 3D geometric modeling
Symbolic and numeric computation in curve interrogation.
Reconstruction for smooth parametric surfaces from unorganized data points
Curves and Surfaces for Computer Aided Geometric Design
A hierarchy of geometric forms
Curvature continuous blend surfaces.
Deformation of solids with tri-variate B-splines
Direct manipulation of free-form deformations
The Finite Element Method
Utilizing parametric hyperpatch methods for modeling and display of free form solids
Constrained optimization in surface design
Algorithms for computing area and Volume
Functional optimization for fair surface design
Numerical Recipes in C
Interactive design of smooth objects using probabilistic point constraints.
Motion interpolation by optimal control
Dynamic NURBS with geometric constraints for interactive sculpting
Computer Graphics
| continuity constraints;volume preservation;uzawa's algorithm;free-form deformations FFD;physics-based modeling;energy constraints;free-form solids
614330 | A Near Optimal Isosurface Extraction Algorithm Using the Span Space. | We present the "Near Optimal IsoSurface Extraction" (NOISE) algorithm for rapidly extracting isosurfaces from structured and unstructured grids. Using the span space, a new representation of the underlying domain, we develop an isosurface extraction algorithm with a worst case complexity of $O\left(\sqrt{n} + k\right)$ for the search phase, where n is the size of the data set and k is the number of cells intersected by the isosurface. The memory requirement is kept at O(n) while the preprocessing step is O(n log n). We utilize the span space representation as a tool for comparing isosurface extraction methods on structured and unstructured grids. We also present a fast triangulation scheme for generating and displaying unstructured tetrahedral grids. | Introduction
Isosurface extraction is a powerful tool for investigating
scalar fields within volumetric data sets. The position of
an isosurface, as well as its relation to other neighboring
isosurfaces, can provide clues to the underlying structure
of the scalar field. In medical imaging applications, isosurfaces
permit the extraction of particular anatomical structures
and tissues. These isosurfaces are static in nature.
A more dynamic use of isosurfaces is called for in many
computational science applications, such as computational
fluid dynamics and atmospheric simulations. In such ap-
plications, scientists would ideally like to dynamically investigate
the scalar field in order to gain better insight into
simulation results.
As scientific computation demands higher accuracy and
state-of-the-art medical scanners increase in resolution, the
resulting data sets for visualization expand rapidly. The
sheer size of these data sets, as well as their structure,
pose major obstacles for interactive investigation. While
medical imaging data is usually structured in nature, other
scientific and engineering data sets frequently consist of
geometry represented by unstructured finite element grids.
Originally, isosurface extraction methods were restricted
to structured grid geometry, as such, early efforts focused
on extracting a single isosurface [1] from the volumetric
data set. Recently, in an effort to speed up isosurface ex-
traction, several methods were developed that could be
adapted to extraction of multiple isosurfaces from structured
[2], [3] as well as from unstructured geometry [4],
[5]. Nevertheless, for large data sets, existing methods do
not allow for interactive investigation of the data set, especially
for unstructured grids. Defining n as the number
The authors are with the Department of Computer Science,
University of Utah, Salt Lake City, UT 84112 E-Mail: (yliv-
nat,hwshen,crj)@cs.utah.edu Web: http://www.cs.utah.edu/-sci/
of data cells and k as the number of cells intersecting a
given isosurface, most of the existing algorithms have time
complexity of O(n). While [2] has an improved time complexity
of O(k log(n/k)), the algorithm is only suitable for
structured hexahedral grids.
In this paper we introduce a new view of the underlying
domain. We call this new representation the span space.
Based on this new perspective, we propose a fast and effi-
cient, O(√n + k), isosurface extraction algorithm for both
structured and unstructured grids.
Section II investigates the underlying domain for structured
and unstructured problems and the new decomposition
of this domain is then proposed. The proposed Span
Space is then used in section III as a common backdrop for
comparing previous methods of isosurface extraction. Section
IV shows how the Span Space paradigm leads to an
efficient representation and fast isosurface extraction meth-
ods. In section V, we present several optimizations with
respect to both memory and time requirements. A fast
triangulation method for unstructured tetrahedral grid is
presented in Section VI. We conclude by analyzing the results
of testing the new algorithm on several science and
engineering applications.
II. The Span Space
Let φ : G → V be a given field and let D be a sample set over φ, such that,

D = {d_i},  d_i ∈ G × V,

where G ⊆ R^p is a geometric space and V ⊆ R^q is the associated value space, for some p, q ∈ Z. Also, let d = |D| be the size of the data set.

Given a set of samples D over a field φ : G → V, and given a single value v ∈ V, find

S = {g_i},  g_i ∈ G,  such that  φ(g_i) = v.     (1)

Note that S, the isosurface, need not be topologically simple.
Approximating an isosurface, S, as a global solution to
Eq. 1 can be a difficult task because of the sheer size, d, of
a typical science or engineering data set.
Data is often generated from 3D images or as solutions
to numerical approximation techniques, such as from finite
difference or finite element methods. These methods
naturally decompose the geometric space, G , into a set of
polyhedral cells, C, where the data points define the ver-
tices. Rather than finding a global solution one can seek
a local approximation within each cell. Hence, isosurface
extraction becomes a two-stage process: Locating the cells
that intersect the isosurface and then, locally, approximating
the isosurface inside each such cell. We focus our attention
on the problem of finding those cells that intersect
an isosurface of a specified isovalue.
On structured grids, the position of a cell can be represented
in the geometric space G . Because this representation
does not require explicit adjacency information
between cells, isosurface extraction methods on structured
grids conduct searches over the geometric space, G . The
problem as stated by these methods is defined as follows:
Approach 1 (Geometric Search) Given a point v ∈ V and given a set C of cells in G space where each cell is associated with a set of values {v_j} ⊂ V, find the subset of C which
an isosurface, of value v, intersects.
Efficient isosurface extraction for unstructured grids is
more difficult, as no explicit order, i.e. position and shape,
is imposed on the cells, only an implicit one that is difficult
to utilize. Methods designed to work in this domain
have to use additional explicit information or revert to a
search over the value space, V. The advantage of the latter
approach is that one needs only to examine the minimum
and maximum values of a cell to determine if an isosurface
intersects that cell. Hence, the dimensionality of the
problem reduces to two for scalar fields.
Current methods for isosurface extraction over unstructured
grids, as well as some for structured grids, view the
isosurface extraction problem in the following way:
Approach 2 (Interval Search) Given a point v ∈ V and given a set of cells represented as intervals, I = {[a_i, b_i]}, find the subset I_s such that,

I_s ⊆ I  and  a_i ≤ v ≤ b_i  ∀ [a_i, b_i] ∈ I_s,

where a norm should be used when the dimensionality of V is greater than one.
Posing the search problem over intervals introduces some
difficulties. If the intervals are of the same length or are
mutually exclusive they can be organized in an efficient way
suitable for quick queries. However, it is much less obvious
how to organize an arbitrary set of intervals. Indeed, what
distinguishes these methods from one another is the way
they organize the intervals rather than how they perform
searches.
A key point is that the minimum and maximum values
are given over the same dimension. More formally, the
minimum and maximum values are represented over a basis that includes only one unit vector. This degenerate basis
is the cause for the above difficulties. We should be able
to obtain a simpler representation if we use a basis that
includes two unit vectors, one for the min value and one
for the max value. Better still, the maximum separation
between the representation of the min and max values will
occur when these two unit vectors are perpendicular to
each other. We are, therefore, led to a new representation,
a point in a plane, using the natural coordinate system to
represent the minimum and maximum values.
The method proposed in this paper addresses the problem
of isosurface generation over unstructured grids and
searches over the value space. Our approach, nevertheless,
is not to view the problem as a search over intervals in V
but rather as a search over points in V 2 . We start with an
augmented definition of the search space.
Definition (The Span Space) Let C = {c_i} be a given set of cells; define a set of points P = {p_i} over V^2 such that,

p_i = (a_i, b_i),  where  a_i = min_j {v_j}_i  and  b_i = max_j {v_j}_i,

and {v_j}_i are the values of the vertices of cell i.
Though conceptually not much different than the interval
space, the span space will, nevertheless, lead to a simple
and near optimal search algorithm. In addition, the span
space will enable us to clarify the differences and commonalities
between previous interval approaches.
The benefit of using the span space is that points in
2D exhibit no explicit relations between themselves, while
intervals tend to be viewed as stacked on top of each other,
so that overlapping intervals exhibit merely coincidental
links. Points, do not exhibit such arbitrary ties and in this
respect lend themselves to many different organizations.
However, as we shall show later, previous methods grouped
these points in very similar ways, because they looked at
them from an interval perspective.
Using our augmented definition, the isosurface extraction
problem can be stated as,
Approach 3 (The Span Search) Given a set of cells, C, and its associated set of points, P, in the span space, and given a value v ∈ V, find the subset P_s ⊆ P, such that

∀ (x_i, y_i) ∈ P_s,  x_i < v < y_i.

We note that ∀ (x_i, y_i) ∈ P, x_i ≤ y_i, and thus the associated points will lie above the line y_i = x_i. A geometric perspective of the span search is given in Fig. 1.
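As a concrete illustration (not code from the paper), a span-space point reduces to the min/max pair of a cell plus the cell index, and the span search reduces to a simple predicate on that pair:

/* Illustrative span-space representation; names are not from the paper. */
#include <stdio.h>

typedef struct {
    float min;      /* a_i: minimum vertex value of the cell  */
    float max;      /* b_i: maximum vertex value of the cell  */
    int   cell;     /* index of the cell in the original grid */
} SpanPoint;

/* A cell intersects the isosurface exactly when min < v < max. */
static int intersects(const SpanPoint *p, float v)
{
    return p->min < v && v < p->max;
}

int main(void)
{
    SpanPoint pts[] = { {0.1f, 0.9f, 0}, {0.5f, 0.7f, 1}, {0.8f, 1.2f, 2} };
    float v = 0.75f;
    int i;
    for (i = 0; i < 3; i++)
        if (intersects(&pts[i], v))
            printf("cell %d intersects the isosurface at v = %g\n",
                   pts[i].cell, v);
    return 0;
}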
III. Previous Work
We now examine previous approaches to the problem of
isosurface generation.
A. Geometric Space Decomposition
Originally, only structured grids were available as an underlying
geometry. Structured grids impose order on the
given cell set. This fact helps to keep the geometric complexity
of the entire cell set in G . By utilizing this order,
methods based on the geometry of the data set could take
advantage of the coherence between adjacent cells.
Fig. 1. Search over the span space. A data cell is represented by
a point based upon the minimum and maximum values at the
vertices of the cell. The points in the shaded area represent the
cells that intersect the isovalue v.
A.1 Marching Cubes
Perhaps the most well known isosurface extraction
method to achieve high resolution results is the Marching
Cubes method introduced by Lorensen and Cline [1].
The marching cubes method concentrated on the approximation
of the isosurface inside the cells rather than on
efficient locations of the involved cells. To this end, the
marching cube method scans the entire cell set, one cell at
a time. The novelty of the method is the way in which it
decides for each cell whether the isosurface intersects that
cell and if so, how to approximate it.
A.2 Octrees
The marching cubes method did not attempt to optimize
the time needed to search for the cells that actually
intersect the isosurface. This issue was later addressed by
Wilhelms and Gelder [2], who employed an octree, effectively
creating a 3D hierarchical decomposition of the cell
set, C. Each node in the tree was tagged with the minimum
and maximum values of the cells it represents. These
tags, and the hierarchical nature of the octree, enable one
to trim off sections of the tree during the search and thus
restrict the search to only a portion of the original geometric
space. Wilhelms and Gelder did not analyze the time
complexity of the search phase of their algorithm. However,
octree decompositions are known to be sensitive to the underlying
data. If the underlying data contains some fluctuations
or noise, most of the octree will have to be traversed.
Fig. 13 is an example of such a data set, which ultimately
undermines any geometric decomposition scheme. In Appendix
A we present an analysis of the octree algorithm
and show that the algorithm has a worst case complexity
of O(k log n=k). Finally, octrees have primarily been
applied to structured grids and are not easily adapted to
deal with unstructured grids.
A.3 Extrema Graphs
Recently, Itoh and Koyamada [3] presented a new
method for generating isosurfaces over unstructured grids
using extrema graphs.
The search starts at a seed cell known to intersect the
isosurface, and propagates recursively to its neighbor cells.
Knowing how the isosurface intersects the current cell enables
the algorithm to move only to those neighbor cells
that are guaranteed to intersect the isosurface.
In order to find such a seed cell, Itoh and Koyamada
employed extrema graphs. The nodes of these graphs are
those cells that include local extrema vertices. Each arc
in the graphs has a list of the cells connecting its two end
nodes.
Given an isovalue, the extrema graph is first scanned to
locate arcs that span across the isovalue. The cells in
each such arc's list are then scanned sequentially until a
seed cell is found. Boundary cells must also be traversed;
hence the complexity of the algorithm is at best the size of
the boundary list, which Itoh and Koyamada estimate as
O(n^{2/3}).
Our analysis shows that the number of arcs can be O(n) in the worst case. Such a case occurs when the data exhibits small perturbations such that each node is a local extremum. In such a case, the number of arcs in the extrema
graph can be equal to the number of cells, though each arc
will contain only a single cell.
Storage requirements for the extrema graph method can
be high, since the propagation search requires four links
from each cell to its neighbors in addition to the maximum
and minimum values of its vertices. In addition, the algorithm
uses a queue during the propagating search, yet the
maximum required size of the queue is unknown in advance.
B. Value Space Decomposition
Decomposing the value space, rather than the geometric
space, has two advantages. First, the underlying geometric
structure is of no importance, so this decomposition works
well with unstructured grids. Second, for a scalar field in
3D, the dimensionality of the search is reduced from three
to only two.
B.1 The Span Filter
A key issue in isosurface extraction is the size of the
data set. Gallagher [5] addressed this issue by scanning
the data set and generating a compressed representation
suitable for isosurface extraction. The range of data values
is divided into sub-ranges, termed buckets. Each cell is then
classified based on the bucket its minimum value resides
in and on how many buckets the cell's range spans, i.e.
the span of the cell. Cells are then grouped according to
their span, and within each such group the cells are further
grouped according to their starting bucket. In each such
internal group, the representation is compressed according
to a unique id assigned to each cell. Rather than requiring
a span list for every possible span length, the method uses
one span list to catch all the cells that span more than a
Fig. 2. Span Filter. Shown is the ad hoc division of a field's range
into subranges called buckets. Each point, which represents a
data cell, is then assigned a min and max bucket, based upon the
point's min and max coordinate. The points are then grouped into
spans based upon the difference between their assigned buckets'
numbers. Span n represents all the spans with index larger than
some predefined index, i.e. 3 in this example.
predefined number of buckets.
Fig. 2 depicts the span filter organization over the span
space. Note that the compression over the cells' id is not
shown. For a given isovalue, v, the cells that intersect the
isosurface are those that lie above and to the left of the
dashed line.
The use of this perspective stresses the importance of
the first division into buckets. The entire organization of
the domain is controlled by only one set of parameters,
the position of the original buckets. While this may help
to ensure even distribution in the first span, it does not
provide control over the distribution of the cells in the other
spans. Furthermore, this division is not automated and
has to be crafted by trial and error for each new data set.
Finally, the search algorithm has a complexity of O(n) in
time.
B.2 The Active List
A different approach was taken by Giles and Haimes [4]:
to find the cells that intersect an isosurface incrementally.
Once an isosurface is found, then a neighbor isosurface,
with an isovalue close to the first one, can be found with
minimal effort.
The algorithm is based on two cell lists ordered by the
cell's minimum and maximum values and on Δ, the global
maximum range of any of the cells. When an isovalue
is first given, or if the change from the previous value is
greater than Δ, then an active cell list is formed. The active
list is first initialized with all the cells with a minimum
value between the given isovalue, v, and v - Δ, by consulting
the minimum list. The active list is then purged of the
cells with a maximum value less than v. If the isovalue is changed by less than Δ, then the active list is augmented
with the cells that lie between the previous isovalue, v and
Fig. 3. Active List. The dotted area represents the points that are initially put into the active list. The points in the dotted area below the horizontal line, v, are then removed from the active list.
When the new isovalue, nv is close to the current isovalue, v, only
the points in the striped area are added to the active list. The
points below the horizontal nv line, within both the striped and
the dotted areas, are then removed from the active list.
the new one, nv. The new cells are found by using one of
the two ordered lists, based upon whether the change was
positive or negative. The active list is then purged again of the cells that do not intersect the isosurface.
Fig. 3. depicts Giles' and Haimes' algorithm over the
span space. Though the algorithm does not explicitly partition
the space in advance, the use of the global maximum
cell span, Δ, does the same thing implicitly, as the width
of the area that needs to be scanned is constant. When
the change in the isovalue is greater than Δ, the algorithm must linearly scan all the cells in the range (nv - Δ, nv). Since Δ depends on the data set, the algorithm has no
control over the size of the scanned list. In two of our test
cases, Heart and Brain, there are few cells on the boundary
that have a very large span. This causes Δ to be so
large that the algorithm must linearly scan approximately
half of the data set. On the other hand, Δ might be too small, such that the neighborhood search may not be used
at all. Using the span perspective, Fig. 3, we can see that
when the isovalue is changed from v to nv the algorithm
will scan all the cells in the striped band but will then discard
those cells that are in the lower triangle of that band.
This triangle is usually the most dense part of the band, so
that a large number of cells must be scanned and then dis-
carded. If one scans across the entire range of the data set,
a typical change in the isovalue will be larger than 0.5%,
while, for a large data set, Δ will be much smaller, again
not taking advantage of neighboring isosurfaces. Finally,
the algorithm's complexity is still O(n) in time.
B.3 Sweeping Simplices
Recently, two of the authors, Shen and Johnson [6],
developed the sweeping simplices method for extracting
isosurfaces from unstructured three-dimensional meshes.
Their algorithm utilizes both coherence between adjacent
isosurfaces and explicit space decomposition.
Sweeping simplices uses two ordered cell lists, a sweep
list and a min list. Each element in the sweep list contains
a pointer to a cell, the cell's maximum value, and a flag.
The sweep list is then sorted according to the cell's maximum
value. The min list contains the minimum value for each
cell as well as a pointer to the corresponding element in
the sweep list and is ordered by the minimum values. The
initialization step requires a time of O(n log n).
Given an isovalue, the sweeping simplices algorithm
marks all the cells that have a minimum value less than
the given isovalue using the min list by setting the corresponding
flag in the sweep list. If an isovalue was previously
given, then the min list is traversed between the previous
isovalue and the new one. The corresponding flags in the
sweep list are then set or reset based on whether the new
isovalue is greater or smaller than the previous isovalue.
Once the flags are changed, the sweep list is traversed
starting at the first cell with a maximum value greater than
the new isovalue. The cells that intersect the isosurface are
those cells for which their corresponding flag is set. The
complexity of the algorithm is O(n) in both time and space.
The sweeping simplices algorithm uses a hierarchical
data decomposition. At the lowest level, the range of data
values is subdivided into several subgroups. Other levels
are created recursively by grouping consecutive pairs from
the previous level. At the top level there exists a single
subgroup whose range is that of the entire data set. The cells
are then associated with the smallest subgroup that contains
the cell. Each subgroup is then associated with a min
and sweep list as described before. Isosurface extraction is
accomplished by selecting for each level the subgroup that
contains the given isovalue and performing the search using
its min and sweep lists.
The space decomposition for the sweeping simplices al-
gorithm, as well as the marked cells for an isovalue pv, is
shown in Fig 4. The full dots are the marked cells. When
a new isovalue is selected, all the cells that lie between the
vertical lines pv and v are first marked. The cells that intersect
the isosurface are those marked cells that lie above the
horizontal line at v. Though sweeping simplices is faster
than the active list algorithm and does not depend on a
global \Delta, its space decomposition is not optimal. Each of
the groups whose range intersects the isovalue lines, Fig.
4, must be linearly scanned and each such group contains
an area outside the target isosurface region. We remark
that using the span space perspective, the second author
recently devised a more efficient space decomposition algorithm
that improved the overall performance of the sweeping
simplices algorithm.
B.4 Summary of Existing Methods
Previous value space decomposition algorithms use a
wide range of terminology and approaches. The use of
the span space provides a common ground on which these
methods can be compared. In effect, it was shown that
these methods use very similar approaches both in searching
and in space decomposition. All of these methods have
complexity of O(n) in both time and memory requirements.
Fig. 4. Sweeping Simplices. The range of the field is divided into
subranges that are, in turn, organized into levels. See text for
further details.
IV. The New Algorithm
A common obstacle for all the interval methods was that
the intervals were ordered according to either their maximum
or their minimum value. Both the sweep algorithm
and the min-max attempted to tackle this issue by maintaining
two lists of the intervals, ordered by the maximum
and minimum values. What was missing, however, was a
way to combine these two lists into a single list.
In the following, we present a solution to this obstacle.
Using the span space as our underlying domain, we employ
a kd-tree as a means for simultaneously ordering the cells
according to their maximum and minimum values.
A. Kd-Trees
Kd-trees were designed by Bentley in 1975 [7] as a data
structure for efficient associative searching. In essence, kd-trees
are a multi-dimensional version of binary search trees.
Each node in the tree holds one of the data values and has
two sub-trees as children. The sub-trees are constructed so
that all the nodes in one sub-tree, the left one for example,
hold values that are less than the parent node's value, while
the values in the right sub-tree are greater than the parent
node's value.
Binary trees partition data according to only one dimen-
sion. Kd-trees, on the other hand, utilize multidimensional
data and partition the data by alternating between each of
the dimensions of the data at each level of the tree.
B. Search over the Span Space Using Kd-Tree
Given a data set, a kd-tree that contains pointers to the
data cells is constructed. Using this kd-tree as an index to
the data set, the algorithm can now rapidly answer isosurface
queries. Fig. 5 depicts a typical decomposition of a
span space by a kd-tree.
Construction
The construction of the kd-trees can be done recursively
in optimal time O(n log n). The approach is to find the
Fig. 5. Kd Tree. The lines represent the structure of the kd-tree.
The vertical line root represents the first split of the span space
along the min coordinate. The next split, at level 1, is represented
by two horizontal lines that split the two major subregions along
the max coordinate. At level 2 of the tree, the split of the, now
four subspaces, is again along the min coordinate. The processes
continues until all of the points are accounted for.
median of the data values along one dimension and store it
at the root node. The data is then partitioned according
to the median and recursively stored in the two sub-trees.
The partition at each level alternates between the min and max coordinates.
An efficient way to achieve O(n log n) time is to recursively
find the median in O(n), using the method described
by Blum et al.[8], and partition the data within the same
time bound.
A simpler approach is to sort the data into two lists
according to the maximum and minimum coordinates, re-
spectively, in order O(n log n). The first partition accesses
the median of the first list, the min coordinate, in constant
time, and marks all the data points with values less than
the median. We then use these marks to construct the two
sub groups, in O(n), and continue recursively.
Though the above methods have complexity of
O(n log n), they do have weaknesses. Finding the median
in optimal time of O(n) is theoretically possible yet difficult
to program. The second algorithm requires sorting
two lists and maintaining a total of four lists of pointers.
Although it is still linear with respect to its memory re-
quirement, it nevertheless poses a problem for very large
data sets.
A simple (and we think elegant) solution is to use a
Quicksort-based selection [9]. While this method has a
worst case of O(n 2 ), the average case is only O(n). Fur-
thermore, this selection algorithm requires no additional
memory and operates directly on the tree. We note that
this algorithm performed at least four times faster on all of
our application data sets in section VII than the two sorted
lists algorithm.
Pseudo code for the kd-tree construction is given in Fig.
6.
build-kd-tree( array, size )
{
    // recursive build
    build( array, size, min );
}

build( array, size, criterion )
{
    // criterion is either the min or the max coordinate
    if ( size > 1 ) {
        select-median( array, size, criterion );
        build( array, size/2, other-criterion );
        build( array+1+size/2, (size-1)/2, other-criterion );
    }
}

select-median( array, size, criterion )
{
    // Use the Quicksort partition algorithm to rearrange the
    // array, based on the given criterion, such that the median
    // element is in array[size/2] and all the elements less
    // than the median are in array[0..size/2-1].
}

Fig. 6. Kd-Tree Construction.
It is clear that the kd-tree has one node per cell, or span point, and thus the memory requirement of the kd-tree is O(n).
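For concreteness, a C sketch of this construction over the span-space points is shown below; it uses a quickselect-style partition for the median step, and the names (SpanPoint, build_kd, select_median) are illustrative rather than the paper's:

/* Illustrative kd-tree construction over span-space points (not the
 * paper's code).  Each recursion level alternates the splitting key
 * between the min and the max coordinate; the median is moved into
 * the middle slot with a quickselect-style partition.               */
typedef struct { float min, max; int cell; } SpanPoint;

static float key(const SpanPoint *p, int use_min)
{
    return use_min ? p->min : p->max;
}

static void swap_pt(SpanPoint *a, SpanPoint *b)
{
    SpanPoint t = *a; *a = *b; *b = t;
}

/* Rearrange a[0..n-1] so that a[n/2] holds the median of the chosen
 * key and every element before it is not larger (quickselect).       */
static void select_median(SpanPoint *a, int n, int use_min)
{
    int lo = 0, hi = n - 1, k = n / 2;
    while (lo < hi) {
        float pivot = key(&a[(lo + hi) / 2], use_min);
        int i = lo, j = hi;
        while (i <= j) {
            while (key(&a[i], use_min) < pivot) i++;
            while (key(&a[j], use_min) > pivot) j--;
            if (i <= j) { swap_pt(&a[i], &a[j]); i++; j--; }
        }
        if (k <= j)      hi = j;
        else if (k >= i) lo = i;
        else             break;
    }
}

/* Build the implicitly represented kd-tree in place. */
void build_kd(SpanPoint *a, int n, int use_min)
{
    if (n <= 1) return;
    select_median(a, n, use_min);
    build_kd(a, n / 2, !use_min);
    build_kd(a + n / 2 + 1, n - n / 2 - 1, !use_min);
}

Calling build_kd(points, n, 1) arranges the array in the middle-root layout used by the pointerless representation described in the next section.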
Query
Given an iso-value, v, we seek to locate all the points
in Fig. 1 that are to the left of the vertical line at v and
are above the horizontal line at v. We note that we do not
need to locate points that are on these horizontal or vertical
lines if we assume non-degenerate cells, for which minimum
or maximum values are not unique. We will remove this
restriction later.
The kd-tree is traversed recursively by comparing the
iso-value to the value stored at the current root alternating
between the root's minimum and maximum values at odd
and even levels. If the root node is to the right (below) of
the iso-value line, then only the left (right) sub-tree should
be traversed. Otherwise, both sub-trees should be traversed
recursively. Furthermore, in this last case the root's other
value should also be compared to the given iso-value to
determine if the corresponding cell should be triangulated.
For efficiency we define two search routines, search-min-max and search-max-min. The dimension we are currently checking is the first named, and the dimension we still need
to search is named second. The importance of naming the
second dimension will be evident in the next section, when
we consider optimizing the algorithm.
Following is a short pseudo-code for the min-max routine:

search-min-max( iso-value, root )
{
    if ( root.min < iso-value ) {
        if ( root.max > iso-value )
            construct polygon(s) from root's cell;
        search-max-min( iso-value, root.right );
    }
    search-max-min( iso-value, root.left );
}
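A pointer-based C rendering of this pair of routines might look as follows; the Node structure and the emit() helper are illustrative and not taken from the paper:

/* Illustrative pointer-based kd-tree span search (not the paper's code). */
#include <stdio.h>

typedef struct Node {
    float        min, max;    /* span of the cell stored at this node */
    int          cell;        /* index of that cell                   */
    struct Node *left, *right;
} Node;

static void emit(int cell) { printf("cell %d intersects\n", cell); }

static void search_max_min(float iso, const Node *root);

/* Odd levels: the tree is split here on the min coordinate. */
static void search_min_max(float iso, const Node *root)
{
    if (root == NULL) return;
    if (root->min < iso) {
        if (root->max > iso)
            emit(root->cell);
        search_max_min(iso, root->right);
    }
    search_max_min(iso, root->left);
}

/* Even levels: the tree is split here on the max coordinate. */
static void search_max_min(float iso, const Node *root)
{
    if (root == NULL) return;
    if (root->max > iso) {
        if (root->min < iso)
            emit(root->cell);
        search_min_max(iso, root->left);
    }
    search_min_max(iso, root->right);
}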
Estimating the complexity of the query is not straight-
forward. Indeed, the analysis of the worst case was developed
by Lee and Wong [10] only several years after Bentley
introduced kd-trees. Clearly, the query time is proportional
to the number of nodes visited. Lee and Wong analyzed
the worst case by constructing a situation where all
the visited nodes are not part of the final result. Their
analysis showed that the worst case time complexity is
O(√n + k). The average case analysis of a region query is still an open problem, though observations suggest it is much faster than O(√n + k). In almost all typical applications k > √n, which suggests a complexity of only O(k). On the other hand, the complexity of the isosurface extraction problem is Ω(k), because it is bounded from below by the size of the output. Hence, the proposed algorithm, NOISE, is optimal, Θ(k), for almost all cases and is near optimal in the general case.
Degenerate Cells
A degenerate cell is defined as a cell having more than
one vertex with a minimum or maximum value. When a
given iso-value is equal to the extrema value of a cell, the
isosurface will not intersect the cell. Rather, the isosurface
will touch the cell at a vertex, an edge, or a face, based on
how many vertices share that extrema value. In the first
two cases, vertex or edge, the cell can be ignored. The last
case is more problematic, as ignoring this case will lead to
a hole in the isosurface. Furthermore, if the face is not
ignored, it will be drawn twice.
One solution is to perturb the isovalue by a small
amount, so that the isosurface will intersect the inside of
only one of those cells. Another solution is to check both
sides of the kd-tree when such a case occurs. While the
direct cost of such an approach is not too high as this can
happen at most twice, there is a higher cost in performing
an equality test at each level. We note that in all the
data sets we tested there was not a single case of such a
degeneracy.
V. Optimization
The algorithm presented in the previous section is not
optimal with regards to both the memory requirement and
search time. We now present several strategies to optimize
the algorithm.
A. Pointerless Kd-Tree
A kd-tree node, as presented previously, must maintain
links to its two sub-trees. These links introduce a high cost
in terms of memory requirements. To overcome this defi-
Fig. 7. Two representations of a kd-tree and the relative position of
their nodes.
ciency, we note that in our case the kd-tree is completely
balanced. At each level, one data point is stored at the
node and the rest are equally divided between the two sub-
trees. We can, therefore, represent a pointerless kd-tree as
a one-dimensional array of the nodes. The root node is
placed at the middle of the array, while the first n/2 nodes represent the left sub-tree and the last (n - 1)/2 nodes the
right sub-tree, as shown in Fig. 7.
The memory requirement, per node, for a pointerless kd-tree
reduces to two real numbers, for minimum and maximum
values, and one pointer back to the original cell for
later usage. Considering that each cell, for a 3D application
with tetrahedral cells has pointers to four vertices, the
kd-tree memory overhead is even less than the size of the
set of cells.
The use of a pointerless kd-tree enables one to compute
the tree as an off line preprocess and load the tree using a
single read in time complexity of only O(n). Data acquisition
via CT/MRI scans or scientific simulations is generally
very time consuming. The ability to build the kd-tree as a
separate preprocess allows one to shift the cost of computing
the tree to the data acquisition stage. Hence, reducing
the impact of the initialization stage on the extraction of
isosurfaces for large data sets.
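The implicit layout can be navigated with simple index arithmetic; the helper below (illustrative, not the paper's code) returns the root slot and the two sub-tree blocks of any array range, and shows that collecting a sub-tree, as needed by the optimized search of the next subsection, is just a sequential sweep of its block:

/* Illustrative index arithmetic for the pointerless kd-tree layout.
 * A sub-tree occupying array slots [lo, hi) stores its root at the
 * middle slot; the left sub-tree is the block before it and the
 * right sub-tree the block after it.                                */
typedef struct { int root, left_lo, left_hi, right_lo, right_hi; } Block;

Block split_block(int lo, int hi)           /* half-open range [lo, hi) */
{
    Block b;
    b.root     = lo + (hi - lo) / 2;
    b.left_lo  = lo;         b.left_hi  = b.root;   /* [lo, root)  */
    b.right_lo = b.root + 1; b.right_hi = hi;       /* (root, hi)  */
    return b;
}

/* Collecting a sub-tree is a plain sequential sweep over its block. */
void collect_block(int lo, int hi, void (*emit)(int node_index))
{
    int i;
    for (i = lo; i < hi; i++)
        emit(i);
}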
B. Optimized Search
The search algorithm can be further enhanced. Let us
consider, again, the min-max (max-min) routine. In the
original algorithm, if the iso-value is less than the minimum value of the node, then we know we can trim the right sub-tree. Consider the case where the iso-value is greater than the node's minimum coordinate. In this case, we need
to traverse both sub-trees. We have no new information
with respect to the search in the right sub-tree, but, for
the search in the left sub-tree we know that the minimum
condition is satisfied. We can take advantage of this fact
by skipping over the odd levels from that point on. To
achieve this, we define two new routines, search-min and
search-max. Adhering to our previous notation, the name
search-min states that we are only looking for a minimum
value.
search-min-max( iso-value, root )
{
    if ( root.min < iso-value ) {
        if ( root.max > iso-value )
            construct polygon(s) from root's cell;
        search-max-min( iso-value, root.right );
        search-max( iso-value, root.left );
    }
    else
        search-max-min( iso-value, root.left );
}

search-min( iso-value, root )
{
    if ( root.min < iso-value ) {
        construct polygon(s) from root's cell;
        search-skip-min( iso-value, root.right );
        collect( root.left );
    }
    else
        search-skip-min( iso-value, root.left );
}

search-skip-min( iso-value, skip-node )
{
    if ( skip-node.min < iso-value )
        construct polygon(s) from skip-node's cell;
    search-min( iso-value, skip-node.right );
    search-min( iso-value, skip-node.left );
}

collect( sub-tree )
{
    for (each leaf node)
        construct polygon(s) for leaf's cell.
    // Note: the leaf nodes are organized sequentially
    // and thus there is no need to descend this subtree.
}

Fig. 8. Optimized Search
Examining the search-min routine, we note that the maximum
requirement is already satisfied. We do not gain new
information if the isovalue is less than the current node's
minimum and again only trim off the right sub-tree. If the
iso-value is greater than the node's minimum, we recursively
traverse the right sub-tree, but with regard to the
left sub-tree, we now know that all of its points are in the
query's domain. We therefore need only to collect them.
Using the notion of a pointerless kd-tree as proposed in Section V-A, any sub-tree is represented as a contiguous block of
the tree's nodes. Collecting all the nodes of a sub-tree requires
only sequentially traversing this contiguous block.
Pseudo code of the optimized search for the odd levels
of the tree, i.e. searching for minima is presented in Fig.
8. The code for even levels, searching for maxima, is essentially
the same and uses the same collect routine.
C. Count Mode
Extracting isosurfaces is an important goal, yet in a particular
application one may wish only to know how many
cells intersect a particular isosurface. Knowing the number
of cells that intersect the isosurface can help one give
a rough estimate of the surface area of the isosurface on a
structured grid and on a "well behaved" unstructured grid.
The volume encompassed by the isosurface can also be estimated
if one knows the number of cells that lie inside the
isosurface as well as the number of cells that intersect it.
The above algorithm can accommodate the need for such
particular knowledge in a simple way. The number of cells
intersecting the isosurface can be found by incrementing
a counter rather than constructing polygons from a node
and by replacing collection with a single increment of the
counter with the size of the sub-tree, which is known without
the need to traverse the tree. To count the number of
cells that lie inside the isosurface, one need only look for
the cells that have a maximum value below the iso-value.
The worst case complexity of the count mode is only
O(√n). A complete analysis is presented in Appendix B.
It is important to note that the count mode does not depend
on the size of the isosurface. We shall show in Section
VII that such a count is extremely fast and introduces no
meaningful cost in time. The count mode thus enables an
application to quickly count the cells that intersect the iso-surface
and allocate and prepare the appropriate resources
before a full search begins.
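As an illustration (not the paper's code), the basic traversal can be turned into a counter over the pointerless array layout as sketched below; the paper's optimized count additionally skips redundant levels and adds whole block sizes instead of descending into fully qualifying sub-trees, which is what yields the O(√n) bound:

/* Illustrative, unoptimized count mode over the pointerless layout:
 * the same traversal as the search, but each qualifying node just
 * increments a counter.                                              */
typedef struct { float min, max; int cell; } SpanPoint;

static int count_max_min(const SpanPoint *a, int lo, int hi, float iso);

/* Odd levels: block [lo, hi) is split on the min coordinate. */
static int count_min_max(const SpanPoint *a, int lo, int hi, float iso)
{
    int mid, n = 0;
    if (lo >= hi) return 0;
    mid = lo + (hi - lo) / 2;
    if (a[mid].min < iso) {
        if (a[mid].max > iso)
            n++;                                   /* root qualifies */
        n += count_max_min(a, mid + 1, hi, iso);   /* right block    */
    }
    n += count_max_min(a, lo, mid, iso);           /* left block     */
    return n;
}

/* Even levels: block [lo, hi) is split on the max coordinate. */
static int count_max_min(const SpanPoint *a, int lo, int hi, float iso)
{
    int mid, n = 0;
    if (lo >= hi) return 0;
    mid = lo + (hi - lo) / 2;
    if (a[mid].max > iso) {
        if (a[mid].min < iso)
            n++;
        n += count_min_max(a, lo, mid, iso);       /* left block  */
    }
    n += count_min_max(a, mid + 1, hi, iso);       /* right block */
    return n;
}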
D. Neighborhood Search
The Sweeping Simplices and the Active List algorithms
were designed to take advantage of coherence between iso-surfaces
with close isovalues. We now present a variant of
the proposed algorithm that also takes advantage of such
coherence.
By examining Fig. 10 we see that if an isovalue pv is
changed to v, then the set of cells that intersect the new
isosurface can be generated by adjusting the current set of
cells. In essence, if v > pv then we need to remove the cells that lie in the bottom rectangle and add those that lie in the right rectangle. If v < pv the add and remove roles of
these rectangles are flipped. As opposed to the previous
methods, which decompose the space specifically for small
changes in the isovalue, we can use the kd-tree decomposition
as is. This, in turn, means that at any time, either
the regular or the neighborhood search can be performed
over the same data structure and thus we can choose which
one will likely be the best one based on the current esti-
mation. The new set of cells is achieved by performing two
searches. First the kd-tree is searched for cells that need
to be removed. A second search is then performed to find
new cells to add to the list. Fig. 9 depicts a pseudo code
for a part of the second search.
The neighborhood search can benefit when the change
in the isovalue is small and only a small number of cells
needs to be added or removed, especially in the count mode.
However, there are several disadvantages in using this type
of search, as was the case in previous methods. First, an
active cell list must be maintained that adds more overhead
both in time and memory. Second, each node in the kd-tree
must maintain yet another pointer to the cell entry in
the active list so that it can be removed quickly without
traversing the active list. Finally, if the number of cells that
belong to both the current and the new cell list is small, the
effort to find the new isosurface is doubled.
near-search-min-max( pv, v, node )
{
    if ( node.min < pv )
        near-search-max-min( pv, v, node.right );
    else if ( node.min > v )
        near-search-max-min( pv, v, node.left );
    else {
        if ( node.max > v )
            add node;
        near-search-max-min( pv, v, node.right );
        near-search-max-min( pv, v, node.left );
    }
}

Fig. 9. Neighborhood Search - Pseudo Code
Fig. 10. NeighborhoodSearch. The points in the dotted area represent
cells that are intersected by both the current isosurface and the
new isosurface. The points (cells) in the right striped area should
be added to the isosurface while the points (cells) in the lower
striped area should be removed from the isosurface.
We remark that with the current performance of the algorithm
and currently available hardware, the bottleneck is
no longer in finding the isosurface or even computing it,
but rather in the actual time it takes to display it.
VI. Triangulation
Once a cell is identified as intersecting the isosurface, we
need to approximate the isosurface inside that cell. Toward
this goal, the marching cubes algorithm checks each of the
cell's vertices and marks them as either above or below the
isosurface. Using this information and a lookup table, the
algorithm identifies the particular way the isosurface intersects
the cell. The marching cubes, and its many variants,
are designed for structured grids though they can be applied
to unstructured grids as well.
We propose a new algorithm for unstructured grids of
tetrahedral cells. We first note that if an isosurface intersects
inside a cell, then the vertex with the maximum
value must be above the isosurface and the vertex with the
minimum value must be below it.
Fig. 11. Triangulation. The vertices are numbered according to ascending
values.
To take advantage of this fact, we reorder the vertices of
a cell according to their ascending values, say v1 to v4, a
priori, in the initialization stage. When the cell is determined
to intersect the isosurface, we need only to compare
the iso-value against at most the two middle vertices. Since
there are only three possible cases: only v1 is below the
isosurface, only v4 is above the isosurface, or fv1,v2g are
below and fv3, v4g are above see Fig. 11. Moreover, the
order of the vertices of the approximating triangle(s), such
that the triangle(s) will be oriented correctly with respect
to the isosurface, is known in advance at no cost. We can
further take advantage of the fact that there are only four
possible triangles for each cell and compute their normals
a priory. This option can improve the triangulation time
dramatically yet it comes with a high memory price tag.
VII. Results
To evaluate the proposed algorithms, we have done extensive
tests on various data sets. The tests were carried
on SGI (R4400, 150MHz) workstations with 256Mb and
640Mb of memory.
A. The data sets
We have used several data sets from a variety of sources.
Table
I shows the characteristics of these models. The first three data sets consist of bio-electric field problems solved using the finite element method on unstructured tetrahedral grids, Fig. 14, 15, 16. Head is a 128^3 MRI scan of a human head, Fig. 12. The FD, Fluid Dynamics, data set is computed from a 256^3 spectral CFD simulation, Fig. 13. We also used sub-sampled sets of this large data set at smaller sizes.
TABLE I: Data Sets
Model  Source  Type  Vertices  Cells
Heart FEM U-grid 11504 69892
Torso FEM U-grid 201142 1290072
Brain FEM U-grid 74217 471770
Head MRI S-grid 2M 2048383
FD-128 FEM S-grid 2M 2048383
1 NOTE: We will submit color figures for the final paper.
B. Benchmarks
The algorithm was tested both with respect to CPU run
time and its complexity relative to a given data set. Each
test included 1000 random value isosurface extractions. Table
II shows the distribution of the number of cells in the
isosurfaces for the different models. The Brain model is an
example of a non-uniform cell size and position distribu-
tion. Some of the cells had a very large span that would have caused worst-case performance in previous isosurface extraction algorithms. We performed two tests on this model: the first using iso-values from the entire model domain and the second checking only a small dense area.
In this paper, we concentrated on finding the cells that
intersect an isosurface and performing fast triangulation
on tetrahedral cells. We therefore did not measure the
triangulation of the structured grid model. For these data
sets we issued a call to an empty stub function for each cell
that intersects the iso-surface, therefore introducing some
cost per intersected cell.
TABLE II: Isosurface Statistics
Cells in Isosurfaces (minimum / maximum / average)
Torso
Brain:
partial 5287 26710 10713
full 12 14756 25
Head 8 610291 61091
TABLE III: Performance Statistics
(columns: isosurface size, nodes checked, overhead, max. collected)
Heart 1617 687 473 17472
Torso 8001 3487 2679 20156
partial 10713 2295 1570 14742
full
Head 61091 4568 3735 512095
C. Analysis
Table
III shows the performance of the algorithm with
respect to the size of an average isosurface.
TABLE IV: CPU Time
(columns: build [sec]; count [msec] and count per cell [nano-sec]; search^a [msec] and search per cell [nano-sec])
Heart 7.6 0.4 5.7 7.0 43
Torso 27.1 2.2 1.7 43.8 55
partial 1.5 3.2 53.9 50
full 1.0 2.1 1.1 440
Head 35.2 3.0 1.4 31.4 15
2.3 0.9 3.5 7.7 31
FD-128 22.6 2.9 1.4 69.2 34
a Search times include triangulation for unstructured grids only.
The first column was taken verbatim from Table II. The Nodes Checked column represents the average number of tree nodes that were actually examined by the algorithm, of which the Overhead column counts those that were not part of the final isosurface. For example, the average isosurface in the FD-128 case intersected 172,247 cells, yet the algorithm had to examine only 4,489 tree nodes in order to locate these cells. Out of the 4,489 nodes that were checked, 3,405 nodes did not intersect the isosurface and therefore represent an overhead in some sense. A key point in the algorithm is its ability to locate large groups of intersected cells, i.e. large subtrees in which all of the nodes represent cells that are intersected by the isosurface. Once such a subtree is located, there is no need to traverse this subtree, as its leaf nodes form a continuous block. The largest such subtree that was found in a particular data set is depicted under the Collected column of the table. In the case of our previous example, FD-128, the largest such subtree contained 512,095 nodes.
The algorithm consistently examined many fewer nodes
than the size of the extracted isosurface. The only exception
was the full Brain data set where the average isosurface
was more or less empty. Even in this pathological
case, the number of cells that were examined was small,
only 0.43%. This is a case where the algorithm is not optimal, as k < √n, yet the overhead is negligible. Overall, the
overhead of examining extra nodes was kept at a minimum
and the collection scheme achieved excellent results.
The overhead of the search phase was kept at roughly 3√n, which does not depend on the size of the resulting isosurface, as predicted by the count mode analysis. CPU run time
is shown in Tab. IV. The initialization step is measured
in seconds while the count and search are in milliseconds.
All numbers represent the average run time per query. The
search includes triangulation for the unstructured grid data
sets only, using the proposed fast triangulation algorithm.
The time requirements for the count mode were kept to a
few milliseconds, even for very large data sets with corre-
spondingly large numbers of isosurfaces. The search optimization
has clearly benefited from the collect routine, as
is evident by the large collected blocks.
The performance of the algorithm should be viewed with
respect to its main goal, that is, locating the cells that intersect
the isosurface. In this respect, i.e. the count mode,
the CPU time requirements were as low as a few milliseconds
- even for large data sets - and exhibit a complexity of only O(√n), i.e., no dependency on the size of the isosurface
was noticed. The search mode CPU time is clearly
dominated by the size of the isosurface, as each intersected
cell must be examined and triangulated. In the case of
the unstructured grid datasets, the entire process of search
and triangulation was about 50ms. However, for the large
structured grid datasets, the average size of the isosurfaces
was much larger and caused the total time to increase to
approximately 0.8 seconds.
VIII. Conclusions
We presented the "Near Optimal IsoSurface Extraction"
(NOISE) algorithm, which has a worst-case performance of
O(√n + k). The algorithm is near optimal in the sense that
for the typical case, k > √n, NOISE is optimal, while for
the rest of the cases the overhead is negligible. The memory
requirement for NOISE is O(n), while the preprocess step
has a complexity of O(n log n) and can be performed offline.
If the preprocessing is done offline, its results can be loaded
in O(n).
The algorithm performs well for large and small data sets
and for any size of isosurface. The number of cells that
intersect an isosurface can also be found in O(√n) time,
which enables fast rough estimates of the surface area and
the corresponding volume encompassed by the isosurface.
We were able to create the NOISE algorithm by projecting
the data onto a new space, termed the span space,
which, in turn, lends itself to a simple decomposition utilizing
a kd-tree. Furthermore, the span space can serve as a
common ground on which other methods can be compared
and analyzed.
We also presented a fast triangulation scheme based on
a one time pre-process reorganization of the cells' vertices.
Acknowledgments
This work was supported in part by the National Science
Foundation and the National Institutes of Health. The authors
would like to thank K. Coles and J. Painter for their
helpful comments and suggestions. We wish to thank the
Los Alamos National Laboratory for the use of their
facilities and the Head data set. The FD data set is courtesy
of Shi-Yi Chen of LANL. Furthermore, we appreciate
access to facilities that are part of the NSF STC for Computer
Graphics and Scientific Visualization.
Appendices
A. Worst Case Analysis for Octree Isosurface
Extraction
Wilhelms and Van Gelder did not analyze the time complexity
of their octree-based isosurface extraction algorithm,
Section III-A.2. We now present a worst-case analysis of
their method.
We first note that the octree used by Wilhelms and
Van Gelder is derived from the geometry of the data set and
is only augmented by the minimum and maximum values
of the cells in the tree. As such, the octree relies solely
on geometry to group cells with close field values. On the
other hand, the octree is guaranteed to be balanced. Also
note that the data cells occur only on the leaves of the tree.
For simplicity, consider first the 1D case of a binary tree
with n leaves. For a given k, we seek one of the groups of k
leaves with the highest cost to locate. For k = 1 the cost is
log n; this suggests an estimate of O(k log n) for the worst
case. This is clearly an overestimate as many segments of
the paths to these k cells are shared. When k = 2, the two
paths from the root must share several intermediate nodes.
The maximum cost will occur when only the root node is
shared. Therefore,

    T(n, 2) = 2 log n + 1,

which, for general k, leads to

    T(n, k) = k log(n/k) + 2k - 1.

As an example, T(n, n) = 2n - 1, since a binary tree
with n leaves has 2n - 1 nodes.
The general case for a d-dimensional tree follows immediately
from the binary case. Let k again denote the number of leaves sought.
The solution to the recursive formula is

    T(n, k) = (k/d) log(n/k) + O(k).

For the special case of an octree, d = 3, this gives

    T(n, k) = (k/3) log(n/k) + O(k),

and a complexity of O(k + k log(n/k)).
B. Performance Analysis for the Count Mode
A node in a kd-tree holds information regarding only the
value used to split the current tree. This forces a search
algorithm always to traverse at least one subtree. The best
case performance for the count mode is thus O(log n).
We now examine the worst case complexity of the count
mode. Referring to the optimized version, section V-A, we
find two cases. When the isovalue is less than the value at
the root of the tree we need to traverse only one subtree.
Otherwise, both subtrees are traversed, yet for one of them
we now know that the min or max condition is satisfied.
Clearly the worst case involves the second case, for which

    T(n) = T(n/2) + T_s(n/2) + O(1),    (3)

where T_s is the cost of traversing a subtree for which the
min or max condition is already known to be satisfied.
For the case where the min or max condition is satisfied
there are again two cases. These cases, however, are different
from each other only with respect to whether one
of the subtrees is completely empty or full. In both these
cases, only one subtree is descended. Moreover, the next
level of this subtree can be skipped and the algorithm descends
directly to both sub-subtrees. Note that the root of
the subtree still needs to be checked. Therefore,

    T_s(n) = 2 T_s(n/4) + O(1) = O(2^(log_4 n)) = O(√n).    (4)
Substituting Eq. 4 in Eq. 3 and using Eq. 2 we get

    T(n) = T(n/2) + O(√n) = O(√n).
Hence a complexity of O(√n).
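To make the traversal pattern behind this bound concrete, the following C++ sketch shows one possible shape of the count-mode recursion for the branch analyzed above, in which the min condition already holds for a whole subtree. The node layout and field names are assumptions made for this sketch, not the data structure used in the paper.

    #include <cstddef>

    // Hypothetical span-space kd-tree node; the field names are illustrative only.
    struct KdNode {
        float       cellMax;   // max value of the cell stored at this node
        float       split;     // splitting value (a min or a max, depending on the level)
        std::size_t size;      // number of nodes in this subtree
        KdNode*     left;      // smaller splitting value
        KdNode*     right;     // larger splitting value
    };

    // Count-mode traversal of a subtree in which the min condition is already
    // known to hold for every cell, so only the max condition (cellMax > iso)
    // remains.  Levels alternate between splitting on min and splitting on max.
    std::size_t countMaxOnly(const KdNode* n, float iso, bool splitsOnMin) {
        if (n == nullptr) return 0;
        std::size_t count = (n->cellMax > iso) ? 1 : 0;      // the node's own cell
        if (splitsOnMin) {
            // The split value is a min we no longer care about, so this level is
            // "skipped": descend directly into both children (both sub-subtrees).
            return count + countMaxOnly(n->left,  iso, false)
                         + countMaxOnly(n->right, iso, false);
        }
        if (iso < n->split) {
            // Every cell in the right subtree has max >= split > iso, so that
            // whole subtree is collected without being traversed.
            return count + (n->right ? n->right->size : 0)
                         + countMaxOnly(n->left, iso, true);
        }
        // No cell in the left subtree can satisfy max > iso: prune it entirely.
        return count + countMaxOnly(n->right, iso, true);
    }

In the worst case the min level contributes two recursive calls and the max level one, i.e. two subproblems of a quarter of the size, which matches the O(√n) behaviour derived above.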
--R
"Marching cubes: A high resolution 3d surface construction algorithm"
"Octrees for faster isosurface generation"
"Isosurface generation by using extrema graphs"
"Advanced interactive visualization for CFD"
"Span filter: An optimization scheme for volume visualization of large finite element models"
"Sweeping simplicies: A fast iso-surface extraction algorithm for unstructured grids"
"Multidimentional binary search trees used for associative search"
"Time bounds for selection"
Algorithms in C
"Worst-case analysis for region and partial region searches in multidimentional binary search trees and balanced quad trees"
"Analysis of range searches in quad trees"
--TR
--CTR
Christopher Johnson , Steven G. Parker , Charles Hansen , Gordon L. Kindlmann , Yarden Livnat, Interactive Simulation and Visualization, Computer, v.32 n.12, p.59-65, December 1999
Han-Wei Shen, Isosurface extraction in time-varying fields using a temporal hierarchical index tree, Proceedings of the conference on Visualization '98, p.159-166, October 18-23, 1998, Research Triangle Park, North Carolina, United States
Takayuki Itoh , Yasushi Yamaguchi , Koji Koyamada, Volume thinning for automatic isosurface propagation, Proceedings of the 7th conference on Visualization '96, p.303-ff., October 28-29, 1996, San Francisco, California, United States
T. Todd Elvins, Visfiles: visualizing simulation data, ACM SIGGRAPH Computer Graphics, v.33 n.1, February 1999
Benjamin Vrolijk , Charl P. Botha , Frits H. Post, Fast time-dependent isosurface extraction and rendering, Proceedings of the 20th spring conference on Computer graphics, April 22-24, 2004, Budmerice, Slovakia
Andrew S. Forsberg , David H. Laidlaw , Andries van Dam , Robert M. Kirby , George E. Karniadakis , Jonathan L. Elion, Immersive virtual reality for visualizing flow through an artery, Proceedings of the conference on Visualization '00, p.457-460, October 2000, Salt Lake City, Utah, United States
Han-Wei Shen , Charles D. Hansen , Yarden Livnat , Christopher R. Johnson, Isosurfacing in span space with utmost efficiency (ISSUE), Proceedings of the 7th conference on Visualization '96, p.287-ff., October 28-29, 1996, San Francisco, California, United States
Chandrajit L. Bajaj , Valerio Pascucci , Daniel R. Schikore, Fast isocontouring for improved interactivity, Proceedings of the 1996 symposium on Volume visualization, p.39-ff., October 28-29, 1996, San Francisco, California, United States
James S. Painter , Hans-Peter Bunge , Yarden Livnat, Mantle convection visualization on the Cray T3D, Proceedings of the 7th conference on Visualization '96, p.409-ff., October 28-29, 1996, San Francisco, California, United States
Jinzhu Gao , Han-Wei Shen, Parallel view-dependent isosurface extraction using multi-pass occlusion culling, Proceedings of the IEEE 2001 symposium on parallel and large-data visualization and graphics, October 22-23, 2001, San Diego, California
P. Cignoni , C. Montani , E. Puppo , R. Scopigno, Optimal isosurface extraction from irregular volume data, Proceedings of the 1996 symposium on Volume visualization, p.31-38, October 28-29, 1996, San Francisco, California, United States
Caleb Lyness , Edwin Blake, Real time isosurface browsing, Proceedings of the 1st international conference on Computer graphics, virtual reality and visualisation, November 05-07, 2001, Camps Bay, Cape Town, South Africa
Yarden Livnat , Charles Hansen, View dependent isosurface extraction, Proceedings of the conference on Visualization '98, p.175-180, October 18-23, 1998, Research Triangle Park, North Carolina, United States
Chi-Keung Tang , Gérard Medioni, Extremal feature extraction from 3-D vector and noisy scalar fields, Proceedings of the conference on Visualization '98, p.95-102, October 18-23, 1998, Research Triangle Park, North Carolina, United States
Udeepta D. Bordoloi , Han-Wei Shen, Space Efficient Fast Isosurface Extraction for Large Datasets, Proceedings of the 14th IEEE Visualization 2003 (VIS'03), p.27, October 22-24,
Chandrajit L. Bajaj , Valerio Pascucci , Daniel R. Schikore, The contour spectrum, Proceedings of the 8th conference on Visualization '97, p.167-ff., October 18-24, 1997, Phoenix, Arizona, United States
Stefan Röttger , Martin Kraus , Thomas Ertl, Hardware-accelerated volume and isosurface rendering based on cell-projection, Proceedings of the conference on Visualization '00, p.109-116, October 2000, Salt Lake City, Utah, United States
Bill Hibbard, Vis files: computational field visualization, ACM SIGGRAPH Computer Graphics, v.35 n.4, November 2001
Bruno Lévy , Guillaume Caumon , Stéphane Conreaux , Xavier Cavin, Circular incident edge lists: a data structure for rendering complex unstructured grids, Proceedings of the conference on Visualization '01, October 21-26, 2001, San Diego, California
Reinhard , Charles Hansen , Steve Parker, Interactive ray tracing of time varying data, Proceedings of the Fourth Eurographics Workshop on Parallel Graphics and Visualization, September 09-10, 2002, Blaubeuren, Germany
Paolo Cignoni , Paola Marino , Claudio Montani , Enrico Puppo , Roberto Scopigno, Speeding Up Isosurface Extraction Using Interval Trees, IEEE Transactions on Visualization and Computer Graphics, v.3 n.2, p.158-170, April 1997
Takayuki Itoh , Yasushi Yamaguchi , Koji Koyamada, Fast Isosurface Generation Using the Volume Thinning Algorithm, IEEE Transactions on Visualization and Computer Graphics, v.7 n.1, p.32-46, January 2001
Yi-Jen Chiang , Cludio T. Silva, I/O optimal isosurface extraction (extended abstract), Proceedings of the 8th conference on Visualization '97, p.293-ff., October 18-24, 1997, Phoenix, Arizona, United States
Philip Sutton , Charles D. Hansen, Isosurface extraction in time-varying fields using a temporal branch-on-need tree (T-BON), Proceedings of the conference on Visualization '99: celebrating ten years, p.147-153, October 1999, San Francisco, California, United States
Klaus Engel , Rüdiger Westermann , Thomas Ertl, Isosurface extraction techniques for Web-based volume visualization, Proceedings of the conference on Visualization '99: celebrating ten years, p.139-146, October 1999, San Francisco, California, United States
C. L. Bajaj , V. Pascucci , D. Thompson , X. Y. Zhang, Parallel accelerated isocontouring for out-of-core visualization, Proceedings of the 1999 IEEE symposium on Parallel visualization and graphics, p.97-104, October 25-26, 1999, San Francisco, California, United States
Philip M. Sutton , Charles D. Hansen, Accelerated Isosurface Extraction in Time-Varying Fields, IEEE Transactions on Visualization and Computer Graphics, v.6 n.2, p.98-107, April 2000
Thomas Gerstner , Renato Pajarola, Topology preserving and controlled topology simplifying multiresolution isosurface extraction, Proceedings of the conference on Visualization '00, p.259-266, October 2000, Salt Lake City, Utah, United States
Yi-Jen Chiang, Out-of-Core Isosurface Extraction of Time-Varying Fields over Irregular Grids, Proceedings of the 14th IEEE Visualization 2003 (VIS'03), p.29, October 22-24,
Tao Ju , Frank Losasso , Scott Schaefer , Joe Warren, Dual contouring of hermite data, ACM Transactions on Graphics (TOG), v.21 n.3, July 2002
Marc van Kreveld , Ren van Oostrum , Chandrajit Bajaj , Valerio Pascucci , Dan Schikore, Contour trees and small seed sets for isosurface traversal, Proceedings of the thirteenth annual symposium on Computational geometry, p.212-220, June 04-06, 1997, Nice, France
Michael Burns , Janek Klawe , Szymon Rusinkiewicz , Adam Finkelstein , Doug DeCarlo, Line drawings from volume data, ACM Transactions on Graphics (TOG), v.24 n.3, July 2005
Ingo Wald , Heiko Friedrich , Gerd Marmitt , Philipp Slusallek , Hans-Peter Seidel, Faster Isosurface Ray Tracing Using Implicit KD-Trees, IEEE Transactions on Visualization and Computer Graphics, v.11 n.5, p.562-572, September 2005
Bartosz von Rymon-lipinski , Nils Hanssen , Thomas Jansen , Lutz Ritter , Erwin Keeve, Efficient Point-Based Isosurface Exploration Using the Span-Triangle, Proceedings of the conference on Visualization '04, p.441-448, October 10-15, 2004
Alexander Gre , Reinhard Klein, Efficient representation and extraction of 2-manifold isosurfaces using kd-trees, Graphical Models, v.66 n.6, p.370-397, November 2004
Hamish Carr , Jack Snoeyink , Ulrike Axen, Computing contour trees in all dimensions, Computational Geometry: Theory and Applications, v.24 n.2, p.75-94, February
Yi-Jen Chiang , Cludio T. Silva , William J. Schroeder, Interactive out-of-core isosurface extraction, Proceedings of the conference on Visualization '98, p.167-174, October 18-23, 1998, Research Triangle Park, North Carolina, United States
Leif P. Kobbelt , Mario Botsch , Ulrich Schwanecke , Hans-Peter Seidel, Feature sensitive surface extraction from volume data, Proceedings of the 28th annual conference on Computer graphics and interactive techniques, p.57-66, August 2001
Yarden Livnat , Xavier Tricoche, Interactive Point-Based Isosurface Extraction, Proceedings of the conference on Visualization '04, p.457-464, October 10-15, 2004
Kwan-Liu Ma , Steven Parker, Massively Parallel Software Rendering for Visualizing Large-Scale Data Sets, IEEE Computer Graphics and Applications, v.21 n.4, p.72-83, July 2001
Jim Cox , D. B. Karron , Nazma Ferdous, Topological Zone Organization of Scalar Volume Data, Journal of Mathematical Imaging and Vision, v.18 n.2, p.95-117, March
V. Pascucci , C. L. Bajaj, Time critical isosurface refinement and smoothing, Proceedings of the 2000 IEEE symposium on Volume visualization, p.33-42, October 09-10, 2000, Salt Lake City, Utah, United States
Lutz Kettner , Jarek Rossignac , Jack Snoeyink, The safari interface for visualizing time-dependent volume data using iso-surfaces and contour spectra, Computational Geometry: Theory and Applications, v.25 n.1-2, p.97-116, May
Yi-Jen Chiang , Ricardo Farias , Cludio T. Silva , Bin Wei, A unified infrastructure for parallel out-of-core isosurface extraction and volume rendering of unstructured grids, Proceedings of the IEEE 2001 symposium on parallel and large-data visualization and graphics, October 22-23, 2001, San Diego, California
George J. Grevera , Jayaram K. Udupa , Dewey Odhner, An Order of Magnitude Faster Isosurface Rendering in Software on a PC than Using Dedicated, General Purpose Rendering Hardware, IEEE Transactions on Visualization and Computer Graphics, v.6 n.4, p.335-345, October 2000
Mario Ohlberger , Martin Rumpf, Adaptive Projection Operators in Multiresolution Scientific Visualization, IEEE Transactions on Visualization and Computer Graphics, v.5 n.1, p.74-94, January 1999
Mario Ohlberger , Martin Rumpf, Adaptive Projection Operators in Multiresolution Scientific Visualization, IEEE Transactions on Visualization and Computer Graphics, v.4 n.4, p.344-364, October 1998
Steven Parker , Michael Parker , Yarden Livnat , Peter-Pike Sloan , Charles Hansen , Peter Shirley, Interactive Ray Tracing for Volume Visualization, IEEE Transactions on Visualization and Computer Graphics, v.5 n.3, p.238-250, July 1999
Steven Parker , Michael Parker , Yarden Livnat , Peter-Pike Sloan , Charles Hansen , Peter Shirley, Interactive ray tracing for volume visualization, ACM SIGGRAPH 2005 Courses, July 31-August
Tim Purcell, Parallel ray tracing on a chip, Practical parallel rendering, A. K. Peters, Ltd., Natick, MA, 2002 | unstructured grids;isosurface extraction;span space;kd-trees |
614335 | Interactive Time-Dependent Particle Tracing Using Tetrahedral Decomposition. | AbstractStreak lines and particle traces are effective visualization techniques for studying unsteady fluid flows. For real-time applications, accuracy is often sacrificed to achieve interactive frame rates. Physical space particle tracing algorithms produce the most accurate results although they are usually too expensive for interactive applications. An efficient physical space algorithm is presented in this paper which was developed for interactive investigation and visualization of large, unsteady, aeronautical simulations. Performance has been increased by applying tetrahedral decomposition to speed up point location and velocity interpolation in curvilinear grids. Preliminary results from batch computations [1] showed that this approach was up to six times faster than the most common algorithm which uses the Newton-Raphson method and trilinear interpolation. Results presented here show that the tetrahedral approach also permits interactive computation and visualization of unsteady particle traces. Statistics are given for frame rates and computation times on single and multiprocessors. The benefits of interactive feature detection in unsteady flows are also demonstrated. | INTRODUCTION
Unsteady particle tracing is a relatively new visualization technique that has emerged because of
the need to visualize unsteady or time-dependent data sets. Steady-state flow simulations only
require one set of grid and solution data to describe a flow, while unsteady flow simulations may
comprise of hundreds or thousands of time steps of data. Each time step has an associated grid and
solution file. The size of these data sets can run into hundreds of gigabytes [2,3].
Numerical techniques for visualizing unsteady flows mirror those used in experimental fluid
mechanics and include path lines, time lines and streak lines.
. path line: generated by tracing the path of a single particle (also called a particle path).
. time line: generated by tracing a line of particles which are all released at the same time.
. streak line: generated by continuously injecting particles from a fixed location.
Streak lines, also called filament lines, are the most popular visualization technique and also the
simplest to generate. In wind tunnels, they are produced by injecting smoke into flow [4]. Stream
lines, curves that are tangential to a vector field, are not generally used to visualize unsteady flows
because they do not show the actual motion of particles in the fluid but rather the theoretical
trajectories of particles with infinite velocity. In steady flows, stream lines are identical to path lines
and streak lines, but in unsteady flows they can differ significantly. For example, Fig. 1 shows
stream lines, path lines, streak lines, and time lines computed from an unsteady simulation of flow
around an oscillating airfoil. All techniques are shown at the 128th time step. The important flow
feature in this data set was the vortex shedding caused by stalling. Both the streak lines and time
lines revealed the vortices well but the stream lines and path lines did not. The latter techniques fail
to capture these features because they do not show the actual motion of the fluid over time.
Fig. 1. Comparison of stream lines, path lines, streak lines and time lines in an unsteady flow.
The goal of this study was interactive computation and visualization of streak lines from large
unsteady computational fluid dynamics (CFD) simulations. In computational flow visualization,
streak lines are generated by releasing particles at discrete intervals, usually in accordance with the
simulation time steps. The continuous injection of particles leads to a rapid growth of the number
of particles in the flow, all of which must be traced until they leave the flow field or until the
simulation ends. There may be several thousand active particles in an unsteady flow, so it is
essential that the advection or tracing process be as efficient as possible.
II. PREVIOUS WORK
Many algorithms have been presented for particle tracing in steady flows yet relatively few
consider the extension to unsteady flows. This extension is not trivial because the time varying
nature of the flow and grid adds complexity to almost every part of the algorithm [3].
Consequently, unsteady particle traces are usually computed in a batch process and then played
back upon completion [5,6,7,8]. A significant problem with this approach is that the particle
injection points must be chosen in advance. Usually, an excessive number of particles are released
(>10,000) to prevent missing important flow structures. A better approach is to interactively
position the streak line injectors and study the particle motion in real time.
Interactive unsteady particle tracing was achieved by Bryson and Levit [9] by simplifying the
algorithm and data set. They transformed the velocity vector field into a uniform (computational)
space to make the numerical integration simpler, and resampled the grid to enable the entire data set
to be stored in memory. However, many of the subtle features in the flow were lost as a result.
Particle tracing in computational space has recently come under scrutiny [10] and was shown to
have poor accuracy compared with physical space schemes.
III. PHYSICAL VS COMPUTATIONAL SPACE TRACING
Body-fitted or curvilinear grids are widely used to model the complex geometries of aerospace
vehicles. Some CFD flow solvers internally transform these curved grids into a uniform Cartesian
space, usually called computational space, to make numerical calculations simpler and more
efficient. On output, the solution data is generally transformed back to the physical grid space for
post analysis and visualization.
As with flow solvers, particle tracing algorithms may operate in both computational and
physical space. Computational space schemes require the curvilinear grid, and the associated
velocity field, to be remapped into computational space where the particles are then advected.
When the mapping is done as a preprocessing step for the whole grid, this makes particle tracing
very efficient [9].
The main disadvantage of tracing in computational space is that the transforming Jacobian
matrices are usually approximations, so the transformed vector field may be discontinuous. Also,
if there are irregularities in the grid, such as cells with collapsed edges, the transformed velocities
may be infinite [11]. Analyses by Sadarjoen et al. [10] and Hultquist [12] in steady flows have
shown that this mapping technique produces significant errors in distorted curvilinear grids.
Calculating Jacobian matrices in unsteady flows with moving grids is more inaccurate because each
matrix has three additional time-dependent terms which must be approximated [13].
Remark: It would be preferable and more accurate if the computational space data were
saved directly from the flow solver. However, there is no provision in the PLOT3D data
file format, used widely by NASA, for storing these fields. This motivated the
improvement of physical space particle tracing.
Physical space tracing schemes are preferred because the interpolation and integration
processes are done on the curvilinear grid which eliminates the need to evaluate Jacobian matrices
and transform the velocity field. The only disadvantage is that the task of locating particles is more
complicated and hence more expensive. This problem is addressed in this paper. An explicit point
location technique is presented which has yielded a significant speed-up over the most common
point location technique: the Newton-Raphson iterative method.
IV. PHYSICAL SPACE TRACING ALGORITHM
Physical space algorithms proceed by first searching out the element or cell which bounds a
given point. This is termed the cell search or point location process. Once found, the velocity is
evaluated at that point by interpolating the nodal velocities. In unsteady flows, the velocity
components usually need to be interpolated temporally as well as spatially. This necessitates
loading two or more time steps of data into memory. Intermediate positions of the grid may also
have to be interpolated if the grid changes in time. The particle's path is determined by solving the
differential equation for a field line:
    dr/dt = v(r, t),    (1)
where r is the particle's location and v the particle's velocity at time t. Integrating (1) yields:
    r(t + Δt) = r(t) + ∫_t^{t+Δt} v(r, t) dt.    (2)
The integral term on the right hand side can be evaluated numerically using a multi-stage method
(e.g., Runge-Kutta, Bulirsch-Stoer) or a multi-step method (e.g., backwards differentiation,
Adams-Bashforth). Issues concerning the accuracy and stability of these methods are discussed by
Darmofal and Haimes [14]. Regardless of how it is solved, the end result is a displacement which
when added to the current position, r(t), gives the new particle location at time t + Δt.
The essential steps in a time-dependent particle tracing algorithm are as follows:
1. Specify the injection point for a particle in physical space, (x,y,z,t).
2. Perform a point location to locate the cell that contains the point.
3. Evaluate the cell's velocities and coordinates at time t by interpolating between simulation time steps.
4. Interpolate the velocity field to determine the velocity vector at the current position, (x,y,z).
5. Integrate the local velocity field using equation (2) to determine the particle's new location at time t + Δt.
6. Estimate the integration error. Reduce the step size and repeat the integration if the error is too large.
7. Repeat from step 2 until particle leaves flow field or until t exceeds the last simulation time step.
It is important to note that step 5 may involve repeated applications of steps 2, 3, and 4 depending
on which numerical integration scheme is used. The 4th order Runge-Kutta scheme used in this
study actually requires three repetitions to advance from time t to t + Δt. Refer to Section IV.E for
more details.
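The following C++ skeleton illustrates how steps 2-7 fit together for a single particle. All type and function names here (insideGrid, integrateStep, stepAccepted) are placeholders invented for this sketch, standing in for the point-location, interpolation, and integration routines described in the rest of this section; this is not the interface of the actual implementation.

    #include <functional>

    struct Vec3 { double x, y, z; };

    struct Particle {
        Vec3   pos;           // current position (x, y, z)
        double t;             // current time
        double dt;            // current integration step size
        bool   alive = true;  // false once the particle leaves the flow field
    };

    void advanceParticle(Particle& p, double tEnd,
                         const std::function<bool(const Vec3&, double)>& insideGrid,
                         const std::function<Vec3(const Vec3&, double, double)>& integrateStep,
                         const std::function<bool(const Vec3&, const Vec3&)>& stepAccepted)
    {
        while (p.alive && p.t < tEnd) {
            if (!insideGrid(p.pos, p.t)) { p.alive = false; break; }   // step 2
            Vec3 next = integrateStep(p.pos, p.t, p.dt);               // steps 3-5
            if (!stepAccepted(p.pos, next)) { p.dt *= 0.5; continue; } // step 6
            p.pos = next;                                              // accept the step
            p.t  += p.dt;                                              // step 7: repeat until done
        }
    }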
A. Point location in tetrahedral cells
The core problem in all particle tracers is: given an arbitrary point x in physical space, which
cell does this point lie in and what are its natural coordinates. The natural coordinates, also called
barycentric or computational coordinates, are local non-dimensional coordinates for a cell. Strictly,
computational coordinates are different because they are globally defined.
The widely used trilinear interpolation function provides the opposite mapping to that required
for point location, that is, it determines the coordinates of x from a given natural coordinate
(x,h,z). Unfortunately, it cannot be inverted easily because of the non-linear products, so it is
usually solved numerically using the Newton-Raphson method.
By using a point location technique based on tetrahedral elements, the natural coordinates can
be evaluated directly from the physical coordinates. Tetrahedral elements permit the use of a linear
interpolation function to map from natural to physical coordinates:

    x(ξ, η, ζ) = x1 + (x2 - x1) ξ + (x3 - x1) η + (x4 - x1) ζ.    (3)

Note that x1, x2, x3, and x4 are the physical coordinates at the vertices of the tetrahedron (see Fig.
2). The natural coordinates (ξ, η, ζ) vary, as per usual, from 0 to 1 in the non-dimensional cell.
Fig. 2. Tetrahedron geometry in natural (non-dimensional) and physical coordinate spaces.
Equation (3) can be inverted analytically because it does not have any non-linear terms. The
solution for the natural coordinates at a given physical point (x, y, z) is

    (ξ, η, ζ)^T = (1/V) A (x - x1, y - y1, z - z1)^T.    (4)

The constants a11 ... a33 of the 3x3 matrix A are formed from the edge vectors of the tetrahedron:
the rows of A are (x3 - x1) x (x4 - x1), (x4 - x1) x (x2 - x1), and (x2 - x1) x (x3 - x1),
and the determinant, V (actually 6 times the volume of the tetrahedron), is given by:

    V = (x2 - x1) · [(x3 - x1) x (x4 - x1)].
The natural coordinates (ξ, η, ζ) can be evaluated in 104 floating point operations by implementing
the equations above. This figure can be halved by precomputing common terms before evaluating
the matrix coefficients and determinant.
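As an illustration of equation (4), the following C++ fragment computes (ξ, η, ζ) directly from the four vertices, here via Cramer's rule rather than by spelling out the individual matrix coefficients; the function and variable names are ours, and no attempt is made at the operation-count optimizations mentioned above.

    #include <array>

    struct Vec3 { double x, y, z; };

    static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

    // Scalar triple product a . (b x c), i.e. the determinant of the 3x3 matrix
    // built from the three vectors.
    static double det3(const Vec3& a, const Vec3& b, const Vec3& c) {
        return a.x * (b.y * c.z - b.z * c.y)
             - a.y * (b.x * c.z - b.z * c.x)
             + a.z * (b.x * c.y - b.y * c.x);
    }

    // Natural coordinates (xi, eta, zeta) of point p in the tetrahedron with
    // vertices x1..x4 (numbered as in Fig. 2).  The fourth barycentric weight
    // is 1 - xi - eta - zeta.
    std::array<double, 3> naturalCoords(const Vec3& p, const Vec3& x1, const Vec3& x2,
                                        const Vec3& x3, const Vec3& x4)
    {
        const Vec3 e2 = sub(x2, x1), e3 = sub(x3, x1), e4 = sub(x4, x1), b = sub(p, x1);
        const double V = det3(e2, e3, e4);        // 6 times the signed volume
        return { det3(b, e3, e4) / V,             // xi
                 det3(e2, b, e4) / V,             // eta
                 det3(e2, e3, b) / V };           // zeta
    }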
B. Tetrahedral decomposition
The hexahedral cells in curvilinear grids must be decomposed into tetrahedra in order to use
equation (4). Since the data sets for time-dependent flows are usually extremely large, it is
impractical to do the decomposition as a preprocessing step. It must be performed on the fly as
particles enter cells.
A hexahedral cell can be divided into a minimum of five tetrahedra (Fig. 3). This
decomposition is not unique because the diagonal edges alternate across a cell. Since the faces of a
hexahedron are usually non-planar, it is important to ensure that adjoining cells have matching
diagonals to prevent gaps. This is achieved by alternating between an odd and even decomposition
as illustrated in Fig. 3. In a curvilinear grid, the correct configuration is selected by simply adding
up the integer indices of a specific node (the node with the lowest indices was used in practice).
Choosing the odd configuration when the sum is odd and the even configuration when the sum is
even guarantees continuity between cells.
Remark: The odd and even configuration may be switched. This has no impact on the point
location but it can affect the velocity interpolation because different vertices are used. In
practice, because CFD grids have such a fine resolution, these differences are slight and
have only been detected in regions with high velocity gradients.
Fig. 3. Sub-division alternates between two configurations to ensure continuity between cells.
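A minimal sketch of the parity rule just described, assuming i, j, and k are the integer grid indices of the cell's lowest-indexed node; the enum and function names are ours.

    enum class Decomposition { Odd, Even };

    // Choose the tetrahedral decomposition of the hexahedral cell whose
    // lowest-indexed node has grid indices (i, j, k).  Neighbouring cells
    // differ by one in exactly one index, so they always get opposite
    // configurations and their diagonal faces match across the shared face.
    Decomposition pickDecomposition(int i, int j, int k) {
        return ((i + j + k) % 2 != 0) ? Decomposition::Odd : Decomposition::Even;
    }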
C. Cell-search scheme
Equation (4) allows the natural coordinates to be evaluated directly from the physical
coordinates with relatively little effort. There are four conditions which must be valid for the point
to lie within the tetrahedron. They are:

    ξ ≥ 0,    η ≥ 0,    ζ ≥ 0,    1 - ξ - η - ζ ≥ 0.    (5)
If any one of these is invalid then the point is outside the tetrahedron. In particle tracing algorithms,
this happens when particles cross cell boundaries. The problem then arises of which tetrahedron to
advance to next. The solution is quite simple since the natural coordinates tell you which direction
to move. For example, if x<0, the particle would have crossed the x=0 face. Similarly, if h<0 or
z<0, the particle would have crossed the h=0 or z=0 face respectively. If the fourth condition is
violated, i.e. (1-x-h-z)<0, then the particle would have crossed the diagonal face. The cell-search
proceeds by advancing across the respective face into the adjoining tetrahedron. The look-up tables
needed to identify this tetrahedron are given in Appendix I.
Occasionally, two or more conditions in (5) may be violated if a particle crosses near the corner
of a cell or if it traverses several cells at once. In such cases, the worst violator of the four
conditions is used to predict the next tetrahedron. Even if the bounding tetrahedron is not the
immediate neighbour, by always moving in the direction of the worst violator the search will
rapidly converge upon the correct tetrahedron. Tests in numerous grids, including complex C-grids
with singular points, have never turned up a case where this technique has failed to locate the
bounding tetrahedron.
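The following C++ sketch shows one way this "worst violator" walk can be organized. The two callbacks stand in for the natural-coordinate evaluation of equation (4) and for the neighbour look-up tables of Appendix I.C; their signatures, the TetRef type, and the iteration cap are assumptions made for this sketch.

    #include <array>
    #include <cstddef>
    #include <functional>

    // Identifies one of the five tetrahedra of a hexahedral cell (i, j, k).
    struct TetRef { int i, j, k, tet; };

    // Walk from 'start' until a tetrahedron satisfying all four conditions of
    // Eq. (5) is found.  Face indices 0..3 correspond to the xi, eta, zeta and
    // diagonal faces.
    TetRef locateTetrahedron(
        TetRef start,
        const std::function<std::array<double, 3>(const TetRef&)>& naturalCoords,
        const std::function<TetRef(const TetRef&, int)>& neighbourAcross,
        int maxSteps = 10000)
    {
        TetRef cur = start;
        for (int step = 0; step < maxSteps; ++step) {
            const std::array<double, 3> c = naturalCoords(cur);
            const double cond[4] = { c[0], c[1], c[2], 1.0 - c[0] - c[1] - c[2] };

            int worst = 0;                         // most negative condition
            for (int f = 1; f < 4; ++f)
                if (cond[f] < cond[worst]) worst = f;

            if (cond[worst] >= 0.0) return cur;    // all conditions hold: found it
            cur = neighbourAcross(cur, worst);     // cross the most violated face
        }
        return cur;  // safety net; not expected for points inside the grid
    }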
The cell search procedure described above should only be used if the cell being sought is
nearby, that is, within a few cells of the previous one. This is usually the case during particle
tracing since the majority of particles only cross one cell at a time. There are two situations when
the cell being sought is not likely to be nearby, these being: i) at the start of a particle trace and ii)
after jumping between grids in multi-zone data sets. In these circumstances, the cell search should
be preceded by another scheme in order to prevent weaving across a large grid one tetrahedron at a
time. We use the 'boundary search' technique described by Buning [15].
D. Velocity interpolation in a tetrahedron
In unsteady flows, the velocity field changes in time as well as space. Since the velocity fields
are only solved at discrete time steps and at discrete locations on the grid, they must be interpolated
in both time and space. In the present algorithm, these interpolations are handled separately.
D.1 Temporal Interpolation
The temporal interpolation is performed first using a linear function applied between the two
closest time steps. For example, at a given time t which lies between time steps t_l and t_(l+1), the
velocity u at an arbitrary grid node (i, j, k) is given by:

    u = (1 - δ) u_l + δ u_(l+1),    (6)

where the time fraction δ is (t - t_l)/(t_(l+1) - t_l). Note that equation (6) only evaluates the velocity at a
given node, no spatial interpolation is performed at this stage. Note also that if the grid moves in
time, a temporal interpolation of the grid positions is also required. We used a similar linear
interpolation function for this purpose. The temporal grid interpolation must precede the point
location, although the temporal velocity interpolation does not have to. It is, however, convenient
to perform both at once since the time fraction, δ, is the same for both interpolants. These temporal
interpolations are only applied locally to a single tetrahedron, that is, the current one being used by
the point location or velocity interpolation procedures.
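A minimal C++ sketch of this temporal interpolation applied to a single grid node; the same weight δ can be reused to interpolate the node's coordinates when the grid moves. Names are ours.

    struct Vec3 { double x, y, z; };

    static Vec3 lerp(const Vec3& a, const Vec3& b, double d) {
        return { a.x + d * (b.x - a.x), a.y + d * (b.y - a.y), a.z + d * (b.z - a.z) };
    }

    // Velocity at a grid node at time t, with tL <= t <= tL1 and uL, uL1 the
    // stored velocities at the two bracketing simulation time steps (Eq. 6).
    Vec3 temporalVelocity(const Vec3& uL, const Vec3& uL1, double t, double tL, double tL1) {
        const double delta = (t - tL) / (tL1 - tL);   // time fraction
        return lerp(uL, uL1, delta);
    }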
D.2 Spatial Interpolation
One of three techniques may be used for the spatial interpolation of velocity: physical space
linear interpolation [16], volume weighted interpolation [15], and linear basis function interpolation
[17]. All three are mathematically equivalent [1,18] and produce identical interpolation functions.
The authors showed previously [1] that the linear basis function was the most efficient technique
for this application because it reused the natural coordinates computed during point location. Using
the numbering convention in Fig. 2, the linear basis function for spatial velocity interpolation is:

    u(ξ, η, ζ) = (1 - ξ - η - ζ) u1 + ξ u2 + η u3 + ζ u4,    (7)

where ξ, η and ζ are the natural coordinates computed in equation (4), and u1, u2, u3, and u4 are
the velocity vectors at the vertices.
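In code, equation (7) amounts to a weighted sum of the four vertex velocities; this sketch assumes vertex 1 sits at the origin of the natural coordinate system, as in equation (3).

    struct Vec3 { double x, y, z; };

    // Linear basis-function interpolation of Eq. (7): velocity at the point
    // whose natural coordinates are (xi, eta, zeta), from the vertex
    // velocities u[0]..u[3] (vertices 1..4 of Fig. 2).
    Vec3 interpolateVelocity(const Vec3 u[4], double xi, double eta, double zeta) {
        const double w1 = 1.0 - xi - eta - zeta;
        return { w1 * u[0].x + xi * u[1].x + eta * u[2].x + zeta * u[3].x,
                 w1 * u[0].y + xi * u[1].y + eta * u[2].y + zeta * u[3].y,
                 w1 * u[0].z + xi * u[1].z + eta * u[2].z + zeta * u[3].z };
    }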
E. Numerical Integration Scheme
The numerical integration of equation (2) was performed with a 4th order Runge-Kutta
scheme. For a time-dependent flow with moving grid geometry, this takes the form:

    a1 = Δt v(r, t)
    a2 = Δt v(r + a1/2, t + Δt/2)
    a3 = Δt v(r + a2/2, t + Δt/2)
    a4 = Δt v(r + a3, t + Δt)
    r(t + Δt) = r(t) + (a1 + 2 a2 + 2 a3 + a4)/6,    (8)

where r is the particle position, v the velocity vector at that position, and Δt the time step. The four
stages of the Runge-Kutta scheme span three time values (t, t + Δt/2, and t + Δt) and therefore require
new grid and velocity data at each one. Fortunately, these only need to be interpolated in the
tetrahedron that surrounds a particle. However, a particle moves in both time and space during
integration, so it may lie in different tetrahedra during the four intermediate stages of equation (8).
Point location and velocity interpolation must therefore be performed after every stage.
Remark: There are numerical integration schemes which require fewer velocity evaluations
than the Runge-Kutta scheme, e.g., the Bulirsch-Stoer method [19]. These may be
substituted to further improve the performance of this unsteady particle tracing algorithm.
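A compact C++ sketch of one such Runge-Kutta step. The velocity callback hides the point location and the temporal and spatial interpolations that, as noted above, have to be repeated at every stage; its signature is an assumption of this sketch.

    #include <functional>

    struct Vec3 { double x, y, z; };
    static Vec3 add(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    static Vec3 mul(double s, const Vec3& a)      { return {s * a.x, s * a.y, s * a.z}; }

    // One 4th-order Runge-Kutta step of Eq. (8).
    Vec3 rk4Step(const Vec3& r, double t, double dt,
                 const std::function<Vec3(const Vec3&, double)>& velocity)
    {
        const Vec3 a1 = mul(dt, velocity(r, t));
        const Vec3 a2 = mul(dt, velocity(add(r, mul(0.5, a1)), t + 0.5 * dt));
        const Vec3 a3 = mul(dt, velocity(add(r, mul(0.5, a2)), t + 0.5 * dt));
        const Vec3 a4 = mul(dt, velocity(add(r, a3), t + dt));
        // r(t + dt) = r(t) + (a1 + 2 a2 + 2 a3 + a4) / 6
        return add(r, mul(1.0 / 6.0,
                          add(add(a1, mul(2.0, a2)), add(mul(2.0, a3), a4))));
    }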
F. Step size adaptation
If the integration step size is fixed at a constant value along the entire particle path, or regulated
to achieve a specified number of steps per cell, a particle may understeer around bends if the flow
changes direction rapidly. This can be prevented by using an adaptive step size control scheme
where the integration step size is changed according to an error tolerance.
The error tolerance can be computed using a standard numerical technique such as step
doubling [19] whereby a particle is advanced forward from a given point using a step size Δt and
then the process is repeated from the same point using two half steps of size Δt/2. The step size is
reduced if the distance between the end-points is greater than a specified tolerance; a number
usually deduced by trial and error. This technique was implemented and tested but was found to be
too expensive for interactive particle tracing because it performed a large number of point locations.
A heuristic technique for adapting step size was suggested by Darmofal and Haimes [20]. They
measured the angle between velocity vectors at successive points along a particle path to estimate
the change in velocity direction. We implemented that scheme and a very similar one which adapted
the step size according to the angle between successive line segments on a path line (Fig. 4).
Particle path
r
r r
cos
Fig. 4. Step size adaptation is based on the change in the path line direction.
Both of these schemes worked well in practice and ran approximately three times faster than the
step doubling algorithm. In both cases, the initial step size was estimated using:

    Δt = V^(1/3) / |u|,
where |u| is the magnitude of the velocity at the current position and V is the determinant from
equation (4). This ensures the initial particle displacement is commensurate with the length scale of
the tetrahedron. Following this initial estimate, the step size is halved if the angle θ_n is too large
(θ_n > 15°), doubled if it is too small (θ_n < 3°), or kept the same if it is in between these limits
(3° ≤ θ_n ≤ 15°). Through experimentation in several data sets we found this scheme produced
particle traces with the same accuracy as the step doubling scheme, based on an error tolerance of
10^-5 in the latter, when adaptation angles of 15 degrees (upper limit) and 3 degrees (lower limit)
were used.
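The heuristic can be written in a few lines of C++; the angle computation and the helper names are ours, with the 3 and 15 degree limits taken from the text.

    #include <algorithm>
    #include <cmath>

    struct Vec3 { double x, y, z; };
    static double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static double len(const Vec3& a)                { return std::sqrt(dot(a, a)); }

    // Angle, in degrees, between two successive path-line segments.
    double turningAngleDeg(const Vec3& seg1, const Vec3& seg2) {
        const double c = dot(seg1, seg2) / (len(seg1) * len(seg2));
        const double clamped = std::max(-1.0, std::min(1.0, c));
        return std::acos(clamped) * 180.0 / std::acos(-1.0);
    }

    // Halve the step above the upper limit, double it below the lower limit,
    // otherwise leave it unchanged.
    double adaptStep(double dt, double angleDeg,
                     double lowDeg = 3.0, double highDeg = 15.0) {
        if (angleDeg > highDeg) return 0.5 * dt;
        if (angleDeg < lowDeg)  return 2.0 * dt;
        return dt;
    }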
V. INTERACTIVE PERFORMANCE EVALUATION
In a previous paper by the authors [1], the performance and accuracy of the tetrahedral method
was compared with a conventional particle tracing algorithm in batch computations of streak lines.
The conventional algorithm, originally developed by Buning [15] for steady flows and extended to
unsteady flows by Lane [8], used an iterative Newton-Raphson method for point location and
trilinear functions for grid and velocity interpolations. The execution profiles confirmed that
velocity interpolation and point location were the most computationally expensive tasks in the
conventional algorithm. With the tetrahedral method, computation times were improved by up to a
factor of six while still maintaining the accuracy of the streak lines.
The tetrahedral method has since been implemented in C++ in the Virtual Windtunnel [9], an
interactive visualization system for studying unsteady flows. Results presented here demonstrate
the interactive performance of the tetrahedral method in that system.
The interactive tests were performed on an SGI Onyx with four 75MHz R8000 processors and
five gigabytes of RAM. The large memory capacity enables moderately sized unsteady flow
simulations to be loaded into physical memory, thus avoiding disk latencies when accessing the
data. The tapered cylinder data set [7] was used for all the interactive tests (see Fig. 5). It had 100
simulation time steps and 131k grid points (grid dimensions 64 x 64 x 32). Each time step
consisted of approximately 1.5 megabytes of velocity data.
Fig. 5. The tapered cylinder data set used for the interactive performance tests. The ten streak lines
shown here could be computed and rendered at up to 33 frames per second.
To find important flow features in an unsteady flow, a large number of particles must be
injected into the flow over time. In this respect, our objective was to determine how many streak
lines could be generated interactively with the tetrahedral method.
Fig. 6 shows the aggregate time taken to compute and render streak lines, as a function of the
number of streak lines, for 100 time steps. All computations were performed at run-time and did
not use any precomputed fields. Fig. 6 shows there is a linear relationship between the number of
streak lines and their computation time. It also shows a near linear speed-up when computations
are distributed over two or three processors. These linear trends occur because streak lines are
comprised of discrete particles and can be advected independently. When running on multi-
processors, the Virtual Windtunnel creates p-1 light weight processes, where p is the number of
processors, using SGI parallel programming primitives such as m_fork. One processor is always
reserved by the Virtual Windtunnel for operating system and graphics tasks.
Fig. 6. Time taken to compute and render 100 frames as a function of the number of streak lines.
Timings are given on one, two, and three processors.
The performance is expressed in terms of frame rate in Fig. 7. One frame corresponds to one
simulation time step. A frame rate of 10 frames per second is the accepted baseline for interactive
visualization [9]. The results in Fig. 7 show that up to 15 streak lines could be computed and
rendered at this rate on one processor. Likewise, up to 32 streak lines could be computed and
rendered on two processors and 44 streak lines on three processors at 10 frames/sec. It is
important to note that the number of particles in a streak line increases linearly over time, so earlier
frames are much quicker to compute than later ones. The frame rates shown here are averaged over
all 100 time steps. Higher frame rates could be obtained by reducing the number of time steps.
Fig. 7. Frame rates for computing and rendering streak lines on one, two, and three processors.
Another interesting metric is the particle advection rate. The total number of particle advections for
s streak lines over n time steps is given by:

    number of advections = s · n(n + 1)/2.
Dividing this by the time taken to compute the streak lines gives the particle advection rate. At the
interactive threshold (see Fig. 7), the advection rate is about 8k particles/sec on one processor, 14k
particles/sec on two processors, and 22k particles/sec on three processors.
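As a rough consistency check of these figures: with the 15 streak lines sustained at about 10 frames per second on one processor, the expression above gives 15 · (100 · 101)/2 = 75,750 advections, and the 100 frames take roughly 10 seconds, i.e. about 7,600 advections per second, in line with the quoted 8k particles/sec.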
VI. INTERACTIVE STREAK LINES AND FEATURE DETECTION
It was illustrated in Fig. 1 that particle paths of individual particles can give misleading results
because they sometimes fail to detect important flow features. By injecting large numbers of
particles we are more likely to capture these features, although actually seeing them can be difficult
because particles clump together in clouds when there is a lot of mixing or recirculation in the flow.
We found it most useful to interactively probe the flow using a rake with 10 to 40 injectors.
However, users are warned of a problem with interactive streak lines. Streak lines, unlike stream
lines, take time to propagate, so the user should not move the rake of injectors too quickly. It is
best to periodically stop and let the streak lines cycle through all the time steps before moving to
another location.
The benefits of near real-time performance and interactive rake adjustments are demonstrated in
Fig. 8. In this example, the user wanted to visualize the von Karman vortex shedding in the wake
of the tapered cylinder. The general area of interest was first identified by using a wide rake with
injectors. The rake was then shortened and used to probe for a region of flow reversal. Once the
desired location was found, the number of injectors was increased to make the streak lines more
coherent and to highlight the vortex shedding. The particle colour depicts the velocity magnitude.
Fig. 8. Flow features can easily be detected in an unsteady flow by interactively moving the streak
line injectors.
Interactive streak lines were also used to visualize the flow over a delta wing at a high angle of
attack, as shown in Fig. 9. This data set had 100 simulation time steps and a single-zone grid like
the tapered cylinder, but was over nine times larger. The velocity field for each time step was
nearly 14 megabytes, and the entire data set nearly 1.4 gigabytes. Although this data set was
significantly larger than the test case, the performance of the particle tracing was not degraded.
This is because particle advection only requires local cell searches and interpolations. Previous
results [1] have shown that there is a performance penalty on multi-zone grids because global
searches are required when particles move into new grids.
Fig. 9. Both local and global flow structures were visualized by interactively changing the number
of streak lines and the length of the rake in this unsteady flow around a delta wing.
VII. FUTURE DIRECTIONS
The tapered cylinder and delta wing data sets used in this study are small compared to the multi-zone
data sets currently being computed at NASA Ames Research Center. Only a small number of
time steps from these large multi-zone data sets will fit into physical memory on even the largest
workstations. Two approaches are being investigated to allow scientists to visualize more of their
data. The first is by simply subsetting the grid to reduce its size and to localize the region of
interest. The second is by using a large disk array with a high bandwidth. A disk array with 96
disks and 200 gigabytes of storage capacity is currently being evaluated for this application. Data
transfer rates of over 300 megabytes per second have been measured. Potentially, this will enable
us to read solution files with megabytes of data per time step at interactive frame rates.
VIII. CONCLUSIONS
Tetrahedral decomposition has made the point location and velocity interpolation tasks more
efficient in a particle tracing algorithm for time-dependent flows. Streak lines were computed and
rendered at interactive frame rates on an SGI Onyx workstation in a data set with 100 time steps.
Almost linear scaling in performance was measured when one, two, and three processors were
used. Up to 44 streak lines could be computed and visualized at 10 frames per second when the
calculations were performed on three processors. This permitted interactive positioning and
adjustment of streak line injectors which aided vortex detection in an unsteady flow.
APPENDIX I
A. Node Numbering Convention
Different node numbering conventions are used for the "odd" and "even" tetrahedral
decompositions to simplify the look-up tables used for point location in Appendix I.C.
Fig. 10. The (a) odd and (b) even numbering conventions.
B. Tetrahedral Decomposition
The following diagrams illustrate how a hexahedral cell is decomposed into 5 tetrahedra. Note that
all tetrahedra are drawn in the natural coordinate system (refer to Fig. 2 for more details).
Tetrahedron #1 Tetrahedron #2 Tetrahedron #3 Tetrahedron #4 Tetrahedron #5
(a)
Tetrahedron #1 Tetrahedron #2 Tetrahedron #3 Tetrahedron #4 Tetrahedron #5
(b)
Fig. 11. The (a) odd and (b) even tetrahedral decompositions.
C. Point location look-up tables
Two look-up tables are used to locate the tetrahedron that bounds a particle. The first table predicts
which tetrahedron a particle will move into when it exits the current tetrahedron through a particular
face. The exit face is determined from four conditional tests which are based on the natural
coordinates (ξ, η, ζ) (refer to Section IV.C for more details). Note that this table applies to both the
odd and even tetrahedral decompositions.
Conditional test     Tetrahedron #1   Tetrahedron #2   Tetrahedron #3   Tetrahedron #4   Tetrahedron #5
ζ < 0                     #4               #1               #2               #3               #1
Fig. 12. The adjacent tetrahedron look-up table.
A second table is used to update the cell indices (i,j,k) when a particle moves into a different
hexahedral cell. Because each cell is divided into five tetrahedra, particles may cross several
tetrahedra before they actually leave a cell. As with the previous table, the number of the current
tetrahedron and an exit face are needed to identify the appropriate row and column. The entry
"same cell" in this table indicates that the neighbouring tetrahedron is in the same hexahedral cell.
Conditional test     Tetrahedron #1   Tetrahedron #2   Tetrahedron #3   Tetrahedron #4   Tetrahedron #5
1 - ξ - η - ζ < 0     same cell        same cell        same cell        same cell        same cell
(a)
Conditional test     Tetrahedron #1   Tetrahedron #2   Tetrahedron #3   Tetrahedron #4   Tetrahedron #5
1 - ξ - η - ζ < 0     same cell        same cell        same cell        same cell        same cell
(b)
Fig. 13. Cell index look-up tables for the (a) odd and (b) even tetrahedral decompositions
ACKNOWLEDGEMENTS
We are very grateful to Steve Bryson for helping us install the tetrahedral method in the Virtual
Windtunnel. We are also grateful to the following people for allowing us to use their data sets.
Sungho Ko for the oscillating airfoil, Dennis Jespersen and Creon Levit for the tapered cylinder,
and Neal Chaderjian for the delta wing. This work was supported by NASA under contract NAS
2-12961.
--R
"Optimization of Time-Dependent Particle Tracing Using Tetrahedral Decomposition"
"A Software Model for Visualization of Large Unsteady 3-D CFD Results"
"Scientific Visualization of Large Scale Unsteady Fluid Flow"
"Analysis and Visualization of Complex Unsteady Three-Dimensional Flows"
"Numerical Simulation of Streaklines in Unsteady Flows"
"Numerical Simulation of Flow Past a Tapered Cylinder"
"Visualization of Time-Dependent Flow Fields"
"The Virtual Windtunnel: An Environment for the Exploration of Three-Dimensional Unsteady Flows"
"Particle Tracing Algorithms for 3-D Curvilinear Grids"
"Sources of Error in the Graphical Analysis of CFD Results"
Interactive Numerical
"Efficient Solution Methods for Navier-Stokes Equations"
"An Analysis of 3-D Particle Path Integration Algorithms"
"Numerical Algorithms in CFD Post-Processing"
Finite Element Analysis in Fluid Dynamics
The Finite Element Method Displayed
The Finite Element Method in Engineering
Numerical Recipes
"Visualization of 3-D Vector Fields: Variations on a Stream"
--TR
--CTR
Guy Albertelli , Roger A. Crawfis, Efficient subdivision of finite-element datasets into consistent tetrahedra, Proceedings of the 8th conference on Visualization '97, p.213-219, October 18-24, 1997, Phoenix, Arizona, United States
Ralph Bruckschen , Falko Kuester , Bernd Hamann , Kenneth I. Joy, Real-time out-of-core visualization of particle traces, Proceedings of the IEEE 2001 symposium on parallel and large-data visualization and graphics, October 22-23, 2001, San Diego, California
Falko Kuester , Ralph Bruckschen , Bernd Hamann , Kenneth I. Joy, Visualization of particle traces in virtual environments, Proceedings of the ACM symposium on Virtual reality software and technology, November 15-17, 2001, Baniff, Alberta, Canada
Dale A. Lawrence , Christopher D. Lee , Lucy Y. Pao , Roman Y. Novoselov, Shock and vortex visualization using a combined visual/Haptic interface, Proceedings of the conference on Visualization '00, p.131-137, October 2000, Salt Lake City, Utah, United States
Jens Krüger , Peter Kipfer , Polina Kondratieva , Rüdiger Westermann, A Particle System for Interactive Visualization of 3D Flows, IEEE Transactions on Visualization and Computer Graphics, v.11 n.6, p.744-756, November 2005
John G. Hagedorn , Steven G. Satterfield , John T. Kelso , Whitney Austin , Judith E. Terrill , Adele P. Peskin, Correction of Location and Orientation Errors in Electromagnetic Motion Tracking, Presence: Teleoperators and Virtual Environments, v.16 n.4, p.352-366, August 2007
Dale A. Lawrence , Lucy Y. Pao , Christopher D. Lee , Roman Y. Novoselov, Synergistic Visual/Haptic Rendering Modes for Scientific Visualization, IEEE Computer Graphics and Applications, v.24 n.6, p.22-30, November 2004
J. Schroeder , Berk Geveci , Mathieu Malaterre, Compatible Triangulations of Spatial Decompositions, Proceedings of the conference on Visualization '04, p.211-218, October 10-15, 2004
Shyh-Kuang Ueng , Christopher Sikorski , Kwan-Liu Ma, Out-of-Core Streamline Visualization on Large Unstructured Meshes, IEEE Transactions on Visualization and Computer Graphics, v.3 n.4, p.370-380, October 1997 | streak line;particle tracing;tetrahedral decomposition;unsteady flow;scientific visualization;computational fluid dynamics;time-dependent;curvilinear grid |
614347 | A Real-Time Photo-Realistic Visual Flythrough. | AbstractIn this paper we present a comprehensive flythrough system which generates photo-realistic images in true real-time. The high performance is due to an innovative rendering algorithm based on a discrete ray casting approach, accelerated by ray coherence and multiresolution traversal. The terrain as well as the 3D objects are represented by a textured mapped voxel-based model. The system is based on a pure software algorithm and is thus portable. It was first implemented on a workstation and then ported to a general-purpose parallel architecture to achieve real-time performance. | Introduction
The quest for real-time photo-realistic rendering has been one of the major
goals of 3D computer graphics in recent years. Techniques for adding realism
to the image, such as shading, shadow, textures, and transparency have
been developed. The generation of realistic images in real-time is currently
being researched. Flight simulator applications have always led the way
in real-time performance [3]. Special-purpose machines, dedicated to flight
simulation have been developed. These machines generate images with reasonable
realism in real-time, but are expensive (more than a few million US
dollars) [7, 15]. The main contribution of the work presented in this paper is
that the real-time performance was achieved on commercial general-purpose
parallel architecture, as opposed to specialized rendering hardware.
Generating images of arbitrary complex scenes is not within the reach of
current technology. However, the rate of image generation in flight simulation
can achieve real-time because the scenes that are viewed from the sky are not
too complex. Typical views contain terrains which are merely 2.5D, or 3D
objects which are seen as relatively simple, featureless objects. Nevertheless,
simulating photo-realistic aerial views in real-time is by no means easy [10].
The term visual flythrough can be distinguished from flight simulation.
Visual flythrough generates simulated images as seen from a video camera
attached to a flying object. The camera generates photo-realistic images,
although not necessarily in color, since many video cameras have a grey-level
output. It should be emphasized that the generation of true photo-realistic
images is critical for applications where the user needs to recognize the area
or identify objects on the ground (i.e. targeting, mission rehearsal). See the
photo-realistic impression of the images presented in Figure 1.
In a typical flythrough scenario the camera views a very large area, especially
when the camera pitch angle is high (i.e. towards the horizon). In
many applications the camera flies at high speed over long distances and the
area covered during a few seconds of flight is vast. This suggests that it is
not possible to load the entire terrain data onto the main memory. For some
applications even a Gigabyte of RAM is not enough. It is safe to say that
no size will ever suffice, since the application demands will always increase
according to the availability of space. This suggests that flythroughs require
a large secondary storage together with a fast paging mechanism.
An image of an aerial view gains its realistic impression by mapping a
digital photograph onto the terrain model. In order to achieve high quality,
full resolution of both the digital terrain model and the corresponding aerial
photograph needs to be employed. This causes a major load on conventional
graphics hardware based on a geometric pipeline. First, a high resolution
polygonal terrain model contains a vast number of polygons which need to be
processed by the geometric pipeline, while processing tiny polygons loses the
cost-effectiveness of the rasterization hardware. The high resolution photograph
that needs to be texture-mapped during rasterization, creates a further
problem since large photographic maps need to be loaded onto an expensive
cache. For example, the Reality-Engine board cannot hold more than a few
Kilobytes of texture [1], while larger textures need to be paged from the
main memory in real-time [8]. To avoid that, many flight simulators use
repetitive patterns as ground texture, but for some applications where a specific
target area is a vital requirement, a true photograph has to be mapped
on the terrain. These photographs are huge and must be loaded on the fly
to the rasterization hardware, forming a serious bottleneck in the rendering
pipeline.
Instead of using a polygonal model and a geometric pipeline we have
favored a software solution where the model is represented by a voxel-based
representation. The texture-mapping of the photograph over the model is a
preprocessing stage, which is decoupled from the rendering stage [9]. Voxel-based
modeling also lends itself to representing fine grained geometry. The
voxel data is regular, internally represented in an array, and offers easy and
fast access to the data [5]. Each voxel represents and stores some discrete
local attributes of the model. Voxels representing the terrain contain a height
value and a color value, while the voxels representing a 3D model contain
a texture photograph as will be described below in Section 5. A voxel-based
visual flight simulator with real-time performance has been developed
at Hughes Training, Inc. Their flight simulator runs on special purpose
hardware, but yields poor results on a graphics workstation [15].
The visual flythrough that we have developed is hardware independent,
and thus portable. Portability is important, since it enables integration of
the flythrough system with rapid progress in hardware platforms. However,
a software rendering algorithm must be fast enough, around a second or two
per frame running on a sequential machine, so that on a parallel machine
with processors it achieves a rate higher than 20 frames per second. That
Figure 1: Two aerial photo-realistic images generated by the flythrough.
is, of course, assuming little overhead is imposed on the parallel version of
the algorithm.
Although we have employed a parallel machine, the real time performance
is mainly due to an innovative rendering algorithm. The new algorithm generates
a photo-realistic image such as in Figure 1 within two seconds, on a
common workstation. The implementation of a parallel version of the algorithm
on a 32-way multiprocessor architecture has sped up the rendering
to achieve the desired real-time rates. It should be noted that other (hard-
ware independent) ray casting algorithms have reached reasonable speeds
([13, 5, 11]), but just for point sampling. Avoiding aliasing artifacts is quite
involved and time costly. The algorithm presented here resembles the principles
of the projection algorithm in [15]. However, their algorithm is based
on a forward mapping method and was designed to be implemented in hard-
ware. The algorithm presented in the next section is a simple ray casting
Figure 2: The image footprint over the terrain is defined by the viewing parameters.
(backward mapping) accelerated by ray coherence and multiresolution traversal
and highly optimized for hardware independent implementation.
The remainder of this paper is structured as follows: Section 2 describes
the rendering algorithm. Section 3 presents the IBM Power Visualization
System, the current parallel platform of the parallel algorithm, implementation
details concerning the parallelization of the algorithm, and some results. Section 4
describes the post rendering processor that sustains a steady output frame rate.
We discuss the generation of voxel-based objects in Section 5 and conclude
with a brief discussion on our current activity and some final remarks.
2 The Rendering Algorithm
The sequence of images generated by the rendering algorithm is independent.
Each image is defined by the location of the camera in 3D space, the camera
field of view, and its orientation, namely the pitch, roll, yaw angles, and image
resolution. Figure 2 depicts the image footprint defined by the projection
of the image frame over the terrain. The terrain model is represented by
a voxel-based map, generated from a discrete elevation map colored by a
corresponding aerial photo map. The rendering algorithm is based on a
backward mapping approach known as ray casting. The image is generated
by casting a ray-of-sight emanating from the viewpoint through each of the
image pixels towards the model (see Figure 3). The ray traverses above the
Figure 3: Discrete ray casting of a voxel-based terrain.
terrain voxels until it intersects the terrain. The terrain color is sampled
and mapped back to the source pixel. Since the model is discrete there is no
explicit intersection calculation, but a sequential search for a "hit" between
the ray and a voxel. The speed of the ray traversal is crucial for achieving
real-time performance.
The technique we employ is based on a discrete grid traversal, where the
steps along the ray are performed on the projection of the ray on the plane
rather than in 3D. The heights along the ray are incrementally and uniformly
sampled and compared to the height of the terrain below it, until a hit occurs
and the color of the terrain at the hit point is mapped back to the source
pixel. If there is no hit, then the background color of the sky is mapped.
This apparently naive traversal is "flat" ([12]) in contrast to a "hierarchical"
traversal ([5]). In [5] a pyramidal elevation map is used. The multiresolution
pyramid is treated as a hierarchy of bounding boxes through which the ray
traverses in a recursive top-down traversal. The number of steps of the
hierarchical traversal is proportional to the height of the pyramid. When
a binary pyramid is used, the number of steps is logarithmic to the terrain
length, rather than linear to the terrain size as in the case of the flat traversal.
Figure 4: Assuming the terrain has no caves, each ray can emanate from the previous hit point.
Our algorithm is based on the incremental "flat" traversal, but, as will be
shown, some rays are "hierarchically" traversed.
Since the terrain is a height field map we can assume that the terrain
model has no vertical cavities or "overhangs" (i.e., a vertical line has only
one intersection with the terrain). The traversal can be accelerated using ray
coherence [6, 11]. The basic idea is that as long as the camera does not roll,
a ray cast from a pixel vertically adjacent always hits the terrain at a greater
distance from the viewpoint than that of the ray below it. The image pixels
are generated column by column from bottom to top. A ray emanating
above ray i will always traverse a distance not shorter than the distance of
ray i (see Figure 4). Thus, ray i+1 can start its traversal from a distance
equal to the range of the previous hit of ray i. This feature shortens the ray's
traversal considerably.
The total number of steps required to generate one column is equal to
the length of the column footprint, eliminating the factor of the number of
pixels in the column. In other words, a naive generation of one column has a time
complexity of O(ml), where l is the length of the column footprint and m
is the number of pixels in the image column. Using ray coherence the time
complexity is reduced to O(l) only, providing an order of magnitude speed-
up. The rays emanating from the bottom of the column cannot gain from a
previous hit and are thus accelerated by a hierarchical traversal [5].
Using the above vertical ray coherence between consecutive rays, each
terrain voxel is virtually traversed once. The time complexity of the traversal
is proportional to the number of voxels in the image footprint. This is still a
huge number since the image footprint can extend to the horizon. Moreover,
this number is view-dependent and causes instability of the frame generation
rate.
Due to perspective projection, the rays diverge with the distance, caus-
Figure 5: Multiresolution traversal. The voxel map resolution corresponds to the sampling rate.
ing a non-uniform sampling rate of the terrain voxels by the rays. The rays
emanating from the bottom of the image frame hit the terrain at a closer
range than the upper rays. Assuming that the terrain dataset is represented
in a single resolution, then, close voxels tend to be oversampled while far
voxels are undersampled. Using a hierarchy of data resolutions improves the
sampling, since the rays can adaptively traverse and sample voxels of an appropriate
size, proportional to the pixel footprint (see Figure 5). Optimally,
in every step one pixel is generated. In multiresolution traversal the voxel
sampling rate becomes proportional to the number of rays (i.e. pixels), and
the number of steps becomes independent of the viewing direction. That
is, the number of steps over the terrain is in the order of the image space
rather than the object space. Thus, the adaptive hierarchical traversal not
only speeds up the rendering, but also helps to stabilize the frame generation
rate.
In our implementation, we use a pyramid of data resolutions where each
level has half of the resolution of the level below it. Using more resolutions
can be even more successful, in the sense of uniformity of the sampling,
but then it would use more space. Another advantage of a binary pyramid
is the simplicity of alternating between consecutive levels, where the step
sizes are either multiplied or divided by two, taking advantage of the integer
arithmetic of the traversal [5]. Moreover, the pyramid offers a fast first hit for
the first rays which emanate from the bottom row of the pixel array. Those
rays cannot benefit from coherency with the previous rays. For those rays a
top-down traversal of the hierarchy speeds up their first hit [5].
One important issue that must be taken care of in a real-time hierarchical
rendering is creating a soft transition when switching between levels. A sharp
transition is very noticeable and causes an aliasing effect of a wave that
sweeps over the terrain. A simple solution is to interpolate between adjacent
hierarchies [14] where the interpolation weights are defined by the distance
from the viewpoint to the sampled voxels. Since the range gradually changes,
so do the weights, causing a soft transition.
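As a concrete illustration, a minimal sketch of this distance-based blending is given below; the pyramid accessor colorAt, the packed color format and the switch-distance parameters are assumptions for illustration, not taken from the paper.

#include <algorithm>
#include <cmath>
#include <cstdint>

// Hypothetical accessor: color of the terrain at integer coordinates (x, y)
// in a given pyramid level (level 0 = full resolution).
extern uint32_t colorAt(int level, int x, int y);

// Blend two adjacent pyramid levels so that the transition between
// resolutions is spread over a range of distances instead of a sharp cut.
// 'range' is the distance from the viewpoint to the sampled voxel;
// 'switchDist' is the distance at which 'level + 1' alone would be used,
// and 'blendWidth' the width of the transition zone (both assumed values).
uint32_t blendedColor(int level, int x, int y,
                      double range, double switchDist, double blendWidth)
{
    // Weight grows smoothly from 0 to 1 across the transition zone.
    double w = std::clamp((range - (switchDist - blendWidth)) / blendWidth, 0.0, 1.0);

    uint32_t fine   = colorAt(level,     x,     y);
    uint32_t coarse = colorAt(level + 1, x / 2, y / 2);  // half resolution

    // Per-channel linear interpolation between the two levels.
    uint32_t out = 0;
    for (int shift = 0; shift < 32; shift += 8) {
        double a = (fine   >> shift) & 0xFF;
        double b = (coarse >> shift) & 0xFF;
        out |= (uint32_t)std::lround((1.0 - w) * a + w * b) << shift;
    }
    return out;
}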
Synthetic objects such as trees, buildings, and vehicles can be placed
over the terrain. The 3D objects are represented by sticks (a run of voxels)
of three types: uniform sticks which are colored by a single color like a
terrain voxel, textured sticks which contain a vertical sequence of colored
voxels, and complex sticks which are textured sticks, but contain some semi-transparent
or fully transparent voxels (see [15]). Synthetic objects are then
described by a set of adjacent sticks. A ray which hits a textured stick
climbs onto the stick and maps back the stick texture to the screen. When
a semi-transparent value is encountered, a secondary ray continues through
the voxel. The results of the secondary ray are then blended with the values
of the primary ray according to the value of the semi-transparent voxels. In
many cases the transparency value indicates a cavity in the stick; in this case
no blending is performed and the colors of the secondary rays are directly
mapped to the pixels.
Since cavities cause the spawning of secondary rays it is clear that they
slow down the rendering process. One way to reduce cavities is to fill them
up at coarse resolutions, assuming the cavities are small enough and their
contribution to the final image is insignificant. One should note that in
typical scenes only a small fraction of the sticks need to be complex. For
example, when viewing woods only the trees at the boundary need to be
fully represented with their non convex parts, while most of the other trees
are hidden and only their tops can be seen.
A typical scene contains many replicated objects placed at different locations
and orientations. Thus, many sticks are common to many objects. A
complex voxel contains a pointer instead of a color which points into a stick
table. Each stick consists of a header and a sequence of values. The header
contains several attributes like the stick type and the stick length.
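A possible in-memory layout for such a shared stick table is sketched below; the field names and types are illustrative assumptions, not the paper's actual data structure.

#include <cstdint>
#include <vector>

enum class StickType : uint8_t { Uniform, Textured, Complex };

// Header of one stick in the shared stick table.  A complex voxel in the
// terrain map stores an index into this table instead of a color.
struct StickHeader {
    StickType type;     // uniform, textured, or complex (with transparency)
    uint16_t  length;   // number of voxels in the vertical run
    uint32_t  offset;   // offset of the first value in the value array
};

struct StickTable {
    std::vector<StickHeader> headers;   // one entry per distinct stick
    std::vector<uint32_t>    values;    // colors (alpha may encode transparency)

    // Color of the v-th voxel of stick s (no bounds checking for brevity).
    uint32_t value(uint32_t s, uint16_t v) const {
        return values[headers[s].offset + v];
    }
};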
2.1 The Basic Algorithm
In this section we present in detail the basic algorithm that generates a
single column of the image. The algorithm is based on a fast traversal of the
column footprint over the terrain. The voxels along the footprint are tested
for visibility and the colors of the visible ones are sampled and mapped back
to the image column. The pseudo-code is shown in Figure 7.
Let E be the location of the eye and P the location of a column pixel.
The parametric equation of a ray emanating from E and passing through P
is R(t) = E + t(P - E). Denote the ray direction Q = P - E.
Then for a given x, the y and z coordinates along the ray are explicitly given by:

y = E.y + (x - E.x) Q.y / Q.x    (2)     and     z = E.z + (x - E.x) Q.z / Q.x.    (3)

Assuming the ray is X major, i.e. Q.x > Q.y, the sequence of
the voxel coordinates (x, y) along Q is generated by a forward differences
evaluation of the line equation:

x += SIGN(Q.x),   y += Q.y / Q.x,   z += Q.z / Q.x.

Using fixed point arithmetic, the integral coordinate of y, denoted by ⌊y⌋,
is retrieved by a shift operation on the binary representation of y, while the
fraction part is used for linear interpolation at the sampling
point (see below). The hit between the ray Q and the terrain is detected by
comparing height(x, ⌊y⌋), the height of the terrain above (x, ⌊y⌋), against z.
If z > height(x, ⌊y⌋) then x, y and z are incrementally updated, otherwise a
hit has been detected. The terrain color at (x, ⌊y⌋) is sampled and mapped
to the pixel P j
, and the process proceeds to the next ray Q j+1
emanating
from P j+1
Since the terrain is a height field, the ray Q j+1 does not hit the terrain
before it reaches the hit point of Q j. The algorithm continues to evaluate the
sequence of the (x, y) coordinates, and their heights need to be compared to
the height of ray Q j+1 (see Figure 6). The slope (Q.z/Q.x) and the height
of ray Q j+1 above x are evaluated by Equation 3.

Figure 6: Climbing from the hit point of ray Q i to ray Q i+1.

Note that a small error is
introduced since the plane defined by the rays emanating from a column of
the image plane is not perpendicular to the main plane and may be slightly
slanted due to the perspective projection. However, when the field of view is
small, the error is insignificant.
Let E be the location of the eye.
Let P be the location of the bottom pixel of the column.
Let Up be the vector direction of the image columns.
Let Q = P - E be the direction of the ray emanating from P.
Assume Q.x > Q.y and E is above the terrain, and
let n be the distance between x and the end of the terrain.

while (n--) {                          // while not reaching end of terrain
    while (z <= height[x, ⌊y⌋]) {      // test for a hit
        w = frac(y);                   // yield the subvoxel weight
        c = Sample(x, ⌊y⌋, w);         // sample the voxels
        MapBack(c, P);                 // back map the results
        if (column done) return;
        P += Up;                       // move up to next pixel
        Q = P - E;                     // climb to the new ray
        z = E.z + (x - E.x) * Q.z / Q.x;
    }
    // Move on to the next voxel along the ray
    x += SIGN(Q.x);                    // move along the major axis
    y += Q.y / Q.x;                    // incrementally update the Y coordinate
    z += Q.z / Q.x;                    // incrementally update the ray height
}
// the end of the terrain was reached: the sky is seen
color the rest of the pixels with the sky color;

Figure 7: The integer based incremental traversal.
The function Sample(x, ⌊y⌋, w) samples the terrain colors at the integer
coordinates of x. However, the resolution of the fixed point values is higher
than that of the voxel space, and the fraction value, denoted by w, yields the
subvoxel location of the hit point. The exact hit point lies on the vertical grid
line between (x, ⌊y⌋) and (x, ⌊y⌋ + 1) (see Figure 8). Thus, the voxel colors
of (x, ⌊y⌋) and (x, ⌊y⌋ + 1) are linearly interpolated at w. Since the size of the
Figure 8: The samples are always on the vertical grid lines, where w indicates
the subvoxel vertical sample location. The switch to a double step size must
occur at an even step.
pixel footprint is about the size of a voxel, this simple filter is satisfactory.
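A minimal sketch of this sampling step is shown below, assuming a hypothetical grey-level accessor terrainColor for the resolution level currently traversed.

#include <cmath>
#include <cstdint>

// Hypothetical accessor: terrain color stored for voxel (x, y) at the
// resolution level currently traversed (grey level in this sketch).
extern uint8_t terrainColor(int x, int y);

// Sample the terrain on the vertical grid line between (x, floor(y)) and
// (x, floor(y) + 1); w in [0,1) is the fractional part of y and gives the
// subvoxel position of the hit point.
uint8_t Sample(int x, int yFloor, double w)
{
    double c0 = terrainColor(x, yFloor);
    double c1 = terrainColor(x, yFloor + 1);
    return (uint8_t)std::lround((1.0 - w) * c0 + w * c1);
}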
The traversal algorithm has to switch to a lower resolution at some point.
Since the steps are of unit size along the major direction it is rather simple
to double the step size, and respectively, the ray vector and its slopes. To
preserve the property that the steps are always at integer coordinates of the
major axes, the switching to a double step size at the lower resolution must
occur at an even step of the current resolution (see Figure 8).
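The even-step constraint can be expressed with a few integer operations; the sketch below keeps the traversal in the coordinates of the finest grid and simply doubles the step and the increments, which is one possible reading of the scheme, with assumed fixed point types.

#include <cstdint>

// Traversal state kept in the coordinates of the full resolution map.
// y and z are fixed point; dy and dz are the per-step increments
// (Q.y/Q.x and Q.z/Q.x scaled by the current step size).
struct RayState {
    int      step;      // 1 at full resolution, 2, 4, ... at coarser levels
    int32_t  x;         // integer coordinate along the major axis
    int64_t  y, z;      // fixed point minor coordinate and ray height
    int64_t  dy, dz;    // fixed point increments per step
};

// Switch to the next coarser level by doubling the step size.  To keep the
// samples on integer coordinates of the coarser grid, the switch is only
// performed when x is aligned to the doubled step (an "even step").
bool trySwitchToLowerResolution(RayState& r)
{
    if (r.x % (2 * r.step) != 0)
        return false;          // not on an even step yet: retry next step

    r.step *= 2;               // double the step size ...
    r.dy   *= 2;               // ... and the increments along the ray
    r.dz   *= 2;
    return true;
}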
The switch to a lower resolution occurs at the distance where the voxel
footprint in the image is narrower than a pixel. In other words we avoid
undersampling the voxels. Since the vertical field of view is not equal to the
horizontal field of view, we consider only the horizontal one allowing vertical
oversampling or undersampling to occur in some rare cases. In particular,
when the viewing pitch angle is low (i.e., shallow), the pixel footprint tends to
elongate and may cause significant undersampling. Vertically supersampling
the pixels to compensate for elongated footprints is not too costly since it
does not require accessing a larger number of voxels. We have implemented
a variation of supersampling where each pixel has been supersampled by
parallel rays. The relaxed assumption that the rays cast from a single pixel
are parallel enables efficient implementation without any significant loss of
quality.
3 Parallel Implementation
Sequential implementation of the rendering algorithm cannot deliver the desired
real-time rates on contemporary workstations. It is vital to use a powerful
parallel machine, not only to speed up the rendering but also to support
the processing of very large databases. The application requires flying over
thousands of square kilometers, including many 3D objects. Taking into account
the hierarchical data structures, the total amount of data is over 35
Gigabytes (see below). Moreover, the relevant data, i.e. the image footprint,
must be continuously loaded into main memory. Thus, the machine needs
to have very large first and secondary memories, and high speed channels
between them. All these requirements need the support of a machine with
high speed and very large storage capacity, with large bandwidth busses.
In addition, a postprocessor is used to further accelerate the image generation
rate and to enhance the image quality (described below).
A block diagram of the system is illustrated in Figure 9. The IBM Power
Visualization System (PVS) is the parallel machine described below. It is
controlled by an IBM RS/6000 Support Processor which also serves as a connection
to the external world. It reads the commands from the user's control
stick and sends control command from the PVS to a Post Rendering Processor
(PRP) (see below) through an Ethernet LAN. The images generated by
the PVS are sent via an HIPPI (100MB/Sec) channel to the PRP and are
displayed on a standard NTSC monitor.
3.1 The IBM Power Visualization System
The IBM Power Visualization System (PVS) was designed to provide computational
power, high-speed memory and I/O to visualize very large amounts
of complex data. The PVS is a shared memory architecture consisting of up
to 32 parallel processing units, and up to 2.5GB of internal local and global
memory.
The architecture consists of up to eight processor cards. Each processor
card consists of four processor elements, each composed of an Intel i860XR
or i860XP microprocessor operating at 40 or 45 MHz.
Processor storage consists of 16 MBytes of local memory per processor
and global memory which can be increased to 2048 MBytes. The global
memory consists of up to four memory cards. Each card is designed to
Figure 9: A block diagram of the system.
provide a data bandwidth of 640MB/sec or 720MB/sec. This is accomplished
by partitioning the memory into four interleaved memory banks, each of
which can perform memory reads and writes, thus reducing the latency and
improving throughput. In addition, there is interleaving between cards if
there are multiple memory cards in the system.
An SCSI interface card with four Fast/Wide (peak of 20 MB/Sec) controllers
is used to connect to the disk array. Using an SCSI disk reduces the
system price and promises upgradability. The PVS stripes the data across all
the controllers, giving a throughput of more than 70MB/Sec. Thus, it can
contain the database and can load the memory fast enough.
The PVS also provides means for producing and outputting the frames in
real-time. A video controller which is attached via an HIPPI channel to the
Server, includes two logically distinct frame buffers with a total capacity of
up to 32MB. The first 8-bit buffer is used for workstation graphics and text
from an X-Windows system. The other is a 24-bit/pixel double-buffered full
color RGB image buffer at HDTV resolutions and above.
3.2 Implementation Details
The rendering task is partitioned among the processors. One of them,
selected arbitrarily, operates as the master and the rest are the slaves. The
master processor, among its many tasks, sets the viewing parameters of the
next frame, including the new positioning of the camera and its orientation,
according to the trajectories of the flight. The generated image is treated
as a pool of columns, and each slave processor renders one column of pixels
as an atomic task. As soon as a slave terminates rendering one column, it
picks a new column from the pool. Access to the pool is monitored by a
semaphore operation provided by the PVS library. The semaphore forces
exclusive access to the pool, so that only one processor at a time can pick
a column. Moreover, as soon as the last columns of the frames have been
picked and generated, the free processors start to generate the first columns
of the next frame. Using this strategy the processors are kept busy with a
perfect load balancing.
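A sketch of the column pool is given below; it replaces the PVS semaphore with a standard C++ atomic counter and omits the master's per-frame viewing setup, so it only illustrates the load-balancing idea.

#include <atomic>
#include <functional>
#include <thread>
#include <vector>

// Hypothetical per-column renderer (the ray casting loop of Section 2.1).
extern void renderColumn(int frame, int column);

// Each slave repeatedly picks the next free column of the current frame;
// when a frame is exhausted it immediately starts on the next one, so the
// processors are never idle at frame boundaries.
void slaveLoop(std::atomic<long>& nextTask, int columnsPerFrame, long totalTasks)
{
    for (;;) {
        long task = nextTask.fetch_add(1);      // exclusive pick from the pool
        if (task >= totalTasks) break;
        renderColumn((int)(task / columnsPerFrame),   // frame index
                     (int)(task % columnsPerFrame));  // column index
    }
}

void renderFrames(int numSlaves, int columnsPerFrame, int numFrames)
{
    std::atomic<long> nextTask{0};
    std::vector<std::thread> slaves;
    for (int i = 0; i < numSlaves; ++i)
        slaves.emplace_back(slaveLoop, std::ref(nextTask),
                            columnsPerFrame, (long)columnsPerFrame * numFrames);
    for (auto& t : slaves) t.join();
}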
Although the PVS contains as much as two Gigabytes of RAM, the
database cannot be loaded entirely into the main memory. The entire database
is stored in the disk array while the relevant sections (i.e., the image footprint)
are loaded dynamically into memory. The terrain database is partitioned into
small square tiles. According to the viewing parameters, the master draws
the rectangular frame of the image footprint on the terrain, makes sure that
the tiles that fall in the frame footprint are already in memory, and loads the
missing tiles from the disk-array. Since the footprint changes incrementally,
only a few tiles need to be loaded at each frame. A large configuration of
the main memory consists of two Gigabytes and can contain more than the
size of one frame footprint; thus, we use an extended footprint. Some of the
tiles that are in the larger footprint would otherwise have been loaded on the
next frame. Thus, the extended footprint saves many critical loadings. The
tiles that are loaded are actually prefetched and their presence is not critical
for correct rendering of the current frame. This mechanism is found to be
very efficient as it can treat fast changes of camera, as much as one entire
field of view per second.
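A sketch of the extended-footprint paging policy follows; tile indexing, the cache container and the asynchronous disk interface are assumptions for illustration only.

#include <set>

struct TileId {
    int i, j;
    bool operator<(const TileId& o) const {
        return i < o.i || (i == o.i && j < o.j);
    }
};

// Hypothetical asynchronous load from the disk array into main memory.
extern void prefetchTileFromDisk(TileId t);

// Tiles currently resident in main memory.
static std::set<TileId> resident;

// Called by the master once per frame with the axis-aligned bounds of the
// image footprint on the terrain, in tile units.  'margin' extends the
// footprint so that tiles likely to be needed by the next frames are
// prefetched before their presence becomes critical.
void updateResidentTiles(int i0, int i1, int j0, int j1, int margin)
{
    for (int i = i0 - margin; i <= i1 + margin; ++i)
        for (int j = j0 - margin; j <= j1 + margin; ++j) {
            TileId t{i, j};
            if (resident.insert(t).second)   // newly needed tile
                prefetchTileFromDisk(t);
        }
    // Eviction of tiles that left the extended footprint is omitted here.
}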
Pitch    32p     16p     8p      4p      2p
14.6     11.0    5.4     2.5     1.1     0.36

Table 1: Frames per second (fps) generated by the PVS as a function of the
number of processors. Each line of the table shows the fps sampled at different
pitch angles.
3.3 Results
Quantitative results are presented in Table 1. The frame generation rate of
the PVS has been measured at three different angles for different numbers
of processors. These rates are further accelerated by the PRP to achieve a
steady frame rate of per second. However, from these numbers
we can learn about the performance of the algorithm. First, a linear speed
up is achieved. The above numbers imply that by doubling the number of
processors the frame generation rate is more than doubled. This is because
one processor is dedicated as the master processor. A second observation is
the dependency between the performance and the pitch angle. As the pitch
angle gets smaller, the frame generation rate decreases. This is because at
small pitch angles the frame footprint extends. However, since we use a
hierarchy, the size of the footprint is bounded. It should be noted that there
is a speed quality tradeoff. By scaling the pixel-to-voxel ratio it is possible
to speed up the frame generation rate. As the voxels used are "scaled",
the footprint sizes (voxelwise) decrease. Of course as the pixel-to-voxel ratio
increases the voxels are oversampled and the image is blurred. However, this
ratio is used as a tool to tune the quality of the image as the frame generation
rate is guaranteed by the PRP.
A typical database consists of a large terrain with tens of target areas.
The global terrain is a 1 meter resolution playground of 55x80 square kilo-
meters, which is 4.5Giga voxels. Each voxel is four bytes, thus the size of the
global terrain is 17.6 Gigabytes. Adding the hierarchy requires a third more
(5.9G), thus 23.5G bytes in total. Each target area consists of three levels of
detail: 2.5x2.5 square kilometers of 0.5 meter resolution, 1.25 by 1.25 square
kilometers of 0.25 meter resolution, and 625 by 625 square meters of 12.5
centimeters for the highest resolution. A single target area database size is
bytes. No hierarchy is needed because the coarser levels are given in
the global terrain. Forty target areas, for example, require over 12G
bytes. In total 35G bytes are needed for the terrain data. The 3D objects
consume more space. A typical object requires about 1.5M bytes. Here we
should mention that if true colors were needed, and not only grey levels, the
database would have been almost double the size.
4 The Post Rendering Processor
The images generated by the PVS are asynchronous since their rate is dependent
on the viewing direction. The frames are created at a rate of 10-15Hz.
From these images an NTSC video signal should be produced. The image
fields, that is the even/odd NTSC rows, have to be transmitted at a rate of
(interlaced). If the fields contain only the last frame generated by the
PVS, the human eye would detect jumps every time the frame is changed.
To achieve a smooth sequence of images that do not irritate the eye, it is
necessary to generate the frames at a synchronous rate. The idea is to simulate
small changes in the camera position as 2D transformations applied to
the last frame available. However, unlike the interpolation method [2], here
the image needs to be extrapolated. The image is digitally warped on the
fly with respect to the flying trajectories. The warping is done using the
MaxVideo machine, which serves as the Post Rendering
Processor and is also used for some other 2D functions, such as automatic
gain control (AGC), filtering, scaling and rolling the image. It should be emphasized
that interpolating between available frames is not possible since it
would cause a small but critical latency which is not acceptable in real-time
systems, where real-time feedback is vital. The extrapolated images may
have minor differences from the next real frame. However, since the flying
trajectories are known and are relatively smooth, the transition between the
extrapolated frame to the real frame is smooth. Since the warping function
might map back a point outside the source frame, the real frames are
slightly larger and include margins. These margins are relatively small since
flying trajectories are smooth, recalling that the real images are created at a
rate of 10-15Hz.
Given an image A generated by some camera position, the goal is to
warp the image so that it approximates the image B that would have been
generated by a new camera position. Let us define f as the function that
maps B back to A, such that if p is a point in the 3D space that is seen from
pixel x in B and from pixel x' in A, then

f(x) = x'.

Once f is known, the pixel color at x is determined by bilinear interpolation at x'.
A perspective warp function would be best; however, the MaxVideo supports
a second degree polynomial warp. Thus, f is composed of two functions
f x and f y , which are two second degree polynomials:

f x (x, y) = a1 x^2 + a2 y^2 + a3 xy + a4 x + a5 y + a6   and
f y (x, y) = b1 x^2 + b2 y^2 + b3 xy + b4 x + b5 y + b6.
To determine the above 12 coefficients, a set of 2n ≥ 12 equations is
explicitly defined by n control points. The system of 2n equations is solved
using a least squares method. The 2n equations are defined by calculating
the position of n points in the 3D world coordinates for camera positions A
and B, and projecting them back to the image space. We used nine points
evenly distributed in the image plane. During rendering the 3D coordinates
of the terrain points seen from those nine fixed locations are registered.
Denote the vector of unknown coefficients by C j (j standing for x or y).
The system that we need to solve is F C x = X and F C y = Y, where each row
of the n by 6 matrix F holds the monomials of one control point and X, Y hold
its known coordinates in image A. These are two sets of n equations for six
variables. Assuming n is larger than six, the least squares solution is
C j = (F t F)^-1 F t X (and similarly with Y).
Note also that since the roll rotation is a simple 2D transformation, it
can be implemented directly using the MaxVideo warper.
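A sketch of the fit for one of the two polynomials is given below; it forms and solves the 6x6 normal equations by Gauss-Jordan elimination, with a hypothetical coefficient ordering (the MaxVideo-specific coefficient layout is not reproduced).

#include <array>
#include <cmath>
#include <utility>

// Fit f(x, y) = c0*x^2 + c1*y^2 + c2*x*y + c3*x + c4*y + c5 to n >= 6
// control points (x[k], y[k]) -> t[k] in the least squares sense, by
// forming and solving the 6x6 normal equations F^T F c = F^T t.
std::array<double, 6> fitWarpPolynomial(int n, const double* x,
                                        const double* y, const double* t)
{
    double A[6][7] = {};                       // augmented normal equations
    for (int k = 0; k < n; ++k) {
        double m[6] = { x[k]*x[k], y[k]*y[k], x[k]*y[k], x[k], y[k], 1.0 };
        for (int r = 0; r < 6; ++r) {
            for (int c = 0; c < 6; ++c) A[r][c] += m[r] * m[c];
            A[r][6] += m[r] * t[k];
        }
    }
    for (int p = 0; p < 6; ++p) {              // Gauss-Jordan elimination
        int piv = p;
        for (int r = p + 1; r < 6; ++r)
            if (std::fabs(A[r][p]) > std::fabs(A[piv][p])) piv = r;
        for (int c = 0; c < 7; ++c) std::swap(A[p][c], A[piv][c]);
        for (int r = 0; r < 6; ++r) {
            if (r == p) continue;
            double f = A[r][p] / A[p][p];
            for (int c = p; c < 7; ++c) A[r][c] -= f * A[p][c];
        }
    }
    std::array<double, 6> coeff;
    for (int r = 0; r < 6; ++r) coeff[r] = A[r][6] / A[r][r];
    return coeff;
}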
5 Modeling Voxel-Based Objects
The process known as voxelization converts a continuous geometry into a discrete
representation [4]. Many existing models are represented by a polygon
mesh that approximates the real object. However, for a photo-realistic application
photo mapping [9] is essential (see Figure 10). This requires warping
the photograph of the object so that it matches the 3D model, and then applying
it as a texture map to the voxels. Alternatively, a sculpting technique
Figure 10: Voxel-based objects: houses, trees and a tank.
can be employed. Given a set of images of an object from known directions,
one can craft the shape of the model by peeling away the background voxels
around the projected images. We start from a solid box of "black" voxels.
Then, given an image, rays are cast from the background pixels back into the
voxels, "clearing" the voxels encountered into background color. Repeating
this process from many images which view the model from different direc-
tions, leaves the non-background voxels with the shape of the model. This
process of reconstruction from projection yields the texture mapping inherently
by projecting the non-background pixels back towards the voxels by
means of ray casting.
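A sketch of one carving pass is shown below; it uses the equivalent voxel-projection formulation instead of casting rays from background pixels, and the calibrated projection function and background mask are assumptions.

#include <vector>

struct Volume {
    int nx, ny, nz;
    std::vector<unsigned char> v;              // 0 = cleared, otherwise solid
    unsigned char& at(int x, int y, int z) { return v[(z*ny + y)*nx + x]; }
};

// Hypothetical calibrated projection of voxel (x, y, z) into the image;
// returns false when the voxel projects outside the photograph.
extern bool projectToImage(int x, int y, int z, int& px, int& py);

// One carving pass: every voxel whose projection falls on a background
// pixel of this view is cleared.  Repeating this for several views leaves
// the visual hull of the object; a final pass assigns the non-background
// pixel colors to the surviving voxels (photo mapping).
void carve(Volume& vol, const std::vector<unsigned char>& isBackground,
           int imgW, int imgH)
{
    for (int z = 0; z < vol.nz; ++z)
        for (int y = 0; y < vol.ny; ++y)
            for (int x = 0; x < vol.nx; ++x) {
                int px, py;
                if (!projectToImage(x, y, z, px, py)) continue;
                if (px < 0 || px >= imgW || py < 0 || py >= imgH) continue;
                if (isBackground[py * imgW + px])
                    vol.at(x, y, z) = 0;       // peel away background voxels
            }
}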
A simplified implementation of the above sculpting technique has been
employed. We use only three photographs of a toy object. For example,
the three photographs of a toy Scud are shown in Figure 11(a). These three
photographs are scaled down to the voxel space resolution as can be seen in
Figure
11(b). At this stage the object is separated from the background pixels.
If this is not achieved by color thresholding, a contour is drawn manually
around the object. The result of the sculpting process and the photomapping
from these images is a 3D voxel-based textured object which can be rendered
from arbitrary viewing direction. The images shown in Figures 12 and 14 are
rendered very close to the object in order to observe the fine details. Note
Figure 11: A toy Scud. (a) Three photographs (side, front, top). (b) The images
after scaling to the voxel space resolution.
Figure 12: The voxelized Scud from three different viewing directions.
Figure 13: Three photographs of a T62 tank.
Figure 14: The voxelized T62 from three different viewing directions.
that the resolution of the object is higher than that of the terrain. However, these
objects are to be seen from a distance as shown in the previous images.
6 Current Porting Activity
The development project was started in 1992 while the PVS was state-of-
the-art, but since then the processing power of a single processor has grown
by a factor of 10 compared to the i860.
Although the performance achieved on the PVS is satisfactory, it is clear
that a faster platform will allow us to deal better with higher resolutions, and
more and more objects of richer detail. The portability of the application
permits the adoption of a new parallel shared memory architecture according
to the behavior of the commercial market.
Using a distributed memory machine was ruled out since the application
was designed for shared memory architecture. The only company that manufactures
shared memory architecture in the same price range as the PVS
is Silicon Graphics Inc. (SGI). SGI's machines have a similar architecture
to the PVS with the exception that SGI uses a large cache (4 MBytes) in
contrast to the 16MBytes of the PVS's local memory.
SGI offers the Challenge with a maximum of 36 R4400/250Mhz CPUs
and the Power Challenge with R8000 CPUs. The
Power Challenge was designed for floating point applications and each CPU
is more than twice as fast in such applications. In integer applications the
R4400 and R8000 have the same performance, giving the Challenge double
the performance of the Power Challenge. Both machines can store up to 68.8
GBytes internally and up to 6.3 TBytes externally.
The primary results on the SGI Challenge indicate a speed up of about
4.5 times faster than the PVS, while the scalability remains linear. This is
achieved with only minor changes in the code used for the PVS, mainly to
compensate for the absence of local memory.
7 Final Remarks
We have presented a discrete ray casting algorithm accelerated by ray coherence
and multiresolution traversal. The time complexity of the algorithm is
proportional to the number of image pixels, which can be regarded as con-
stant. The combination of the efficient rendering algorithm and the powerful
parallel machine results in a real-time photo-realistic visual flythrough. The
parallel rendering task partitions the image space among the PVS processor
elements, putting the load at the scene space stored in the PVS shared
memory. Due to data prefetching and the wide bandwidth of the busses, linear
speed-up has been observed, as well as hardly any read or write contention in
the shared memory. We have achieved perfect load balancing by overlapping
between frames.
It should be noted that the sequential version of the rendering algorithm
runs well under two seconds on an SGI workstation for a terrain size that can
fit into main memory. It is expected that in the future, with the progress of
memory bandwidth and CPU speed, visual flythrough will be able to run in
real-time on advanced sequential workstations.
Acknowledgments
This work was developed at Tiltan System Engineering in collaboration with
IBM Israel. Our thanks to Meir Nissim-Nir who built the terrain database,
and to Sylvia Kohn who built the objects and developed some new ideas
during the process. We thank all the stuff at IBM who helped us along in
many different ways.
--R
Reality Engine graphics.
View interpolation for image synthesis.
Photorealistic terrain imaging and flight simulation.
3D scan-conversion algorithms for linear and quadratic objects
Shaded display of digital maps.
Evans and Sutherland Computer Corporation.
Hierarchical data structures for real-time three dimensional visual simulation
visualization system.
An efficient ray tracing method for terrain rendering.
Grid tracing: Fast ray tracing for height fields.
Height distributional distance transform methods for height field ray tracing.
Pyramidal parametrics.
A voxel-based
--TR
--CTR
David Cline , Parris K. Egbert, Terrain Decimation through Quadtree Morphing, IEEE Transactions on Visualization and Computer Graphics, v.7 n.1, p.62-69, January 2001
Dan Gordon, The Floating Column Algorithm for Shaded, Parallel Display of Function Surfaces without Patches, IEEE Transactions on Visualization and Computer Graphics, v.8 n.1, p.76-91, January 2002
Huamin Qu , Ming Wan , Jiafa Qin , Arie Kaufman, Image based rendering with stable frame rates, Proceedings of the conference on Visualization '00, p.251-258, October 2000, Salt Lake City, Utah, United States
Omer Shibolet , Daniel Cohen-Or, Coloring voxel-based objects for virtual endoscopy, Proceedings of the 1998 IEEE symposium on Volume visualization, p.15-22, October 19-20, 1998, Research Triangle Park, North Carolina, United States
Christian Henning , Peter Stephenson, Accelerating the ray tracing of height fields, Proceedings of the 2nd international conference on Computer graphics and interactive techniques in Australasia and South East Asia, June 15-18, 2004, Singapore
Boris Rabinovich , Craig Gotsman, Visualization of large terrains in resource-limited computing environments, Proceedings of the 8th conference on Visualization '97, p.95-102, October 18-24, 1997, Phoenix, Arizona, United States
Arie Kadosh , Daniel Cohen-Or , Roni Yagel, Tricubic Interpolation of Discrete Surfaces for Binary Volumes, IEEE Transactions on Visualization and Computer Graphics, v.9 n.4, p.580-586, October
Baoquan Chen , J. Edward Swan, II , Eddy Kuo , Arie Kaufman, LOD-sprite technique for accelerated terrain rendering, Proceedings of the conference on Visualization '99: celebrating ten years, p.291-298, October 1999, San Francisco, California, United States
Ming Wan , Nan Zhang , Huamin Qu , Arie E. Kaufman, Interactive Stereoscopic Rendering of Volumetric Environments, IEEE Transactions on Visualization and Computer Graphics, v.10 n.1, p.15-28, January 2004
Joachim Pouderoux , Jean-Eudes Marvie, Adaptive streaming and rendering of large terrains using strip masks, Proceedings of the 3rd international conference on Computer graphics and interactive techniques in Australasia and South East Asia, November 29-December 02, 2005, Dunedin, New Zealand
David Cline , Parris K. Egbert, Interactive display of very large textures, Proceedings of the conference on Visualization '98, p.343-350, October 18-23, 1998, Research Triangle Park, North Carolina, United States
Brandon Lloyd , Parris Egbert, Horizon occlusion culling for real-time rendering of hierarchical terrains, Proceedings of the conference on Visualization '02, October 27-November 01, 2002, Boston, Massachusetts
Reynald Dumont , Fabio Pellacini , James A. Ferwerda, Perceptually-driven decision theory for interactive realistic rendering, ACM Transactions on Graphics (TOG), v.22 n.2, p.152-181, April | parallel rendering;terrain visualization;flight simulator;visual simulations;voxel-based modeling;ray casting |
614361 | Animation of Deformable Models Using Implicit Surfaces. | AbstractThis paper presents a general approach for designing and animating complex deformable models with implicit surfaces. Implicit surfaces are introduced as an extra layer coating any kind of structure that moves and deforms over time. Offering a compact definition of a smooth surface around an object, they provide an efficient collision detection mechanism. The implicit layer deforms in order to generate exact contact surfaces between colliding bodies. A simple physically based model approximating elastic behavior is then used for computing collision response. The implicit formulation also eases the control of the object's volume with a new method based on local controllers.We present two different applications that illustrate the benefits of these techniques. First, the animation of simple characters made of articulated skeletons coated with implicit flesh exploits the compactness and enhanced control of the model. The second builds on the specific properties of implicit surfaces for modeling soft inelastic substances capable of separation and fusion that maintain a constant volume when animated. | Introduction
In traditional animation systems based on key-framing,
specifying the motion and the successive shapes of objects
interacting with a simulated world, requires a great amount
of specialized knowledge and intuition from the animator.
Models based on simplified physical laws have been proposed
for automating these tasks. They generate motion
and deformation from initial conditions and from a set
of externally applied forces over time, and automatically
detect and respond to collisions. These models are particularly
appropriate for facilitating the animation of deformable
objects. They can either be used alone for the
simulation of inanimate bodies, or be combined with user-controlled
structures as has been done for instance in character
animation [25], [7].
This paper shows that a number of problems that are
difficult to solve with previous models can be easily handled
by combining them with an external deformable layer
based on implicit surfaces. Indeed, the implicit formulation
defines a smooth surface around the object that can
be used to perform efficient collision detection, to enable
exact contact modeling, and to ease volume preservation.
This approach leads to a variety of applications such as
the animation of elastic bodies, the modeling of soft substances
that can separate or melt, or the animation of sim-
iMAGIS is a joint project of CNRS, INRIA, Institut National Polytechnique
de Grenoble and Universite Joseph Fourier.
ple characters made of articulated skeletons coated with an
implicitly specified volume.
A. Related work
Deformable models give a method for computing the alteration
of an object's shape due to a set of externally
applied forces. Deformations may be elastic or inelastic
depending on whether the original shape is restored when
external forces are removed. Most deformable models in
Computer Graphics result from "nodal approaches", in the
sense that they approximate deformations by the displacements
of elementary nodes inside a flexible body. Some of
them derive from the elasticity theory. Differential equations
of motion are discretized in space, and then integrated
over time by resolving a matrix equation at each time step.
This scheme has been used for modeling both elastic [37],
[39], [19] and inelastic [36] deformations. However, since
the topology of the network of nodes does not vary over
time, this approach is restricted to the animation of structured
objects. A solution for modeling soft inelastic bodies
that absorb deformations and may separate into pieces or
melt during an animation is a physically-based particle system
[26], [38], [40], [24]. In this case, a set of elementary
masses called "particles" interact by means of forces that
vary with the distance, such as Lennard-Jones forces that
combine short range repulsion with long range attraction.
Motion is computed by independentely solving the equations
of motion for each particle.
Nodal approaches are often compute intensive, since very
small integration steps may be required. Another class of
methods, called "global approaches", has been introduced
to reduce computational costs [33], [45], [2]. The idea is
to perform global shape transformations rather than simulating
deformations that progressively propagate over a
deformable body. However, this leads to a restricted range
of deformations, and can only be applied to the animation
of homogeneous visco-elastic material.
The way a deformable model detects and responds to collisions
with other objects is very important, since it influences
all subsequent motions and deformations. However,
collision between soft objects is a complex phenomenon
that has not been widely studied in physics 1 . Consequently,
1 Elasticity theory [18] studies small oscillations around equilibrium
states, but does not provide any model for collisions. Moreover, collisions
between flexible bodies have finite time duration and consume
some energy in deformations, so the solutions developed for rigid
solids [27], [1] cannot be applied.
most of the solutions used in computer graphics have been
especially designed for this application.
A first issue is the way interactions between deformable
bodies can be detected. Quite surprisingly, the surface that
is displayed to represent the object during an animation
is seldom used for collision detection. For instance, deformable
bodies have been represented by splines surfaces
controlled by mass-nodes located at control points [25],
[17], but no precise collision detection was performed. Soft
inelastic substances animated with a particle system have
been displayed with implicit surfaces [38], [40], but once
again, the implicit surface itself was not used to collision
detection. Besides being inaccurate, detecting collisions
with mass-nodes is expensive [28]. A better solution is
used by Pentland and Williams [32], who exploit the implicit
inside/outside function that defines the surfaces of
their models for precise and efficient collision detection.
They test if the points that sample one object's surface are
inside or outside another one in linear time.
Secondly, a method for computing response to collisions
must be designed. Most of the models used so far 2 compute
response forces from penalty methods [28]. These
methods do not generate any contact surface between interacting
flexible bodies, but use instead the amount of
interpenetration for computing a force that pushes the objects
apart. A quite promising approach [2] extends the
analytical interaction processing used for rigid solids [1] to
a global deformable model [45]. However, contact surfaces
are approximated by discrete sets of contact points which,
as the authors emphasize, is somewhat unsatisfactory.
To conclude the review of related work, most previous
deformable models do not present a convincing way of processing
collision and contacts between objects. Exact contact
surfaces should be generated rather than local interpenetrations
or bouncing before visual contact. Moreover,
these problems may be exacerbated by the fact that soft
collisions can last for a finite time. The combination of
implicit surfaces and deformable models described in this
paper supplies a systematic approach to treating collisions
among deformable objects.
B.
Overview
This paper presents an integrated set of methods that
use implicit surfaces for animating a wide variety of deformable
models. Implicit surfaces will be used as an extra
layer that coats a base structure, deformable or not, with
some smooth elastic flesh. While the base structure controls
the large scale behavior, the implicit layer performs
collision processing and generates local deformations due
to contacts. It also handles constant volume deformations
and topological changes such as separation or fusion. This
allows simulation of behaviors that would be extremely difficult
to treat with other methods.
Section II presents the layered approach we use. Section
III details the collision processing method associated
2 As for instance Terzopoulos et al. models [37], [39], [36], and
Pentland's ``Thing World'' system [32], [35].
with the implicit layer. Section IV introduces volume
preservation to the model. Section V presents two differ-
ent applications of this formalism: the design of simplified
characters made of articulated skeletons coated with elastic
flesh, and the animation of soft inelastic substances capable
of separation and fusion.
II. Building layered models using implicit
surfaces
A. Implicit surfaces generated by skeletons
Implicit isosurfaces such as "distance surfaces" [47], [5]
allow the design of free form shapes through the manipulation
of "skeletons" generating potential fields. Because
they are simple to define and to control, they constitute a
good alternative to traditional implicit surfaces defined by
analytical equations.
An implicit isopotential surface generated by a set of
skeletons s i (i = 1, ..., n) with associated "field functions"
f i is defined, at the isovalue c, by 3 :

f(P) = f 1 (P) + ... + f n (P) = c.    (1)

In this paper, f will be called the "field function", and
the f i will be designated as the "implicit contributions" of
the different skeletons. The implicit surface surrounds a
solid whose points satisfy (f(P) ≥ c), which may have several
disconnected components. Normal vectors are directed
along the field's gradient.
The skeletons s i can be any geometric primitive admitting
a well defined distance function: points, curves, parametric
surfaces, simple volumes, etc. The field contributions
f i are decreasing functions of the distance to the associated
skeleton:

f i (P) = F i (d(P, s i )),

where d(., s i ) is the distance to s i , and F i can be defined for
instance by pieces of polynomials [47] or by more sophisticated
anisotropic functions [21], [3]. Most field functions
associate a restricted scope of influence to each skeleton in
order to provide local control of the surface and to optimize
the computations. In practice, we use piecewise polynomial
field contributions that are parametrized by three parameters
called respectively thickness, stiffness
and radius of influence, as sketched in Figure 1.
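The paper's exact polynomial profiles are not reproduced here; the sketch below shows one possible C1 profile driven by the three parameters of Figure 1, with the interpretation of the thickness as the distance at which the contribution crosses the isovalue being an assumption.

#include <cstddef>

// One possible field contribution F(d) parametrized by a thickness t,
// a stiffness k and a radius of influence R (t < R), for an isovalue iso:
//   F(t) = iso, F'(t) = -k          (the surface lies at distance t,
//                                    with local stiffness k, cf. Fig. 1)
//   F(R) = 0,  F'(R) = 0            (no influence beyond R)
// Inside the thickness the profile is extended linearly.  This is only an
// illustrative profile; the paper uses its own piecewise polynomials.
double fieldContribution(double d, double t, double k, double R, double iso)
{
    if (d >= R) return 0.0;
    if (d <= t) return iso + k * (t - d);          // linear inside
    // Cubic Hermite on [t, R] from (iso, slope -k) to (0, slope 0).
    double h = R - t;
    double s = (d - t) / h;                        // s in (0, 1)
    double h00 = (1 + 2*s) * (1 - s) * (1 - s);    // Hermite basis functions
    double h10 = s * (1 - s) * (1 - s);
    return h00 * iso + h10 * h * (-k);
}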
B. Embedding implicit surfaces into a layered construction
As emphasized in [7], a layered construction is a very
efficient tool for creating complex models for animation.
It often provides the user with parameters that are more
intuitive and easier to use. Implicit surfaces generated by
skeletons are especially well suited to this kind of approach.
In our framework, the user defines a deformable object
by specifying:
3 In the remainder of this paper, upper-case letters are used for
points and vectors, and lower-case letters for scalar values.
Fig. 1. A typical field contribution, with its three parameters t i , k i and R i .
1. An internal physically-based model which will be used
as a base structure during the animation. This model
may be for instance a rigid solid defined by a mass
and an inertia tensor, an articulated structure made of
several such solids, a mass/spring network, a particle
system, or any other model.
2. An implicit layer, that "coats" the base structure.
This layer is built by defining skeletons that generate
the implicit surface in local coordinate systems animated
by the base structure. Skeletons may be points,
line segments, triangles, or any graphic primitives.
During animations, the implicit layer immediately follows
the motion and deformations generated by the base
structure, defining a smooth surface that can be used for
display. The topology of this surface may change over time,
since separation or blending may be produced by the relative
motion of skeletons.
C. Benefits of the approach
As stressed before, the coherence between the representation
of an object and the model used for collision processing
is very important for generating convincing motion. Using
an implicit representation of the surface brings several ben-
efits, such as a precise yet efficient collision detection mech-
anism, and a solution to precise contact modeling between
deformable bodies.
Instead of using a purely geometric definition for the
implicit layer, we use the deformable implicit model first
introduced in [15]. This model, which will be reviewed in
the next section, defines a correspondence between applied
forces and deformations of the implicit surface that approximates
elastic behavior. The latter can therefore be used
for collision detection and response.
D. Animation algorithm
The general scheme for animating the resulting layered
model develops as follows:
1. Animate the base structure by integrating the equations
of motion according to the set of applied forces.
This computes new positions for the skeletons that
generate the implicit layer.
2. Process interactions between objects:
(a) Use the implicit layer for detecting interpenetrations
(b) Model contact by locally deforming the implicit
layer in order to generate exact contact surfaces between
colliding bodies.
(c) Integrate reaction and friction forces along contact
surfaces. Add them to the set of external actions
to be applied to the base structure at the next time
step.
Detailing dynamic equations and integration schemes usable
for the base structure is beyond the scope of this pa-
per. An overview can be found for instance in [42]. The
next section just presents the elastic model we use for the
implicit layer, and details the associated collision detection
and response algorithm.
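A sketch of one step of this scheme is given below; the two functions it calls merely stand for steps 1 and 2 above and are not part of any actual API.

// Hedged sketch of the two-phase animation step described above.
struct Object;   // base structure + skeletons + implicit layer parameters

// Step 1: advance the base structure (rigid body, mass/spring network,
// particle system, ...) under the currently accumulated external forces;
// this also moves the local frames carrying the skeletons.
extern void integrateBaseStructure(Object& o, double dt);

// Step 2: detect interpenetrations with the implicit layers, add the
// deformation terms that model contact, and integrate reaction and
// friction forces over the contact surfaces of the pair (a, b).
extern void processInteraction(Object& a, Object& b);

void animationStep(Object* objects, int n, double dt)
{
    for (int i = 0; i < n; ++i)
        integrateBaseStructure(objects[i], dt);     // skeletons move

    for (int i = 0; i < n; ++i)
        for (int j = i + 1; j < n; ++j)
            processInteraction(objects[i], objects[j]);
    // The forces accumulated in step 2 are applied at the next time step.
}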
III. Processing collisions with the implicit layer
In previous deformable models, when a collision is de-
tected, response forces are approximated first. These forces
are then used to compute subsequent deformation and motion
of the objects, without producing an exact contact
surface when collision endures in time. The implicit layer
defined in this section uses a different approach. When a
collision is detected it first performs precise contact modeling
with the other object with local deformations. Compression
along contact surfaces is then used to compute
collision response.
A. Collision detection
Collision detection is performed between each pair of objects
by first testing interpenetration between axis-aligned
bounding boxes. When boxes intersect, the implicit representations
of the surfaces and a set of sample points on
them are used for a more precise detection: the surface
points of each object that lie within the bounding box of
the other object are evaluated by the implicit function of
the other object. When an interpenetration is detected, the
purely geometric contact modeling process described in the
following section is applied. Issues for maintaining samples
on the implicit surface will be discussed in Section III-D.
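A sketch of the pairwise test is given below, assuming bounding boxes, surface samples and field functions are exposed through the hypothetical interface shown.

#include <vector>

struct Vec3 { double x, y, z; };
struct Box  {
    Vec3 lo, hi;
    bool contains(const Vec3& p) const {
        return p.x >= lo.x && p.x <= hi.x && p.y >= lo.y && p.y <= hi.y &&
               p.z >= lo.z && p.z <= hi.z;
    }
};

struct ImplicitObject {
    Box box;                                 // axis-aligned bounding box
    std::vector<Vec3> samples;               // points sampled on the surface
    double iso;                              // isovalue c
    double field(const Vec3& p) const;       // field function f(P), defined elsewhere
};

static bool boxesOverlap(const Box& a, const Box& b) {
    return a.lo.x <= b.hi.x && b.lo.x <= a.hi.x &&
           a.lo.y <= b.hi.y && b.lo.y <= a.hi.y &&
           a.lo.z <= b.hi.z && b.lo.z <= a.hi.z;
}

// True if some surface sample of one object lies inside the other one,
// i.e. its field value with respect to the other object reaches the isovalue.
bool interpenetrate(const ImplicitObject& a, const ImplicitObject& b)
{
    if (!boxesOverlap(a.box, b.box)) return false;
    for (const Vec3& p : a.samples)
        if (b.box.contains(p) && b.field(p) >= b.iso) return true;
    for (const Vec3& p : b.samples)
        if (a.box.contains(p) && a.field(p) >= a.iso) return true;
    return false;
}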
B. Modeling contact between objects
Once detected, an interpenetration must be avoided by
locally deforming the implicit layer of each object. Both
exact contact surfaces and deformations in propagation regions
that model the transverse propagation of deformation
must be generated (see Figure 2). We accomplish this by
adding new local terms, called "deformation terms", to the
field functions defining the implicit layers of each object.
. A negative field g modeling compression is added in the
interpenetration region in order to generate a contact
surface with the other object.
. A positive field p modeling the transverse propagation
of deformations is added in the propagation region.
B.1 Deformation in the interpenetration region
The deformation field terms g ji and g ij to be added to
the equations of the objects i and j should generate an
exact contact surface. Thus the two equations

f i (P) + g ji (P) = c   and   f j (P) + g ij (P) = c

must have a common solution. A simple and symmetric
solution is to define the compression field terms by:

g ji (P) = c - f j (P)   and   g ij (P) = c - f i (P).

Fig. 2. Modeling contact consists in applying different deformation
fields in the interpenetration region and in the propagation regions
associated with each object (cross-sectional views).
This choice is appropriate since deformation field terms
are negative in the interpenetration region, and locally generate
in this region a contact surface defined by the set of
points such that f i (P) = f j (P).
We can remark at this point that an object can be defined
as rigid if no deformation term is added to its surface
equation. In that case, collision with a deformable implicit
object can easily be modeled. The compression field term
applied to the deformable object i is:
This makes the implicit layer of the deformable body exactly
fit with the rigid object in the whole interpenetration
region.
B.2 Deformation in the propagation areas
Our aim is to optimize the contact modeling process
by directly computing deformed shapes in contact situations
rather than simulating deformations that progressively
propagate over the implicit layer. The bulge generated
in the propagation region (see Figure 2) must be
computed such that there is a smooth junction between
the interpenetration region and the region where the object
remains undeformed.
The user controls s i 's propagation field term p ji (P ) (due
to the collision with s j ) through two additional parameters
in s i 's description:
. A value w i giving the offset distance where deformations
propagate around the interpenetration region
(see
Figure
2). No deformations will be generated outside
this area.
. An "attenuation value" # i giving the ratio between the
maximal value desired for p ji and the current maximal
compression term in the interpenetration area.
We then define the propagation field term p ji to be applied
in the propagation zone of the object s i as:

p ji (P) = a k,a0,w (||P - P 0 ||),

where P 0 is the closest point to s j in s j 's gradient direction
(see Figure 2), a 0 is the maximal propagation
value, equal to α i times the maximal compression
field value, and a k,a0,w (x) is a piecewise polynomial function
as depicted in Figure 3. P 0 is computed in practice
by iteratively performing small steps from P in the opposite
direction of the object j's gradient. Our choice for the
slope k ensures that the shape of the implicit object stays
C 1 at the border of the interpenetration zone. Any piecewise polynomial
a k,a0,w (x) with slope k at the origin, maximum a 0 , and zero value and
derivative for x ≥ w can be used.

Fig. 3. Attenuation function defining the field in propagation areas.
The slope at the origin is k, the maximum is a 0 , and the function
has zero values and derivatives for x ≥ w.
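The paper's own polynomial is not reproduced above; one possible C1 profile with the stated properties (slope k at the origin, maximum a0, zero value and derivative at w) is sketched below, with the abscissa of the maximum fixed at w/2 as an arbitrary choice.

// Cubic Hermite segment; the tangent arguments are already scaled by the
// segment length, and s is the local parameter in [0, 1].
double cubicHermite(double p0, double m0, double p1, double m1, double s)
{
    double s2 = s * s, s3 = s2 * s;
    return (2*s3 - 3*s2 + 1) * p0 + (s3 - 2*s2 + s) * m0
         + (-2*s3 + 3*s2)    * p1 + (s3 - s2)       * m1;
}

// Illustrative attenuation profile a_{k,a0,w}(x): value 0 and slope k at
// x = 0, maximum a0 reached at xm = w/2, zero value and zero derivative at
// x = w.  Built from two C1-matched cubic pieces.
double attenuation(double x, double k, double a0, double w)
{
    if (x <= 0.0 || x >= w) return 0.0;
    double xm = 0.5 * w;                     // abscissa of the maximum
    if (x < xm)   // rise: (0, slope k) -> (a0, slope 0)
        return cubicHermite(0.0, k * xm, a0, 0.0, x / xm);
    else          // fall: (a0, slope 0) -> (0, slope 0)
        return cubicHermite(a0, 0.0, 0.0, 0.0, (x - xm) / (w - xm));
}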
Fig. 4. (left) Contact between two colliding deformable objects. (right)
Cross-sectional view showing the exact contact modeling.
A step-by-step description of this animation is given in the Appendix.
C. Response to collision
Now that correctly deformed shapes are generated between
colliding objects as depicted in Figure 4, the deformation
of the implicit layer must be used to deduce response
forces. To achieve this, we describe in this section a model
designed to approximate elastic behavior, first introduced
in [15].
C.1 Modeling elasticity with implicit surfaces
A deformable model is defined by a given correspondence
between applied forces and deformations. Both linear [39],
[19], [45] and non-linear [37] elasticity have been used in
previous models. Linear elasticity states that stiffness near
a given point P of a solid remains constant during deformations.
The displacement of P from an initial position X 0 (P)
to a final position X(P) is then a linear function of the applied force R(P):

R(P) = k(P) (X(P) - X 0 (P)).

In non-linear models, the stiffness k is not only a function
of the point P , but may also depend on its current location
inside the solid. The force applied during a displacement
of P from X 0 (P) to X(P) is then:

R(P) = integral from X 0 (P) to X(P) of k(Y) dY.    (12)
For generality, the implicit layer we are defining should be
able to exhibit both linear and non-linear behaviors.
Deformations of an implicit surface can be modeled by variations in its field function f . To express non-linear elasticity with this formalism, we let dR(Y ) be an infinitesimal radial force and dY the resulting infinitesimal radial displacement. From equation (12) they must satisfy:
We can then make the following observation: the set of points P satisfying f(P ) = c, where f is the field function, is sufficient to define a surface. This set of points being fixed, the variation of f around the isosurface can then be used to model physical properties. Consequently, we choose to model stiffness with the field's gradient:
This choice simplifies equation (13), yielding:
Let g be the deformation field term associated at equilibrium with the radial force R. If the normal vector N(P ) has remained constant during the deformation (which is then said to be "radial"), the formula giving the correspondence between applied forces and deformations, obtained by integrating equation (15), is:
In practice, we will use a rewritten version of this latter equation:
The correspondence between applied forces and deformations
will only be used to integrate radial response forces
during collisions. It can be noted that this formulation only
approximates elastic behavior since only the radial component
of forces is computed.
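The following sketch illustrates one reading of this force/deformation correspondence along a single radial direction: the local stiffness is taken proportional to the magnitude of the field's slope, and the displacement balancing a given radial force is obtained by numerically integrating dR = k dY. The kernel, the scaling constant K and the step size are illustrative assumptions, not values from the paper.

```python
# Sketch of the gradient-as-stiffness idea (a 1-D radial toy, not the paper's
# exact equations).
def field(d, R_influence=2.0):
    """Example decreasing field kernel (hypothetical choice), finite support."""
    if d >= R_influence:
        return 0.0
    t = d / R_influence
    return (1.0 - t * t) ** 2

def radial_stiffness(d, K=1.0, eps=1e-4):
    """Stiffness ~ K * |df/dd|, estimated by central differences."""
    return K * abs(field(d + eps) - field(d - eps)) / (2.0 * eps)

def displacement_for_force(d0, force, K=1.0, step=1e-3, max_disp=1.0):
    """Integrate dR = k(d) dY numerically to find the radial displacement
    produced by a compressive force of the given magnitude at rest distance d0."""
    acc, y = 0.0, 0.0
    while acc < force and y < max_disp:
        acc += radial_stiffness(d0 - y, K) * step
        y += step
    return y

if __name__ == "__main__":
    print(radial_stiffness(1.0), displacement_for_force(1.0, 0.2))
```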
C.2 Stiffness control
As shown in [15], defining stiffness with the field's gradient has a simple geometric interpretation since field contributions are decreasing functions of the distance to the associated skeleton: the user defines local stiffness as the opposite of the slope of the field function. Both linear and non-linear elastic behaviors are easily designed, as depicted in Figure 5.
Fig. 5. Examples of field functions for the implicit layer. (a) Linear elasticity: stiffness, represented by the opposite of the slope, is constant during deformations. (b) Non-linear elasticity: stiffness increases during compressions.
In practice, the field's gradient also affects the object geometry when contributions from different skeletons blend together (equation (1)). It is difficult to specify the object's shape and its dynamic behavior at the same time by adjusting the different field contributions. We partially overcome this problem by introducing a scaling parameter K relating the effective stiffness and the field function's gradient:
This parameter K enables us to adjust the object stiffness
according to its mass and inertia tensor without modifying
its geometry.
C.3 Radial response forces
We have defined a way to model contact and a correspondence
between deformations of the implicit layer and
radial applied forces. Thus evaluating the resulting normal
reaction force R(P ) at a point P on the contact surface
during collision is straightforward. Equations (6) and (17)
give us a value for the normal response force:
where N i (P ) is the normal vector to the deformed surface
of object i at point P . Since g ji , which models local com-
pression, is negative, R i has the same orientation as the
normal vector, and models the internal force that tends to
restore the initial shape of the object.
An important remark is that our model is consistent with
the action/reaction principle: along contact surfaces, the
deformation terms applied to both objects are equal, while the normal vectors are opposite. Thus, opposite
reaction forces R i are generated.
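A minimal sketch of how such a response force could be assembled at a contact sample point, assuming the (missing) equation amounts to scaling the negated compression term by the surface normal and a stiffness constant K; the exact expression and constants in the paper may differ.

```python
# Hedged reading of the normal response force of Section C.3:
# R_i(P) = -K * g_ji(P) * N_i(P), with g_ji(P) <= 0 the compression field term
# and N_i(P) the unit normal of the deformed surface. K and the sign
# conventions are assumptions made for this sketch.

def normal_response_force(g_ji, normal, K=1.0):
    """Return the reaction force vector at a contact sample point."""
    assert g_ji <= 0.0, "compression term is negative (or zero) in the contact area"
    scale = -K * g_ji                      # non-negative magnitude
    return tuple(scale * n for n in normal)

if __name__ == "__main__":
    print(normal_response_force(-0.3, (0.0, 0.0, 1.0), K=2.0))  # -> (0.0, 0.0, 0.6)
```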
C.4 Friction and damping forces
To model both tangential friction in contact areas and damping due to the progressive compression of the solids, we include a friction coefficient in the description of each object. When a collision occurs, the friction and damping force F i at a point P of the contact surface between two objects i and j is expressed by:
where v i (P ) (respectively v j (P )) is the speed of P , seen as a point on the surface of object i (respectively j).
The sum of radial and frictional forces is transmitted to the base structure of each object, and will be taken into account in subsequent motion. An example of animation using this technique is depicted in Figure 6.
Fig. 6. Flexible clover falling on a rigid staircase.
D. Implementation
D.1 Sampling the isosurface
Sample points on the implicit surface are needed for both
collision detection and numerical integration of response
forces. Points that appear to be inside another object are
moved to the deformed isosurface (for instance by performing a search along the gradient direction), and the value of
the deformation field at each new location directly gives
the intensity of radial contact force along the small surface
area sampled. These forces are added to friction forces and
stored to be integrated by the base structure at the next
time step.
Any method could be used for generating sample points
on the implicit surface. The most widely used are spatial
partitioning techniques [47], [23], [4], [43], [29]. However,
approaches that take advantage of temporal coherence are
more efficient for our application, since sample points do not move much between two consecutive steps of an animation. One method of this kind [44] maintains sample points called "floaters" on the isosurface, and ensures a good sampling distribution by connecting them with repulsive interaction laws. During deformations, more floaters may be automatically generated, or some of them may be removed, according to the local sampling density. The method we use, first introduced in [10], is slightly different. We present
it briefly in the next paragraph since it has the advantage
of also being convenient for volume preservation, as will be
described below.
D.2 An adaptive sampling technique
The central idea is the following: each skeleton s i contributing
to the implicit layer emits a set of sample points in
directions that are fixed in its local coordinate system, and
are well distributed around it, as illustrated in stages 1 to 3 of Figure 7. Those of the points that reach the isosurface without going through an area already sampled by another skeleton are said to be "valid", and will be used as samples on the isosurface at the current time step (see stage 4 in Figure 7).
Fig. 7. Different stages of sampling initialization.
This process can be defined more formally by associating a territory T i to each skeleton s i :
T i is the part of the implicit object where s i 's field contribution is the highest. This is equivalent to splitting the implicit volume into Voronoi regions defined by the skeletons, the "distance" from a point P to a skeleton s i being defined by the field function. Sample points sent from s i stop at T i 's boundary, and are valid if they lie on the isosurface. Sample points then start from their previous position in the skeleton's local coordinate system at each time step to meet the surface again. Thus using temporal coherence increases efficiency. In addition, sample points that were located
between two territories at a given animation step may be
brought to the surface during later deformations. The sampling
of the isosurface automatically adapts to large deformations
and changes in topology.
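The sampling procedure above can be summarized by the following sketch for point skeletons with radially decreasing kernels; the step size, kernels and isovalue are illustrative choices rather than the paper's.

```python
# Sketch of the adaptive sampling strategy of Section D.2 for point skeletons.
import math

def kernel(d, R=2.0):
    return (1.0 - (d / R) ** 2) ** 2 if d < R else 0.0

def total_field(p, skeletons):
    return sum(kernel(math.dist(p, s)) for s in skeletons)

def dominant(p, skeletons, i):
    """True while skeleton i gives the largest contribution at p (its territory)."""
    contrib = [kernel(math.dist(p, s)) for s in skeletons]
    return contrib[i] >= max(contrib)

def sample_skeleton(i, skeletons, directions, iso=0.5, step=0.02, max_d=3.0):
    """March each fixed direction from skeleton i; keep samples that reach the
    isosurface without leaving skeleton i's territory ("valid" samples)."""
    samples = []
    for d in directions:
        t = step
        while t < max_d:
            p = tuple(c + t * u for c, u in zip(skeletons[i], d))
            if not dominant(p, skeletons, i):
                break                            # stopped at the territory boundary
            if total_field(p, skeletons) <= iso:
                samples.append(p)                # valid sample on the isosurface
                break
            t += step
    return samples

if __name__ == "__main__":
    skels = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)]
    dirs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    print(len(sample_skeleton(0, skels, dirs)))
```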
D.3 Interactive visualization
A further benefit of the sampling method above is that
the implicit surface can be displayed, during an interactive
animation process, as a set of polygonal meshes built on
the sample points (valid or not) that belong to the same
skeleton. This provides a legible solid representation of
the surface with no extra cost [10] as depicted in Figure 8.
Note that we do not use the sample points for final high
quality rendering of an animation. Storing the parameters
of the isosurface such as the positions of the skeletons and
the lists of colliding objects offers a much more compact
representation that can then be used for computing direct
ray-tracing of the implicit surfaces [13].
Fig. 8. Left: visualization of sample points as scales on the surface.
Right: visualization of the same sampling but with a piecewise
polygonization.
IV. Controlling volume during animations
The preservation of constant volume of deformable objects
is desirable in animation [22]. Additionally, the user
may desire precise control over volume variation in order to
emphasize certain motion. The problem of volume preservation
has been solved by methods based on Lagrange multipliers
in the specific case of objects discretized into lattices
of fixed topology [33], [34]. We present in this section
the only solution to our knowledge that has been proposed
for controlling the volume of bodies that undergo large deformations
and topological changes such as separation and
fusion [9].
Unwanted volume variations are exacerbated in implicit
surface animation. They are produced by the field blending
process during the relative motion of the skeletons, and
they may be particularly annoying when the object undergoes
separation or fusion. Although the problem has
already been identified [47], [8], previous approaches only
provide partial solutions that ensure volume preservation
between an initial and a final state. But they do not
ensure volume preservation during intermediate deforma-
tions, while drastically restricting the range of field functions
that can be used.
This section presents a general method, applicable to
any field function and isovalue, for controlling the volume
of objects defined by implicit surfaces.
A. Local volume variations
The first problem is the detection of volume variation.
The volume of an implicitly-defined object is given by the triple integral of dx dy dz over the region where f(P ) >= c. This expression cannot be computed
analytically for most field functions. A simple method for
volume approximation consists of discretizing space into
voxels, and expressing the volume as the sum of voxels
that lie inside the object. However, this technique would
not provide a solution to the problem, since we also need
to know near which skeleton the volume is changing.
Suppose the volume has been modified, as in Figure 9
(a), by the relative motion of some skeletons of the implicit
layer. A solution for avoiding the variation consists
of adjusting the strength of the field functions so that the
Fig. 9. (a) Volume variations of an implicit surface generated by point skeletons between two steps of animation. (b) Volume controlled locally in step 2.
volume keeps its initial value. However, these adjustments
should not be done in areas where the object has not been
deformed. As a consequence, volume variations should be
detected and treated locally.
To define local volumes we use the notion of skeleton territory already introduced in Section III-D.2: we define
the local volume V i associated with a skeleton s i as the
volume of its territory T i . The total volume of the implicit
object can then be expressed as the sum of local volumes.
Fig. 10. Particle territories and sample points used for volume approximation.
Computing local volumes is straightforward when the
adaptive sampling method of Section III-D is used. As
shown in Figure 10, a local volume V i is approximated as
the sum of small pyramidal volumes defined around each
sample point sent by s i :
where P i is the set of sample points sent by a skeleton s i , the distance from the skeleton to each sample point gives the height of the corresponding pyramid, and b i only depends on the angular distribution of samples for s i . In practice, the factor b i can be omitted in volume computations, since controlling the value of the remaining sum is sufficient for avoiding volume variations.
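A small sketch of this local volume estimate, assuming each sample point spans an equal solid angle around its skeleton so that its pyramidal contribution grows with the cube of its distance; the constant factor plays the role of b i and can indeed be dropped when only variations matter.

```python
# Sketch of the local volume estimate of Section IV-A: each valid sample point
# contributes roughly solid_angle * r^3 / 3, the volume of a thin cone of
# height r around its sampling direction.
import math

def local_volume(skeleton, samples, solid_angle_per_sample):
    r3 = sum(math.dist(skeleton, p) ** 3 for p in samples)
    return (solid_angle_per_sample / 3.0) * r3

if __name__ == "__main__":
    # 6 axis-aligned samples at distance 1 around the origin: the estimate is
    # ~ 4*pi/3, i.e. the volume of the unit sphere.
    samples = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    print(local_volume((0.0, 0.0, 0.0), samples, 4 * math.pi / 6))
```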
B. Volume control
We control local volume variations by associating a
proportional-derivative controller to each skeleton. This
controller can be seen as a black box that, given the current
local volume V i,t and the value V i,0 to reach or to maintain,
outputs an adequate adjustment of the field function f i of
this particular skeleton.
For our application, the way to modify f i must be chosen carefully since the norm of f i 's gradient gives the object local stiffness (see Section III-C.1). In order to adjust the volume of skeleton territories without modifying the object's physical properties, we combine the original field function with a translation term. At each time step, the field originally defined by the decreasing function of the distance is replaced by its translated version:
Since we need regular shape variations, we control the time derivative of the translation parameter rather than its value. The inputs of the controller are then the normalized volume variation and its time derivative. Its output is:
where the proportional and derivative gains are appropriately chosen parameters. A simple example of volume control is given in Figure 11.
Fig. 11. Preserving volume during a blend between two point-
skeletons. The leftmost picture shows the initial configuration
and the speed vector. (a) shows the blending without any con-
trol, whereas (b) depicts a controlled blending ensuring constant
volume.
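The controller itself can be sketched as a standard proportional-derivative loop; the gain names and values below, as well as the toy volume response used in the example, are assumptions for illustration only.

```python
# Sketch of the proportional-derivative volume controller of Section IV-B.
# Inputs: normalized volume variation and its time derivative.
# Output: the current value of the field translation term.

class VolumeController:
    def __init__(self, target_volume, kp=1.0, kd=0.2):
        self.v0 = target_volume
        self.kp, self.kd = kp, kd
        self.prev_err = 0.0
        self.translation = 0.0            # current field translation term

    def update(self, current_volume, dt):
        err = (current_volume - self.v0) / self.v0      # normalized variation
        derr = (err - self.prev_err) / dt
        self.prev_err = err
        rate = -(self.kp * err + self.kd * derr)        # push the volume back to v0
        self.translation += rate * dt
        return self.translation

if __name__ == "__main__":
    ctrl = VolumeController(target_volume=1.0)
    vol = 1.2                                            # 20% too large
    for _ in range(5):
        tr = ctrl.update(vol, dt=0.04)
        vol = 1.2 + 0.5 * tr      # toy response: lowering the field shrinks the territory
        print(round(vol, 3))
```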
This method used for maintaining a constant volume
during an animation can be extended in order to impose
specific volume variations that may be locally specified by
the user: the target volumes V i,0 simply have to be changed
over time. These capabilities are useful in a broad range of
applications in the field of animation with implicit surfaces.
The next section presents two of them.
V. Applications
A very simple use of the implicit layer consists of combining
it with a rigid internal structure, as was done in [15].
This leads to animations of elastic objects that locally deform
during contacts, but return to their initial shape when
no external force is applied. Snapshots from such an animation
[12] are displayed in Figure 12.
This section presents two further applications of the
model. The first is an animation of simple characters made
of articulated skeletons coated with implicit flesh, which exploits
the compactness and enhanced control offered by a
layered structure. The second builds on the specific properties
of implicit surfaces for modeling soft inelastic substances
capable of separation and fusion, and that preserve
their volume during the animation.
A. Modeling simplified characters
Specifying complex motion is greatly simplified when the
animation system is able to abstract basic shapes from the
representation of an object. Motion can then be refined
with these shapes, and the animator only switches to the
Fig. 12. Four frames of the animation Simply Implicit.
detailed representation when necessary. This is particularly
true in character animation, where animators often spend
a lot of time specifying motion and deformations of an articulated
"skeleton" representing the character. Much less
time is spent on animating the skin deformations, which
may be generated automatically from the skeleton motion.
Layered models are particularly well adapted to this con-
text. Various approaches, either purely geometric [6], [11]
or physically-based [7], [17], [31], [41] have already been
proposed for the automatic animation of the skin from the
motion of the underlying skeleton.
This section explains how to adapt the implicit layer
model we have developed for this application. One advantage of using an implicit representation of the flesh and skin is the compactness it offers: only skeletons and field
functions need to be specified since the field models geometric
and elastic properties. Another advantage is the ability
to automatically detect collisions and model contact with
other objects. This is particularly useful since characters
often need to interact with the simulated world. Moreover,
the volume control method we have developed can be used
for creating more lively animations, by animating muscles
for instance.
A.1 Structure used
In the terminology we have developed in Section II, a
character will be represented by:
. Base structure: an articulated structure composed
of a set of "links" connected by hinges. This structure
may be animated with key frames, inverse kinematics,
through the use of physically-based animation, or by
any other technique.
. Implicit layer: each skeleton contributing to the implicit
surface is defined in the local coordinate systems
of one of the links.
The animation of this model is straightforward in the general
framework we have defined: the general algorithm described
in Section II-D is used. However, two problems due
to the relative motion of skeletons inside the implicit layer
have to be discussed:
1. Unwanted blending effects must be avoided during deformations.
2. Intercollisions should be detected between the different parts of a character.
The next two paragraphs explain how we deal with these
problems.
A.2 Avoiding unwanted blending e#ects
The unwanted blending case is a difficult problem that
has been known for a long time [46]. When we implicitly
model characters for instance, we want their arms to blend
with their shoulders, but not with another part of the body,
as illustrated by Figure 13.
Fig. 13. The unwanted blending problem
A solution, first suggested in [46] and further developed
in [30], consists in defining a neighboring graph between
the different skeletons, and stating that a skeleton's field only blends with contributions from neighboring skeletons. More precisely, the field function f is replaced by the following procedure for computing the field value at a point P :
1. Compute all the field contributions at point P ,
2. Select the predominant contribution from those of groups of skeletons that blend together,
3. Return this value without summing the other field contributions.
This algorithm avoids surface discontinuity during the controlled blending process, as explained in [20]. However, the method cannot guarantee C 1 continuity everywhere.
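One way to read this procedure in code is sketched below: field contributions are summed only inside groups of skeletons allowed to blend, and the predominant group value is returned. The kernel and the grouping are illustrative; the blending graph of [46], [30] may be encoded differently.

```python
# Sketch of restricted blending (Section A.2): sum contributions within each
# blending group, return the predominant group value only.
import math

def kernel(d, R=2.0):
    return (1.0 - (d / R) ** 2) ** 2 if d < R else 0.0

def restricted_field(p, skeletons, blend_groups):
    """skeletons: list of points; blend_groups: index sets allowed to blend together."""
    contrib = [kernel(math.dist(p, s)) for s in skeletons]
    group_values = [sum(contrib[i] for i in group) for group in blend_groups]
    return max(group_values)            # keep only the predominant blended value

if __name__ == "__main__":
    skels = [(0, 0, 0), (1, 0, 0), (3, 0, 0)]   # e.g. arm, shoulder, thigh
    groups = [{0, 1}, {2}]                      # the arm blends with the shoulder only
    print(round(restricted_field((0.5, 0.0, 0.0), skels, groups), 3))
```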
A.3 Processing intercollisions
Instead of processing collisions between pairs of objects,
we use the algorithm described in Section III-B for processing
collisions between pairs of skeleton territories that do
not blend. The sampling method described in Section III-D
provides us with a set of sample points, so we can easily
compute local bounding boxes from the position of valid
sample points and associate them with each skeleton ter-
ritory. These boxes (enlarged by the maximal distance
between sample points) cover the isosurface, allowing for
precise collision detection even when the surface separates
into several components.
When the underlying links are animated with a
physically-based approach, contact forces computed between
skeleton territories are transmitted to the reference
link of the skeletons, to be integrated at the next time step.
A.4 Results
An example of animation performed with this model is
depicted in Figure 14. A simulation method based on displacement
constraints [14] is used for animating the base
structure of the characters. We can observe several inter-
collisions as the character falls.
Fig. 14. Collisions with the ground and intercollisions of a simple
articulated implicit object falling on its side.
B. Animating soft substances
This section presents a quite di#erent application of the
techniques we have described. The soft substance model
we develop here, first introduced in [9], is particularly interesting
since it benefits from the specific capability of
implicit surfaces to model separation and fusion. The constant
volume deformations generated by our model are very important in this case, since otherwise a significant increase of volume would be produced during fusion.
B.1 Structure used
As emphasized in the introduction, a simple and unified
way of modeling a large variety of behaviors, including inelasticity
and fractures, is to use physically-based particle
systems. The main drawback of these systems, when used
alone, is the lack of a method for defining a smooth surface
for the objects. This is not a problem when several thousands
of particles are used for accurately simulating fluids
for instance. For animation purposes, the use of far fewer
particles is sufficient for producing convincing deformations.
A surface, defined around the particles, should be used for
visualization and for processing contact with other objects.
The general framework we have defined provides a solution
to this problem since the implicit layer seems well adapted
for coating the particles. A piece of soft substance will then
be composed of:
. Base structure: a particle system, made of a few tens
of particles. Interactions between particles are modeled
by attraction/repulsion forces such as Lennard-Jones
forces, combined with friction forces that depend
on the local density of particles. In our implementation, the interaction force between two particles is a Lennard-Jones-like expression combining repulsion and attraction terms in 1/r 8 and 1/r 4 , scaled by a stiffness parameter, and the friction force is proportional to the relative speed of the two particles, weighted by a decreasing continuous function of their distance with finite support (see the sketch after this list).
. Implicit layer: an implicit surface defined by point
skeletons located on each particle. We use field contributions
with relatively large thickness and radius of
influence in order to give a smooth aspect to the simulated
material even if only a few particles are used. Local
volume controllers are associated with each skele-
ton, in order to prevent volume variations.
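The sketch announced in the first item above shows one plausible form of these particle forces, since the published expressions are not reproduced here: a Lennard-Jones-like law with 1/r 8 and 1/r 4 terms and a friction force weighted by a decreasing function with finite support. All constants are illustrative.

```python
# Hedged sketch of the base-structure forces of Section B.1.
import math

def interaction_force(pi, pj, k=1.0, r0=1.0):
    """Force applied to particle i by particle j (repulsive at short range)."""
    d = [a - b for a, b in zip(pi, pj)]
    r = math.sqrt(sum(c * c for c in d)) or 1e-9
    mag = k * ((r0 ** 8) / r ** 8 - (r0 ** 4) / r ** 4)   # >0 repels, <0 attracts
    return [mag * c / r for c in d]

def friction_force(vi, vj, r, nu=0.1, R=2.0):
    """Damping against relative motion, vanishing beyond the support radius R."""
    w = (1.0 - r / R) ** 2 if r < R else 0.0              # decreasing, finite support
    return [-nu * w * (a - b) for a, b in zip(vi, vj)]

if __name__ == "__main__":
    print(interaction_force((0.9, 0, 0), (0, 0, 0)))
    print(friction_force((1, 0, 0), (0, 0, 0), r=0.9))
```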
The general animation algorithm we have developed also
applies to this case. The next paragraphs explain how we
handle separation and fusion of the substance.
B.2 Modeling separation
When the particle system moves and deforms, a piece
of substance may separate into several components, due to
the relative motion of point skeletons defining the surface.
However, if these disconnected chunks come back close to
each other, they will blend as in Figure 11 rather than
collide, since they are considered to be parts of the same object.
This artifact is related to the unwanted blending problem
we have referred to in the previous section. However,
the problem is more complicated here since we cannot use
a predefined blending graph: the blending properties between
the set of point skeletons must change during the
animation, according to the separation that is detected.
As a consequence, our method is based on the computation
of a time varying blending graph. At each animation
step, the current blending graph is stored as a list of neigh-
bors, the so-called "blending list", associated with each point
skeleton. Processing unwanted blending is done by reducing
blending lists each time the implicit surface breaks into
disconnected components that must not blend any more.
The algorithm we use is the following:
. Before the animation, the blending graph is initialized
as a complete graph, where each skeleton is connected
to every other one. This corresponds to the standard
field function, computed as the sum of all the field
contributions.
. At each animation step:
1. For each pair of point skeletons that blend together,
we check if their spheres of influence, defined by the
radius of influence of their fields, intersect. This
relation defines an "influence graph". We then use
the transitive closure of this graph for computing
the blending graph we are looking for. For instance, in Figure 15, point A is detected to be in the same component as point B, while the separation from the other part is detected.
2. Collisions are detected between skeleton territories
not connected in the blending graph, as was done
in Section V-A.3. As a result, pieces of substance
that separate from the same body collide instead of
blending when they come back close to each other.
Note that intersection tests on spheres of influence do not detect disconnections as soon as they appear, but only when there is no more implicit contribution between the disconnected parts. However, we cannot reduce the blending list earlier without a sudden alteration of the shape of the disconnected components.
Fig. 15. The influence graph and its connected components. Particles A and B lie in the same component, and thus their fields will blend if they come close to each other.
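The connected-component computation on the influence graph can be sketched with a simple union-find, as below. For clarity the sketch recomputes components from scratch at each step, whereas the method described above only ever reduces existing blending lists; positions and radii are illustrative.

```python
# Sketch of the blending-graph update of Section B.2: connect skeletons whose
# spheres of influence intersect, then take connected components (the
# transitive closure mentioned in the text) to form blending lists.
import math

def connected_components(points, radii):
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i
    def union(i, j):
        parent[find(i)] = find(j)
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= radii[i] + radii[j]:
                union(i, j)                 # spheres of influence intersect
    return [find(i) for i in range(len(points))]

def blending_lists(points, radii):
    comp = connected_components(points, radii)
    return [[j for j in range(len(points)) if comp[j] == comp[i]]
            for i in range(len(points))]

if __name__ == "__main__":
    pts = [(0, 0, 0), (1, 0, 0), (5, 0, 0)]       # the third skeleton has drifted away
    print(blending_lists(pts, [1.0, 1.0, 1.0]))   # -> [[0, 1], [0, 1], [2]]
```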
B.3 Modeling fusion under compression
Blocks of soft substance such as clay or dough merge
under compression forces that exceed a specified threshold.
This behavior can easily be simulated with our model.
A fusion threshold is associated with each substance.
Each time a collision is computed between two components
of the same substance, the compression force along the contact
surface between two skeleton territories is compared
with the fusion threshold. If the threshold is exceeded,
those skeletons that contribute to the contact surface for one
component are added to the blending list for those skeletons
contributing to the other component's contact surface.
At the next time step, fields from the two pieces will then
locally blend in this area, while collisions will still be computed between the rest of the components, as illustrated in Figure 16. This merging will endure in time, unless the two pieces happen to be disconnected again by subsequent deformations. This method does not ensure, however, C 1 continuity everywhere, as observed in [20]: tangent discontinuities may appear locally at some intermediate steps of the fusion.
Fig. 16. Progressive fusion under compression of two soft substances.
Instant fusion can be handled even more easily. Figure 17 shows four steps of an animation where a piece of
soft substance is grabbed away by pliers and then released.
The substance is made of nine particles only, and the fusion
threshold has been set to zero, so that the substance
immediately merges back after a collision. Here, volume
preservation is essential. Otherwise, a very large and sudden
increase of volume would be produced between the two
last frames.
Figure 18 exhibits four frames from the animation
Kitchen Fiction [16]. It shows the application of all the
techniques detailed above to a more complex animation
where a set of rigid tools manipulates three different
soft substances.
Fig. 17. Soft substance grabbed away by pliers and released.
Fig. 18. Four frames of the animation Kitchen Fiction.
VI. Conclusion
This paper has presented a general framework for building
layered deformable models with implicit surfaces. The
implicit formulation is particularly well adapted to a layered
construction. It can be used for coating any reference
component as, for instance, a rigid solid, an articulated
structure, a mass-springs network, or a particle system. It
defines a smooth surface around the object that can be
used for rendering, and offers simple yet precise processing of collisions and contacts. The implicit inside-outside function facilitates collision detection, while the deformation of
the implicit layer generates exact contact surfaces between
colliding bodies. The physically-based model associated
with the implicit layer approximates elasticity and allows
the computation of response forces due to compression and
friction. Moreover, preservation of deformed objects' volume
is possible, even when the objects undergo significant
changes such as separation or fusion.
We have illustrated this framework by detailing two very
different applications: the animation of rigid articulated bodies coated with implicit flesh and the simulation of soft substances performing separation and fusion. The first of these applications should lead to interesting developments in the character animation area. Our layered framework would offer a compact way of modeling both geometry and
the physical characteristics of simplified characters. Local
adjustments of volume through time could be used for generating
more expressive animations. Lastly, the capability
of processing collisions and contact between a character and
other objects of the scene would be an essential benefit of
our approach.
Acknowledgements
We wish to thank Jean-Dominique Gascuel and Nicolas
Tsingos for their contributions to the development of the
animation software. Many thanks to Agata Opalach and
Jules Bloomenthal for fruitful discussions, to reviewers for
very helpful comments, and to Andrew Hanson and George
Drettakis for carefully rereading this paper.
--R
David Baraff
David Baraff
Extended field functions for soft objects.
Polygonisation of implicit surfaces.
Interactive techniques for implicit modeling.
Interactive skeleton technique for enhancing motion dynamics in key frame animation.
Highly deformable material for animation and collision processing.
Animating soft substances with implicit surfaces.
Adaptive sampling of implicit surfaces for interactive modeling and animation.
A surface model for skeleton-based character animation
Simply implicit.
Implicit patches: An optimized and powerful ray intersection algorithm for implicit surfaces.
Displacement constraints for interactive modeling and animation of articulated structures.
An implicit formulation for precise contact modeling between flexible solids.
A modeling system for complex deformable bodies suited to animation and collision processing.
Classical Mechanics.
Simulation of object and human skin deformations in a grasping task.
Controlled blending for implicit surfaces using a graph.
Controlled blending of procedural implicit surfaces.
Principles of traditional animation applied to 3d computer animation.
Marching cubes: a high resolution 3D surface construction algorithm.
The motion dynamics of snakes and worms.
Globular dynamics: A connected particle system for animating viscous fluids.
Impulse based simulation of rigid bodies.
Collision detection and response for computer animation.
An evaluation of implicit surface tilers.
Implicit surfaces: Appear- ance
High level control of implicit surfaces for character animation.
Good vibrations: Modal dynamics for graphics and animation.
Constraint methods for flexible mod- els
Alla She
Stan Sclaroff
Modeling inelastic deformations: Viscoelasticity
Elastically deformable models.
Physically based model with rigid and deformable components.
Modeling liquids and solids using thermal par- ticles
A system for construsting and animating layered elastic characters.
Advanced Animation and Rendering Techniques.
Octree for faster isosurface generation.
Using particles to sample and control implicit surfaces.
Fast animation and control for non-rigid structures
--TR
--CTR
Daniel Nixon , Richard Lobb, A Fluid-Based Soft-Object Model, IEEE Computer Graphics and Applications, v.22 n.4, p.68-75, July 2002
Bryan E. Feldman , James F. O'Brien , Bryan M. Klingner , Tolga G. Goktekin, Fluids in deforming meshes, Proceedings of the 2005 ACM SIGGRAPH/Eurographics symposium on Computer animation, July 29-31, 2005, Los Angeles, California
Tolga G. Goktekin , Adam W. Bargteil , James F. O'Brien, A method for animating viscoelastic fluids, ACM Transactions on Graphics (TOG), v.23 n.3, August 2004
Bryan M. Klingner , Bryan E. Feldman , Nuttapong Chentanez , James F. O'Brien, Fluid animation with dynamic meshes, ACM Transactions on Graphics (TOG), v.25 n.3, July 2006
Bryan E. Feldman , James F. O'Brien , Bryan M. Klingner, Animating gases with hybrid meshes, ACM Transactions on Graphics (TOG), v.24 n.3, July 2005
Guillaume Dewaele , Marie-Paule Cani, Interactive global and local deformations for virtual clay, Graphical Models, v.66 n.6, p.352-369, November 2004
Jing Hua , Hong Qin, Haptics-based volumetric modeling using dynamic spline-based implicit functions, Proceedings of the 2002 IEEE symposium on Volume visualization and graphics, October 28-29, 2002, Boston, Massachusetts
Victor B. Zordan , Bhrigu Celly , Bill Chiu , Paul C. DiLorenzo, Breathe easy: model and control of human respiration for computer animation, Graphical Models, v.68 n.2, p.113-132, March 2006
Capturing the complexity of hair motion, Graphical Models, v.64 n.1, p.40-58, January 2002
Victor Brian Zordan , Bhrigu Celly , Bill Chiu , Paul C. DiLorenzo, Breathe easy: model and control of simulated respiration for animation, Proceedings of the 2004 ACM SIGGRAPH/Eurographics symposium on Computer animation, August 27-29, 2004, Grenoble, France
Adam W. Bargteil , Tolga G. Goktekin , James F. O'brien , John A. Strain, A semi-Lagrangian contouring method for fluid simulation, ACM Transactions on Graphics (TOG), v.25 n.1, p.19-38, January 2006
Alexis Angelidis , Marie-Paule Cani, Adaptive implicit modeling using subdivision curves and surfaces as skeletons, Proceedings of the seventh ACM symposium on Solid modeling and applications, June 17-21, 2002, Saarbrcken, Germany
Xiaogang Jin , Chiew-Lan Tai , Jieging Feng , Qunsheng Peng, Convolution surfaces for line skeletons with polynomial weight distributions, Journal of Graphics Tools, v.6 n.3, p.17-28, 2001
Marie-Paule Cani , Alexis Angelidis, Towards virtual clay, ACM SIGGRAPH 2006 Courses, July 30-August 03, 2006, Boston, Massachusetts
G. H. Bendels , R. Klein, Mesh forging: editing of 3D-meshes using implicitly defined occluders, Proceedings of the Eurographics/ACM SIGGRAPH symposium on Geometry processing, June 23-25, 2003, Aachen, Germany
Hai-Yin Xu , Dan Li , Jian Wang, Implicit curve oriented inbetweening for motion animation, Proceedings of the 4th international conference on Computer graphics and interactive techniques in Australasia and Southeast Asia, November 29-December 02, 2006, Kuala Lumpur, Malaysia
Daniel Nixon , Richard Lobb, A Fluid-Based Soft-Object Model, IEEE Computer Graphics and Applications, v.22 n.4, p.68-75, July 2002
Masatoshi Matsumiya , Haruo Takemura , Naokazu Yokoya, An immersive modeling system for 3D free-form design using implicit surfaces, Proceedings of the ACM symposium on Virtual reality software and technology, October 22-25, 2000, Seoul, Korea
Jing Hua , Hong Qin, Haptics-Based Dynamic Implicit Solid Modeling, IEEE Transactions on Visualization and Computer Graphics, v.10 n.5, p.574-586, September 2004
Allgre , Eric Galin , Raphalle Chaine , Samir Akkouche, The HybridTree: mixing skeletal implicit surfaces, triangle meshes, and point sets in a free-form modeling system, Graphical Models, v.68 n.1, p.42-64, January 2006
Mashhuda Glencross , Alan G. Chalmers , Ming C. Lin , Miguel A. Otaduy , Diego Gutierrez, Exploiting perception in high-fidelity virtual environmentsAdditional presentations from the 24th course are available on the citation page, ACM SIGGRAPH 2006 Courses, July 30-August 03, 2006, Boston, Massachusetts | inelasticity;animation;implicit surfaces;collision detection;collision response;deformable models |
614370 | The Lazy Sweep Ray Casting Algorithm for Rendering Irregular Grids. | AbstractLazy Sweep Ray Casting is a fast algorithm for rendering general irregular grids. It is based on the sweep-plane paradigm, and it is able to accelerate ray casting for rendering irregular grids, including disconnected and nonconvex (even with holes) unstructured irregular grids with a rendering cost that decreases as the "disconnectedness" decreases. The algorithm is carefully tailored to exploit spatial coherence even if the image resolution differs substantially from the object space resolution.Lazy Sweep Ray Casting has several desirable properties, including its generality, (depth-sorting) accuracy, low memory consumption, speed, simplicity of implementation, and portability (e.g., no hardware dependencies).We establish the practicality of our method through experimental results based on our implementation, which is shown to be substantially faster (by up to two orders of magnitude) than other algorithms implemented in software.We also provide theoretical results, both lower and upper bounds, on the complexity of ray casting of irregular grids. | Introduction
For the visualization of three-dimensional data, whether scalar or vector, direct volume rendering
has emerged as a leading, and often preferred, method. While surface rendering method can be
Partially supported by Sandia National Labs, and by the National Science Foundation (NSF), grant CDA-9626370.
Part of this work was conducted while C. Silva was partially supported by CNPq-Brazil on a Ph.D. fellowship.
y Partially supported by grants from NSF (CCR-9504192), Hughes Aircraft, Boeing, and Sun Microsystems.
applied to visualize volumetric data, they require the extraction of some structure, such as isosurfaces
or streamlines, which may bias the resulting visualization. In rendering volumetric data directly,
we treat space as composed of semi-transparent material that can emit, transmit, and absorb light,
thereby allowing one to "see through" (or see inside) the data [43, 22, 21]. Volume rendering also
allows one to render surfaces, and, in fact, by changing the properties of the light emission and
absorption, different lighting effects can be achieved [18].
The most common input data type is a regular (Cartesian) grid of voxels. Given a general scalar
field in R 3 , one can use a regular grid of voxels to represent the field by regularly sampling the function at grid points (δi, δj, δk), for integers i, j, k, and some scale factor δ in R, thereby creating
a regular grid of voxels. However, a serious drawback of this approach arises when the scalar field
is disparate, having nonuniform resolution with some large regions of space having very little field
variation, and other very small regions of space having very high field variation. In such cases,
which often arise in computational fluid dynamics and partial differential equation solvers, the use
of a regular grid is infeasible since the voxel size must be small enough to model the smallest
"features" in the field. Instead, irregular grids (or meshes), having cells that are not necessarily
uniform cubes, have been proposed as an effective means of representing disparate field data.
Irregular grid data comes in several different formats [37, 41]. One very common format has
been curvilinear grids, which are structured grids in computational space that have been "warped"
in physical space, while preserving the same topological structure (connectivity) of a regular grid.
However, with the introduction of new methods for generating higher quality adaptive meshes, it is
becoming increasingly common to consider more general unstructured (non-curvilinear) irregular
grids, in which there is no implicit connectivity information. Furthermore, in some applications
disconnected grids arise.
Rendering of irregular grids has been identified as an especially important research area in visualization
[17]. The basic problem consists of evaluating a volume rendering equation [21] for each
pixel of the image screen. To do this, it is necessary to have, for each line of sight (ray) through
an image pixel, the sorted order of the cells of the mesh along the ray. This information is used to
evaluate the overall integral in the rendering equation.
In this paper, we present and analyze the Lazy Sweep Ray Casting algorithm, a new method for
rendering general meshes, which include unstructured, possibly disconnected, irregular grids. A primary
contribution of the Lazy Sweep Ray Casting (LSRC) algorithm is a new method for accurately
calculating the depth-sorted ordering. LSRC is based on ray casting and employs a sweep-plane ap-
proach, as proposed by Giertsen [15], but introduces several new ideas that permit a faster execution,
both in theory and in practice.
This paper is built upon the paper of Silva, Mitchell, and Kaufman [36], where the fundamentals
of our method were developed. In the months since the writing of [36], we have made several
improvements and extensions; as we report our latest results here, we will compare them to the
results in the earlier work of [36].
Definitions and Terminology
A polyhedron is a closed subset of R 3 whose boundary consists of a finite collection of convex
polygons (2-faces, or facets) whose union is a connected 2-manifold. The edges (1-faces) and
vertices (0-faces) of a polyhedron are simply the edges and vertices of the polygonal facets. A
convex polyhedron is called a polytope. A polytope having exactly four vertices (and four triangular facets) is called a simplex (tetrahedron). A finite set S of polyhedra forms a mesh (or an unstructured,
irregular grid) if the intersection of any two polyhedra from S is either empty, a single common
edge, a single common vertex, or a single common facet of the two polyhedra; such a set S is said
to form a cell complex. The polyhedra of a mesh are referred to as the cells (or 3-faces). If the
boundary of a mesh S is also the boundary of the convex hull of S, then S is called a convex mesh;
otherwise, it is called a nonconvex mesh. If the cells are all simplices, then we say that the mesh is
simplicial.
We are given a mesh S. We let c denote the number of connected components of S. If c = 1, we say that the mesh is connected; otherwise, the mesh is disconnected. We let n denote the total
number of edges of all polyhedral cells in the mesh. Then, there are O(n) vertices, edges, facets,
and cells.
For some of our theoretical discussions, we will be assuming that the input mesh is given in any
standard data structure for cell complexes (e.g., a facet-edge data structure [10]), so that each cell
has pointers to its neighboring cells, and basic traversals of the facets are also possible by following
pointers. If the raw data does not have this topological information already encoded in it, then it can
be obtained by a preprocessing step, using basic hashing methods.
Our implementation of the LSRC algorithm relies on only a very simple and economical structure
in the input data. In particular, we store with each vertex v its "use set" (see [32]), which is simply a
list of the cells of the mesh that "use" v (have v as a vertex of the cell). Note that this requires only
O(n) storage, since the total size of all use sets is bounded by the sum of the sizes of the cells.
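A minimal sketch of how such use sets can be built in a single pass over the cells (the data layout below is illustrative, not the paper's):

```python
# Sketch of the "use set" structure: for each vertex, the list of cells that
# use it. Total storage is proportional to the sum of the cell sizes, i.e. O(n).
from collections import defaultdict

def build_use_sets(cells):
    """cells: list of cells, each a tuple of vertex indices (e.g. 4 for a tetrahedron)."""
    use = defaultdict(list)
    for c, verts in enumerate(cells):
        for v in verts:
            use[v].append(c)
    return use

if __name__ == "__main__":
    tets = [(0, 1, 2, 3), (1, 2, 3, 4)]
    print(dict(build_use_sets(tets)))   # vertex 1 is used by cells 0 and 1, etc.
```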
The image space consists of a screen of N-by-N pixels. We let ρ i,j denote the ray from the eye of the camera through the center of the pixel indexed by (i, j). We let k i,j denote the number of facets of S that are intersected by ρ i,j . Finally, we let k = Σ i,j k i,j be the total complexity of all ray casts for the image. We refer to k as the output complexity. Clearly, Ω(k) is a lower bound on the complexity of ray casting the mesh. Note that k = O(N 2 n), since each of the N 2 rays intersects at most O(n) facets.
Related Work
A simple approach for handling irregular grids is to resample them, thereby creating a regular grid
approximation that can be rendered by conventional methods [28, 42]. In order to achieve high
accuracy it may be necessary to sample at a very high rate, which not only requires substantial
computation time, but may well make the resulting grid far too large for storage and visualization
purposes. Several rendering methods have been optimized for the case of curvilinear grids; in partic-
ular, Frühauf [12] has developed a method in which rays are "bent" to match the grid deformation.
Also, by exploiting the simple structure of curvilinear grids, Mao et al. [20] have shown that these
grids can be efficiently resampled with spheres and ellipsoids that can be presorted along the three
major directions and used for splatting.
A direct approach to rendering irregular grids is to compute the depth sorting of cells of the mesh
along each ray emanating from a screen pixel. For irregular grids, and especially for disconnected
grids, this is a nontrivial computation to do efficiently. One can always take a naive approach, and for
each of the N 2 rays, compute the O(n) intersections with cell boundary facets in time O(n), and then
sort these crossing points (in O(n log n) time). However, this results in overall time O(N 2 n log n),
and does not take advantage of coherence in the data: The sorted order of cells crossed by one ray
is not used in any way to assist in the processing of nearby rays.
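For concreteness, the naive approach can be sketched as follows, assuming triangular boundary facets and a standard ray/triangle intersection test; nothing here exploits coherence, which is exactly the cost the sweep-based methods below avoid.

```python
# Sketch of the naive per-ray approach: intersect every facet with each ray and
# sort the hits by depth (Moller-Trumbore-style test, triangular facets assumed).

def ray_triangle_t(orig, d, tri, eps=1e-9):
    """Return the ray parameter t of the intersection, or None."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = tri
    e1 = (bx - ax, by - ay, bz - az)
    e2 = (cx - ax, cy - ay, cz - az)
    p = (d[1]*e2[2]-d[2]*e2[1], d[2]*e2[0]-d[0]*e2[2], d[0]*e2[1]-d[1]*e2[0])
    det = sum(a*b for a, b in zip(e1, p))
    if abs(det) < eps:
        return None
    s = (orig[0]-ax, orig[1]-ay, orig[2]-az)
    u = sum(a*b for a, b in zip(s, p)) / det
    q = (s[1]*e1[2]-s[2]*e1[1], s[2]*e1[0]-s[0]*e1[2], s[0]*e1[1]-s[1]*e1[0])
    v = sum(a*b for a, b in zip(d, q)) / det
    if u < 0 or v < 0 or u + v > 1:
        return None
    t = sum(a*b for a, b in zip(e2, q)) / det
    return t if t > eps else None

def depth_sorted_crossings(orig, d, facets):
    """O(n) intersections plus an O(n log n) sort, repeated for each of N^2 rays."""
    hits = [t for tri in facets if (t := ray_triangle_t(orig, d, tri)) is not None]
    return sorted(hits)

if __name__ == "__main__":
    tris = [((0, 0, z), (1, 0, z), (0, 1, z)) for z in (3.0, 1.0, 2.0)]
    print(depth_sorted_crossings((0.2, 0.2, 0.0), (0.0, 0.0, 1.0), tris))  # [1.0, 2.0, 3.0]
```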
Garrity [14] has proposed a preprocessing step that identifies the boundary facets of the mesh.
When processing a ray as it passes through interior cells of the mesh, connectivity information is
used to move from cell to cell in constant time (assuming that cells are convex and of constant
complexity). But every time that a ray exits the mesh through a boundary facet, it is necessary to
perform a "FirstCell" operation to identify the point at which the ray first reenters the mesh. Garrity
does this by using a simple spatial indexing scheme based on laying down a regular grid of voxels
(cubes) on top of the space, and recording each facet with each of the voxels that it intersects. By
casting a ray in the regular grid, one can search for intersections only among those facets stored with
each voxel that is stabbed by the ray.
The FirstCell operation is in fact a "ray shooting query", for which the field of computational
geometry provides some data structures: One can either answer queries in time O(log n), at a cost of
preprocessing and storage [2, 4, 8, 27], or answer queries in worst-case time O(n 3/4 ), using
a data structure that is essentially linear in n [3, 33]. In terms of worst-case complexity, there are
reasons to believe that these tradeoffs between query time and storage space are essentially the best
possible. Unfortunately, these algorithms are rather complicated, with high constants, and have not
yet been implemented or shown to be practical. (Certainly, data structures with super-linear storage
costs are not practical in volume rendering.) This motivated Mitchell, Mount, and Suri [23] to devise
methods of ray shooting that are "query sensitive" - the worst-case complexity of answering the
query depends on a notion of local combinatorial complexity associated with a reasonable estimate
of the "difficulty" of the query, so that "easy" queries take provably less time than "hard" queries.
Their data structure is based on octrees (as in [31]), but with extra care needed to keep the space
complexity low, while achieving the provably good query time.
Uselton [39] proposed the use of a Z-buffer to speed up FirstCell; Ramamoorthy and Wilhelms [30]
point out that this approach is only effective 95% of the time. They also point out that 35% of the
time is spent checking for exit cells and 10% for entry cells. Ma [19] describes a parallelization
of Garrity's method. One of the disadvantages of these ray casting approaches is that they do not
exploit coherence between nearby rays that may cross the same set of cells.
Another approach for rendering irregular grids is the use of projection ("feed-forward") methods
[22, 45, 34, 38], in which the cells are projected onto the screen, one-by-one, in a visibility
order, incrementally accumulating their contributions to the final image. One advantage of these
methods is the ability to use graphics hardware to compute the volumetric lighting models in order
to speed up rendering. Another advantage of this approach is that it works in object space, allowing
coherence to be exploited directly: By projecting cells onto the image plane, we may end up with
large regions of pixels that correspond to rays having the same depth ordering, and this is discovered
without explicitly casting these rays. However, in order for the projection to be possible a depth
ordering of the cells has to be computed, which is, of course, not always possible; even a set of three
triangles can have a cyclic overlap. Computing and verifying depth orders is possible in O(n 4/3+ε ) time [1, 7, 9], where ε > 0 is an arbitrarily small positive constant. In case no depth ordering exists,
it is an important problem to find a small number of "cuts" that break the objects in such a way that
a depth ordering does exist; see [7, 5]. BSP trees have been used to obtain such a decomposition, but
can result in a quadratic number of pieces [13, 26]. However, for some important classes of meshes
(e.g., rectilinear meshes and Delaunay meshes [11]), it is known that a depth ordering always ex-
ists, with respect to any viewpoint. This structure has been exploited by Max et al. [22]. Williams
[45] has obtained a linear-time algorithm for visibility ordering convex (connected) acyclic meshes
whose union of (convex) cells is itself convex, assuming a visibility ordering exists. Williams also
suggests heuristics that can be applied in case there is no visibility ordering or in the case of non-convex
meshes, (e.g., by tetrahedralizing the nonconvexities which, unfortunately, may result in a
quadratic number of cells). In [40], techniques are presented where no depth ordering is strictly
necessary, and in some cases calculated approximately. Very fast rendering is achieved by using
graphics hardware to project the partially sorted faces.
A recent scanline technique that handles multiple, and overlapping grids is presented in [44]. They
process the set of polygonal facets of cells, by first bucketing them according to which scanline contains
the topmost vertex, and then maintaining a "y-actives list" of polygons present at each scanline,
as they sweep from top to bottom (in y). Then, on each scanline, they scan in x, bucketing polygons
according to their left extent, and then maintaining (via merging) a z-sorted list of polygons,
as they scan from left to right. The method has been parallelized and used within a multi-resolution
hierarchy, based on a k-d tree.
Two other important references on rendering irregular grids have not yet been discussed here -
Giertsen [15] and Yagel et al. [47]. We elaborate on these in the next section, as they are closely
related to our method.
In summary, projection methods are potentially faster than ray casting methods, since they exploit
spatial coherence. However, projection methods give inaccurate renderings if there is no visibility
ordering, and methods to break cycles are either heuristic in nature or potentially costly in terms of
space and time.
A standard algorithmic paradigm in computational geometry is the "sweep" paradigm [29]. Com-
monly, a sweep-line is swept across the plane, or a sweep-plane is swept across 3-space. A data
structure, called the sweep structure (or status), is maintained during the simulation of the continuous
sweep, and at certain discrete events (e.g., when the sweep-line hits one of a discrete set of
points), the sweep structure is updated to reflect the change. The idea is to localize the problem
to be solved, solving it within the lower dimensional space of the sweep-line or sweep-plane. By
processing the problem according to the systematic sweeping of the space, sweep algorithms are
able to exploit spatial coherence in the data.
Figure 1: A sweep-plane (perpendicular to the y-axis) used in sweeping 3-space.
Giertsen's Method
Giertsen's pioneering work [15] was the first step in optimizing ray casting by making use of coherency
in order to speed up rendering. He performs a sweep of the mesh in 3-space, using a
sweep-plane that is parallel to the x-z plane. Here, the viewing coordinate system is such that the
viewing plane is the x-y plane, and the viewing direction is the z direction; see Figure 1. The
algorithm consists of the following steps:
1. Transform all vertices of S to the viewing coordinate system.
2. Sort the (transformed) vertices of S by their y-coordinates; put the ordered vertices, as well
as the y-coordinates of the scanlines for the viewing image, into an event priority queue,
implemented in this case as an array, A.
3. Initialize the Active Cell List (ACL) to empty. The ACL represents the sweep status; it maintains
a list of the cells currently intersected by the sweep-plane.
4. While A is not empty, do:
(a). Pop the event queue: If the event corresponds to a vertex, v, then go to (b); otherwise,
go to (c).
(b). Update ACL: Insert/delete, as appropriate, the cells incident on v. (Giertsen assumed
that the cells are disjoint, so each v belongs to a single cell.)
(c). Render current scanline: Giertsen uses a memory hash buffer, based on a regular grid of
squares in the sweep-plane, allowing a straightforward casting of the rays that lie on the
current scanline.
By sweeping 3-space, Giertsen reduces the ray casting problem in 3-space to a 2-dimensional cell
sorting problem.
Giertsen's method has several advantages over previous ray casting schemes. First, there is no
need to maintain connectivity information between cells of the mesh. (In fact, he assumes the cells
are all disjoint.) Second, the computationally expensive ray shooting in 3-space is replaced by a
simple walk through regular grid cells in a plane. Finally, the method is able to take advantage of
coherence from one scanline to the next.
However, there are some drawbacks to the method, including:
(1) The original data is coarsened into a finite resolution buffer (the memory hashing buffer) for
rendering, basically limiting the resolution of the rendering, and possibly creating an aliasing
effect. While one could simply increase the size of the buffer, this approach is impractical in
large datasets, where the cell size variation can be on the order of 1:100,000. Further, Giertsen
mentions that when cells get mapped to the same location, this is considered a degenerate
case, and the later cells are ignored; however, this resolution might lead to temporal aliasing
when calculating multiple images.
(2) Another disadvantage when comparing to other ray casting techniques is the need to have two
copies of the dataset, as the transformation and sorting of the cells must be done before the
sweeping can be started. (Note that this is also a feature of cell projection methods.) One
cannot just keep re-transforming a single copy, since floating point errors could accumulate.
Yagel et al.'s Method
In [46, 47], Yagel et al. proposed a method that uses a sweep-plane parallel to the viewing plane.
At each position of the sweep-plane, the plane is intersected with the grid, resulting in a two-dimensional
slice, each of whose cells are then scan-converted using the graphics hardware in order
to obtain an image of that slice, which can then be composited with the previously accumulated
image that resulted from the sweep so far. Several optimizations are possible. For example, instead
of performing a full sort along the z-direction, a bucketing technique can be used. Also, the intersections
of mesh edges with the slices can be accelerated by storing incremental step sizes (Δx and Δy) corresponding to the interslice distance (Δz); however, this speedup requires considerably
more memory. Furthermore, the storage of the polygons in any given slice requires a significant
amount of memory (e.g., 13.4 MB for the Blunt Fin [47]).
This method can handle general polyhedral grids without having to compute adjacency informa-
tion, and conceptually it can generate high quality images at the expense of "slice oversampling".
The simplicity of the method makes it very attractive for implementation and use. (Ideally, the user
should have access to high-performance graphics hardware and an abundance of memory.)
3 The Lazy Sweep Ray Casting Algorithm
The design of our new method is based on two main goals: (1) the depth ordering of the cells should
be correct along the rays corresponding to every pixel; and (2) the algorithm should be as efficient
as possible, taking advantage of structure and coherence in the data.
With the first goal in mind, we chose to develop a new ray casting algorithm, in order to be able
to handle cycles among cells (a case causing difficulties for projection methods). To address the
second goal, we use a sweep approach, as did Giertsen, in order to exploit both inter-scanline and
inter-ray coherence. Our algorithm has the following advantages over Giertsen's:
(1) It avoids the explicit transformation and sorting phase, thereby avoiding the storage of an extra
copy of the vertices;
(2) It makes no requirements or assumptions about the level of connectivity or convexity among
cells of the mesh; however, it does take advantage of structure in the mesh, running faster in
cases that involve meshes having convex cells and convex components;
(3) It avoids the use of a hash buffer plane, thereby allowing accurate rendering even for meshes
whose cells greatly vary in size;
(4) It is able to handle parallel and perspective projection within the same framework, without
explicit transformations.
3.1 Performing the Sweep
Our sweep method, like Giertsen's, sweeps space with a sweep-plane that is orthogonal to the viewing
plane (the x-y plane), and parallel to the scanlines (i.e., parallel to the x-z plane).
Events occur when the sweep-plane hits vertices of the mesh S. But, rather than sorting all of the
vertices of S in advance, and placing them into an auxiliary data structure (thereby at least doubling
the storage requirements), we maintain an event queue (priority queue) of an appropriate (small)
subset of the mesh vertices.
A simple (linear-time) preprocessing pass through the data readily identifies the set of vertices on
the boundary of the mesh. We initialize the event queue with these boundary vertices, prioritized
according to the magnitude of their inner product (dot product) with the vector representing the
y-axis ("up") in the viewing coordinate system (i.e., according to their y-coordinates). (We do not
explicitly transform coordinates.) Furthermore, at any given instant, the event queue only stores
the set of boundary vertices not yet swept over, plus the vertices that are the upper endpoints of
the edges currently intersected by the sweep-plane. In practice, the event queue is relatively small,
usually accounting for a very small percentage of the total data size. As the sweep takes place, new
vertices (non-boundary ones) will be inserted into and deleted from the event queue each time the
sweep-plane hits a vertex of S.
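To make the event-key computation concrete, the following C sketch shows how a vertex can be keyed by a single dot product with the view "up" vector, and how the boundary vertices can be pre-inserted into the queue; the types Vec3 and Heap and the function heap_push are hypothetical names used only for illustration, not taken from the actual implementation.

/* Hypothetical types; heap_push is assumed to exist elsewhere. */
typedef struct { double x, y, z; } Vec3;
typedef struct Heap Heap;
void heap_push(Heap *q, int vertex_id, double key);

/* Event key: the vertex's y-coordinate in the viewing frame, obtained
 * with one dot product (no full coordinate transformation). */
static double event_key(Vec3 v, Vec3 view_up)
{
    return v.x * view_up.x + v.y * view_up.y + v.z * view_up.z;
}

/* Initialize the event queue with the boundary vertices only. */
void init_event_queue(Heap *q, const Vec3 *verts, const int *boundary,
                      int nboundary, Vec3 view_up)
{
    for (int i = 0; i < nboundary; i++)
        heap_push(q, boundary[i], event_key(verts[boundary[i]], view_up));
}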
As the sweep algorithm proceeds, we maintain a sweep status data structure, which records the
necessary information about the current slice through S, in an "active edge" list - see Section 5.
When the sweep-plane encounters a vertex event (as determined by the event queue), the sweep
status and the event queue data structures must be updated. In the main loop of the sweep algorithm,
we pop the event queue, obtaining the next vertex, v, to be hit, and we check whether or not the
sweep-plane encounters v before it reaches the y-coordinate of the next scanline. If it does hit v first,
we perform the appropriate insertions/deletions on the event queue and the sweep status structure;
these are easily determined by local tests (checking the signs of dot products) in the neighborhood
of v. Otherwise, the sweep-plane has encountered a scanline; at this point, we stop the sweep
and drop into a two-dimensional ray casting procedure (also based on a sweep), as described below.
The algorithm terminates once the last scanline is processed.
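A minimal sketch of this interleaving of vertex events and scanline stops is given below; the helper routines (heap_empty, heap_top_key, heap_pop, process_vertex_event, process_scanline) are hypothetical, and the queue is assumed to yield the vertex with the largest remaining y first, since the sweep descends.

/* Assumed helpers (hypothetical names). */
typedef struct Heap Heap;
int    heap_empty(const Heap *q);
double heap_top_key(const Heap *q);   /* y of the next vertex to be hit        */
int    heap_pop(Heap *q);
void   process_vertex_event(int v);   /* update event queue and sweep status   */
void   process_scanline(int j);       /* 2D ray casting within the slice       */

void sweep3d(Heap *events, const double *scanline_y, int nscanlines)
{
    int j = 0;                        /* scanlines are presorted, decreasing y */
    while (j < nscanlines) {
        if (!heap_empty(events) && heap_top_key(events) >= scanline_y[j])
            process_vertex_event(heap_pop(events));   /* vertex comes first    */
        else
            process_scanline(j++);                    /* scanline comes first  */
    }
}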
3.2 Processing a Scanline
When the sweep-plane encounters a scanline, the current (3D) sweep status data structure gives
us a "slice" through the mesh in which we must solve a two-dimensional ray casting problem.
Let S denote the polygonal (planar) subdivision at the current scanline (i.e., S is the subdivision
obtained by intersecting the sweep-plane with the mesh S.) In time linear in the size of S, the
subdivision S can be recovered (both its geometry and its topology) by stepping through the sweep
status structure, and utilizing the local topology of the cells in the slice. (The sweep status gives
us the set of edges intersecting the sweep plane; these edges define the vertices of S, and the edges
of S can be obtained by searching the set of triangular facets incident on each such edge.) In our
implementation, however, S is not constructed explicitly, but only given implicitly by the sweep
status data structure (a list of "active edges"), and then locally reconstructed as needed during the
two-dimensional sweep (described below). The details of the implementation are non-trivial and
they are presented in Section 5.
The two-dimensional ray casting problem is also solved using a sweep algorithm - now we
sweep the plane with a sweep-line parallel to the z axis. (Or, in the case of perspective projection,
we sweep with a ray emanating from the viewer's eye.) Events now correspond to vertices of the
planar subdivision S, which occur at intersection points between an "active edge" in the (3D) sweep
status and the current sweep-plane. These event points are processed in x-order; thus, we begin by
sorting them. (An alternative approach, mentioned in Section 4, is to proceed as we did in 3D, by
first identifying and sorting only the locally extremal vertices of S, and then maintaining an event
queue during the sweep. Since a single slice has relatively few event points, compared with the size
of S, we opted, in our implementation, simply to sort them outright.) The sweep-line status is an
ordered list of the segments of S crossed by the sweep-line. The sweep-line status is initially empty.
Then, as we pass the sweep-line over S, we update the sweep-line status at each event point, making
(local) insertions and deletions as necessary. (This is analogous to the Bentley-Ottmann sweep that
is used for computing line segment intersections in the plane [29].) We also stop the sweep at each
of the x-coordinates that correspond to the rays that we are casting (i.e., at the pixel coordinates
along the current scanline), and output to the rendering model the sorted ordering (depth ordering)
given by the current sweep-line status.
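The structure of this per-slice sweep can be sketched as follows; EventPoint, SegList, update_status, and render_ray are hypothetical names, the event points are assumed already sorted by x, and pixel_x[] holds the x-coordinates of the rays cast along the current scanline.

typedef struct { double x; /* ... crossing data ... */ } EventPoint;
typedef struct SegList SegList;        /* doubly linked sweep-line status       */
void update_status(SegList *status, const EventPoint *e);  /* local updates     */
void render_ray(double x, const SegList *status);          /* status = depth order */

void sweep2d(const EventPoint *ev, int nev,
             const double *pixel_x, int npix, SegList *status)
{
    int i = 0, p = 0;
    while (p < npix) {
        if (i < nev && ev[i].x <= pixel_x[p])
            update_status(status, &ev[i++]);   /* next event point in x-order   */
        else
            render_ray(pixel_x[p++], status);  /* cast the ray at this pixel    */
    }
}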
Analysis: Upper and Lower Bounds
We proceed now to give a theoretical analysis of the time required to render irregular grids. We
begin with "negative" results that establish lower bounds on the worst-case running time:
Theorem 1 (Lower Bounds) Let S be a mesh having c connected components and n edges. Even
if all cells of S are convex, Ω(k + n log n) is a lower bound on the worst-case complexity of ray
casting. If all cells of S are convex and, for each connected component of S, the union of cells in
the component is convex, Ω(k + c log c) is a lower bound. Here, k is the total number of facets
crossed by all N^2 rays that are cast through the mesh (one per pixel of the image plane).
Proof. It is clear that Ω(k) is a lower bound, since k is the size of the output from the ray casting.
Let us start with the case of c convex components in the mesh S, each made up of a set of convex
cells. Assume that one of the rays to be traced lies exactly along the z-axis. In fact, we can assume
that there is only one pixel, at the origin, in the image plane. Then the only ray to be cast is the one
along the z-axis, and k simply measures how many cells it intersects. To show a lower bound of
Ω(c log c), we simply note that any ray tracing algorithm that outputs the intersected cells, in order,
along a ray can be used to sort c numbers, z_1, ..., z_c. (Just construct, in O(c) time, tiny disjoint tetrahedral
cells, one centered on each z_i.)
Now consider the case of a connected mesh S, all of whose cells are convex. We assume that all
local connectivity of the cells of S is part of the input mesh data structure. (The claim of the theorem
is that, even with all of this information, we still must effectively perform a sort.) Again, we claim
that casting a single ray along the z-axis will require that we effectively sort n numbers, z_1, ..., z_n.
We take the unsorted numbers z_i
and construct a mesh S as follows. Take a unit cube centered on
the origin and subtract from it a cylinder, centered on the z-axis, with cross sectional shape a regular
2n-gon, having radius less than 1/2. Now remove the half of this polyhedral solid that lies above the
x-z plane. We now have a polyhedron P of genus 0 that we have constructed in time O(n). We refer
to the n (skinny) rectangular facets that bound the concavity as the "walls". Now, for each point
(0, 0, z_i), create a thin "wedge" that contains (0, 0, z_i) (and no other point (0, 0, z_j), j ≠ i), such
that the wedge is attached to wall i (and touches no other wall). Refer to Figure 2. We now have a
polyhedron P , still of genus 0, of size O(n), and this polyhedron is easily decomposed in O(n) time
into O(n) convex polytopes. Further, the z-axis intersects (pierces) all n of the wedges, and does so
in the order given by the sorted order of the z_i's. Thus, the output of a ray tracing algorithm that has
one ray along the z-axis must give us the sorted order of the n wedges, and hence of the n numbers
z_i. The Ω(n log n) bound follows. □
Remark. It may be tempting to think that if one is given a convex mesh (e.g., connected, with
tetrahedral cells), that this information can be used to sort the vertices of the mesh (e.g., by x-
coordinate) in linear time, thereby using topological information to make sweep algorithms more
efficient. However, it is easy to show that, even in 2 dimensions, if we are given a triangulation,
with complete topological information, it still requires Ω(n log n) time to sort the n vertices by their
x-coordinates. (The proof, based on a reduction from sorting, is left to the reader.)
Upper Bounds
The previous theorem establishes lower bounds that show that, in the worst case, any ray casting
method will have complexity that is superlinear in the problem size - essentially, it is forced to do
some sorting. However, the pathological situations in the lower bound constructions are unlikely to
arise in practice.
Figure 2: Lower bound construction.
We now examine upper bounds for the running time of the sweep algorithm we have proposed,
and we discuss how its complexity can be written in terms of other parameters that capture problem
instance complexity.
First, we give a worst-case upper bound. In sweeping 3-space, we have O(n) vertex events, plus
N (presorted) "events" when we stop the sweep and process the 2-dimensional slice corresponding
to a scanline. Each operation (insertion/deletion) on the priority queue requires time O(log M),
where M is the maximum size of the event queue. In the worst case, M can be of the order of n, so
we get a worst-case total of O(N + n log n) time to perform the sweep of 3-space.
For each scanline slice, we must perform a sweep as well, on the subdivision S, which has worst-case
size O(n). The events in this sweep algorithm include the O(n) vertices of the subdivision
(which are intersections of the slice plane with the edges of the mesh, S), as well as the N (presorted)
"events" when we stop the sweep-line at discrete pixel values of x, in order to output the ordering
(of size k_{i,j} for the ith pixel in the jth scanline) along the sweep-line, and pass it to the rendering
module. Thus, in the worst case, this sweep of 2-space requires time O((N + n + Σ_i k_{i,j}) log n) for slice
j, for an overall cost, for all N slices, of O(Σ_j (N + n + Σ_i k_{i,j}) log n) = O((N^2 + Nn + k) log n).
Now, the product term, Nn, in the bound of O((N^2 + Nn + k) log n) is due to the fact that each of
the N slices might have complexity roughly n. However, this is a pessimistic bound for practical
situations. Instead, we can let n_s denote the total sum of the complexities of all N slices; in practice,
we expect n_s to be much smaller than Nn, and potentially n_s is considerably smaller than n. (For
example, if the mesh is uniform, we may expect each slice to have complexity of n^{2/3}, as in the case
of an n^{1/3}-by-n^{1/3}-by-n^{1/3} grid, which gives rise to n_s = N n^{2/3}.) If we now write the complexity
in terms of n_s, we get a worst-case running time of O((k + N^2 + n_s) log n).
Note that, in the worst case, k can be Ω(N^2 n), since it may be that every one of the N^2 rays crosses
Ω(n) of the facets in the mesh. Thus, the output size k could end up being the dominant term in the
complexity of our algorithm. Note too that the total running time is Ω(N^2), even in the best case, since
there are N^2 rays. (The upper bound of O((k + N^2 + n_s) log n) should be contrasted with the bound
O(N^2 n log n) obtained from the most naive method of ray casting, which computes the intersections of
all N^2 rays with all O(n) facets, and then sorts the intersections along each ray.)
The O(n log n) term in the upper bound comes from the sweep of 3-space, where, in the worst
case, we may be forced to (effectively) sort the O(n) vertices (via O(n) insertions/deletions in the
event queue). We now discuss how we can analyze the complexity in terms of the number, n c , of
"critical" vertices; this approach was used in the 2-dimensional triangulation algorithm of Hertel
and Mehlhorn [16].
Consider the sweep of 3-space with the sweep-plane. We say that vertex v is critical if, in a small
neighborhood of v, the number of connected components in the slice changes as the sweep-plane
passes through v. (Thus, vertices that are locally min or max are critical, but also some "saddle"
points may be critical.) Let n_c denote the number of critical vertices. Now, note that the lower
bound construction that shows that, in the worst case, we must resort to sorting, is quite contrived:
In particular, it has n_c = Ω(n), while one would expect in practice that n_c is very small (say, on the
order of c, the number of connected components of the mesh).
Now, if we conduct our sweep of 3-space carefully, then we can get away with only having to
sort the critical vertices, resulting in total time O(n + n_s + n_c log n_c) for constructing all N of the
slices. (Similarly, Hertel and Mehlhorn [16] were able to triangulate polygonal regions in the plane
in time O(n + r log r), where r is the number of reflex vertices, compared with the previous bound of
O(n log n) based on plane sweep.)
The main idea is to exploit the topological coherence between slices, noting that the number of
connected components changes only at critical vertices (and their y-coordinates are sorted, along
with the N scanlines). In particular, we can use depth-first search to construct each connected
component of S within each slice, given a starting "seed" point in each component. These seed
points are obtained from the seed points of the previous slice, simply by walking along edges of the
grid (in the direction of increasing y-coordinate), from one seed to the next slice (in total time O(n),
for all walks); changes only occur at critical vertices, and these are local to these points, so they can
be processed in time linear in the degree of the critical vertices (again, overall O(n)). This sweep of
3-space gives us the slices, each of which can then be processed as already described. (Note that the
extremal vertices within each slice can be discovered during the construction of the slice, and these
are the only vertices that need to be sorted and put into the initial event queue for the sweep of a
slice.)
In summary, we have
Theorem 2 (Upper Bound) Ray casting for an irregular grid having n edges can be performed in
time O(n + n_s + k + N^2 + n_c log n_c), where k is the size of the output (the total number
of facets crossed by all cast rays), n_s is the total complexity of all slices, and n_c
is the number of critical vertices.
Remark. The upper bound shows only linear dependence on n, while the lower bound theorem
showed an Ω(n log n) lower bound. This is not a contradiction, since, in the proof of the lower
bound, the construction has n_c = Ω(n) critical vertices; this is in agreement with the upper bound.
Figure
3: Illustration of a sweep in one slice.
Another potential savings, particularly if the image resolution is low compared with the mesh
resolution, is to "jump" from one slice to the next, without using the sweep to discover how one
slice evolves into the next. We can instead construct the next slice from scratch, using a depth-first
search through the mesh, and using "seed" points that are found by intersecting the new slice plane
with a critical subgraph of mesh edges that connects the critical vertices of the mesh. Of course,
we do not know a priori if it is better to sweep from slice i to slice i+1, or to construct slice i+1 from
scratch. Thus, we can perform both methods in parallel, on two processors, and use the
result obtained by the first processor to complete its task. (Alternatively, we can achieve the same
effect using a single processor by performing a "lock step" algorithm, doing steps in alternation
between the two methods.) This results in an asymptotic complexity that is the minimum of the
complexities of the two methods. This scheme applies not just to the sweep in 3-space, but also to
the sweeps in each slice.
As an illustration of how these methods can be quite useful, consider the situation in Figure 3,
which, while drawn only in 2 dimensions, can depict the cases in 3-space as well. When we sweep
from line 2 to line 3, a huge complexity must be swept over, and this may be costly compared to
rebuilding from scratch the slice along line 3. On the other hand, sweeping from line 5 to line
6 is quite cheap (essentially no change in the geometry and topology), while constructing the slice
along line 6 from scratch would be quite costly. By performing the two methods in parallel (or in
"lock step"), we can take advantage of the best of both methods. The resulting algorithm exploits
coherence in the data and has a running time that is sensitive, in some sense, to the complexity of the
visualization task. Note that, in practice, when the image resolution is very low, one would probably
prefer to oversample and then filter, rather than to use this method of "jumping" from slice to slice
or from ray to ray.
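On a single processor, the lock-step idea can be sketched as follows; step_sweep, step_rebuild, and the state types are hypothetical, each step_* call being assumed to perform one bounded unit of work and to return nonzero when its slice is complete.

typedef struct SweepState   SweepState;    /* incremental sweep from slice i    */
typedef struct RebuildState RebuildState;  /* rebuild of slice i+1 from scratch */
typedef struct Slice        Slice;

int    step_sweep(SweepState *sw);
int    step_rebuild(RebuildState *rb);
Slice *slice_from_sweep(SweepState *sw);
Slice *slice_from_rebuild(RebuildState *rb);

/* Alternate single steps of the two methods; whichever finishes first wins. */
Slice *next_slice_lockstep(SweepState *sw, RebuildState *rb)
{
    for (;;) {
        if (step_sweep(sw))   return slice_from_sweep(sw);
        if (step_rebuild(rb)) return slice_from_rebuild(rb);
    }
}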
5 Implementation Details
We have implemented a version of the main LSRC algorithm, with some simplifications. Here, we
discuss some of the details of our implementation, concentrating on the most relevant issues unique
to implementing LSRC. We try to present enough details that an experienced graphics programmer
can reproduce our results with minimal guess work.
Our current implementation handles general disconnected grids; however, it also assumes, for
simplicity, that cells of the mesh are tetrahedra (simplices). The extension to more complex convex
(or even nonconvex) cells is conceptually straightforward, while the details are somewhat tedious
and do not contribute to the basic understanding of the algorithm.
There are other ways in which our implemented algorithm differs from the methods discussed
previously, in the section on upper bounds. This is for two reasons - simplicity of coding, and
efficiency in practice (both in terms of running time and in terms of memory). In our discussions
below, we point out how the implemented algorithm differs both for the 3D sweep (in inserting into
the heap all boundary vertices of the grid, rather than just the critical vertices) and for the 2D sweep
(in our maintaining of the sweep-line status).
Our implementation, in its entirety, consists of less than 5,000 lines of C code. We have not yet
attempted to optimize the code, so we expect that it can be further improved.
The major modules of the program include:
- 3D sweep, which sweeps the input mesh with a plane orthogonal to the viewing plane, while
maintaining an active edge list (AEdge), and marking those tetrahedra that have been swept
- 2D sweep, which sweeps a slice, producing the sorted intersections of cells along each ray of
a scanline.
We also have a graphics module that handles computations of coordinates with respect to the
viewing coordinate frame, manages the other modules, and computes the transfer function and the
optical integration (or simple shading). When we speak below of x-, y-, or z-coordinates, these are
all calculated using simple dot products (with the defining unit vectors of the viewing frame), and
are not the result of a full coordinate transformation (which we seek to avoid).
Major Data Structures
Due to the large sizes of irregular grids, efficient data structures can substantially influence the
performance and memory requirements of the implementation.
We basically have two "big" data structures:
- The Vertex list, which contains, for each vertex, its position and field value(s), its "use set"
(list of tetrahedra containing it), and a couple of other utility data fields (e.g., a general-purpose
flag).
- The Tetrahedron list, which contains, for each tetrahedron, pointers to its four vertices
and one flag data field, used to indicate if the sweep-plane has reached it yet in the 3D sweep.
In our experiments, these two main data structures typically occupy 95% of the overall space used
by the algorithm. This organization of the data is memory efficient, while allowing the necessary
connectivity information to be recovered quickly within the algorithm. Since each tetrahedron contains
4 vertices, the total amount of memory required by all of the "use sets" is bounded by 4 ×
the number of tetrahedra (this clearly extends to other cell complexes composed of cells of bounded
complexity). We collect the vertices on the boundary of the meshes in lists, so during the sweep, we
can pre-insert them on the priority queues. It is important to note that not all points on the boundary
need to be inserted.
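A possible C layout for these two structures is sketched below; the field names and types are illustrative assumptions, not a description of the actual source code.

typedef struct {
    float     pos[3];      /* position of the vertex                  */
    float     value;       /* sampled scalar field value              */
    int      *use_set;     /* indices of the tetrahedra containing it */
    int       use_count;   /* size of the use set                     */
    unsigned  flags;       /* general-purpose flag bits               */
} Vertex;

typedef struct {
    int       verts[4];    /* indices into the Vertex list            */
    unsigned  visited;     /* set once the 3D sweep-plane reaches it  */
} Tetrahedron;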
In the 3D sweep, our sweep-plane (orthogonal to the y-axis) is moved from top to bottom, in the
direction of decreasing y. As the sweep progresses, we need to be able to detect what is the next
event, which corresponds to the closest vertex in the direction of the 3D sweep (y-axis). This is
done by maintaining a priority queue that contains (some of the) vertices sorted along the y-axis. In
particular, the priority queue contains those vertices, not yet encountered by the sweep-plane, which
are the bottom endpoints of the "active" edges of S intersected by the sweep-plane. The priority
queue is implemented as a heap, 3DHeap. Vertices are inserted as they are discovered (when a
neighboring vertex above is encountered by the sweep-plane), and they are deleted as they are swept
over.
For the sweep status data structure, we do not explicitly keep a list of active tetrahedra, as this is
not necessary, but we do keep a list, AEdge, of which edges are currently active.
The AEdge (active edge) list is the central data structure in our implementation. Each AEdge
element contains data fields used in several different phases of the algorithm. We have not yet
attempted to optimize the storage space associated with the AEdge list; it typically does not contain
a particularly large number of elements, since it represents only a cross section of the dataset. (The
size of a cross section is typically only about n^{2/3}; e.g., in a regular m-by-m-by-m mesh, a cross
section has complexity O(m^2), while n = Θ(m^3).) Each active edge entry in AEdge contains:
- Pointers to its endpoints.
- A record of its intersection with the current position of the sweep-plane.
- Pointers to its "top segment" and "bottom segment" (defined below, when we detail the 2D
sweep).
- A few other data fields used for bookkeeping.
In addition to insertions and deletions, the AEdge list must support endpoint queries: Given a pair
of vertices, v and w, determine the entry of AEdge that has the pair as endpoints. For this, we have
implemented a simple-minded, hash-based dictionary data structure. We have experimented with
using other data structures for keeping AEdge, such as a binary tree, but the overhead of keeping
these more complex data structures seems to outweigh their advantages.
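The following sketch illustrates one possible shape of an AEdge entry and of an order-independent hash key for the endpoint queries; all names and the hashing constant are assumptions made for illustration, not the actual code.

typedef struct Segment Segment;      /* defined with the 2D sweep (below) */

typedef struct AEdge {
    int       v0, v1;      /* endpoint vertex indices (v0 above v1 in y)      */
    float     hit[3];      /* intersection with the current sweep-plane       */
    Segment  *top;         /* topmost leftward segment at its crossing point  */
    Segment  *bottom;      /* bottommost leftward segment                     */
    unsigned  flags;       /* bookkeeping                                     */
} AEdge;

/* Order-independent key for the endpoint-pair dictionary. */
static unsigned aedge_hash(int v, int w, unsigned table_size)
{
    unsigned a = (unsigned)(v < w ? v : w);
    unsigned b = (unsigned)(v < w ? w : v);
    return (a * 2654435761u + b) % table_size;
}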
Figure
4: Processing a cell C during the 3D sweep: (a). The sweep-plane hits the topmost vertex
of C - the three incident edges are added to AEdge; (b). The sweep-plane hits an intermediate
vertex of C - one edge is removed from AEdge and two edges are added; (c). The sweep-plane
hits another intermediate vertex of C - two edges are removed from AEdge and one edge is added;
(d). The sweep plane hits the bottommost vertex of C - three edges are removed from AEdge.
In the 2D sweep, we will also use the Segment data structure, which stores pairs of active edges
that belong to the same facet of some cell. Such a pair of active edges determines a line segment
in the current slice. Each Segment object also has two pointer fields to allow for the construction
of double-linked lists of Segment objects, corresponding to the sweep-line status data structure, a
depth-sorting of segments along each ray.
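A corresponding sketch of the Segment object, again with hypothetical field names, is:

typedef struct AEdge AEdge;    /* active edge, as sketched above */

struct Segment {
    AEdge          *e0, *e1;   /* the pair of active edges bounding a common facet */
    struct Segment *prev;      /* neighbors in the doubly linked, depth-sorted     */
    struct Segment *next;      /*   sweep-line status list                         */
};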
3D Sweep
In the 3D sweep, the events are determined by when the sweep-plane hits a vertex or arrives at a
scanline. Since the y-coordinates of the scanlines are predetermined (and sorted), we have only to
concern ourselves with the y-coordinates of the vertices. Since we are trying to be "lazy" about the
sweep, we are interested in avoiding creating a single sorted list of all vertices, so we proceed as
follows. First, in a single (preprocessing) pass over the Vertex list, we identify all of the vertices
that lie on the boundary of the grid S; typically, this set of vertices is only a tiny fraction of the total
set. Then, for a given viewing frame, we insert these boundary vertices into the 3DHeap, based on
y-coordinate key values. (Our current implementation does not take advantage of the fact that we
can restrict attention to critical vertices, as discussed in Section 4; the boundary vertices, which can
be identified in a preprocessing step (as opposed to critical vertices, which are defined with respect
to a view-dependent y-axis) will be a (still small) superset of the critical vertices.) This aspect of the
algorithm allows us to exploit nice structure that may be present in the input - grids that have few
connected components, with each component being well-shaped (having relatively few boundary
vertices), will allow our 3D sweep algorithm to run faster, as the only non-linear time component of
the algorithm is sensitive to the number of vertices on the boundary of the grid.
Next, we begin the sweep, using this 3DHeap to identify vertex events. As the sweep progresses,
we process vertex events, in a natural way, by making insertions and deletions to the 3DHeap and
the AEdge list accordingly. Based on the "use set" of a vertex, we can determine the local geometry
about it, and thereby decide what insertions/deletions to make; see Figure 4. The vertex event
processing proceeds as follows:
While 3DHeap is not empty do
(1) Remove from 3DHeap the vertex v that has smallest key value (y-coordinate).
(2) For each cell C that contains v,
(a) If v is the topmost vertex of C, insert the other vertices of C into the 3DHeap, add
the incident edges to the AEdge list, and mark C, and its vertices, as "visited".
(b) If v is the bottommost vertex of C, remove the incident edges from the AEdge list.
(c) Otherwise, make insertions and deletions from the AEdge list, according to which
edges incident on v are below or above it.
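A C sketch of the per-cell part of this classification is given below, omitting the cell marking and the heap insertions of step (a); ykey, aedge_insert, and aedge_remove are hypothetical helpers, and Vertex/Tetrahedron are as sketched in the data-structure discussion above.

double ykey(const Vertex *v);               /* viewing-frame y (dot product) */
void   aedge_insert(int vtop, int vbottom);
void   aedge_remove(int vtop, int vbottom);

/* Classify the edges of cell C incident on the event vertex v: edges going
 * downward from v become active; edges arriving from above are retired. */
void classify_cell_edges(int v, const Tetrahedron *C, const Vertex *verts)
{
    double yv = ykey(&verts[v]);
    for (int i = 0; i < 4; i++) {
        int w = C->verts[i];
        if (w == v) continue;
        if (ykey(&verts[w]) < yv)
            aedge_insert(v, w);    /* edge (v,w) enters the AEdge list */
        else
            aedge_remove(w, v);    /* edge (w,v) leaves the AEdge list */
    }
}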
The 3D sweep stops each time that the sweep-plane arrives at a scanline, at which point a 2D
sweep occurs in the corresponding slice of S. Rather than explicitly constructing the slice (e.g.,
building a winged-edge data structure for the 2-dimensional subdivision S), we use only the AEdge
list to represent implicitly the structure of S. We refer to the line segments that are edges in the
subdivision S as segments, rather than "edges", in order to distinguish them from the edges of the
3-dimensional mesh S (which are elements of AEdge). Since segments are determined by a pair of
edges bounding a common face, the Segment data type simply stores such pairs. (The endpoints
of a segment are determined by the intersection of the edges of the pair with the sweep-plane.)
In the 2D sweep, we maintain the ordering of segments intersected by a line, parallel to the z-axis,
which is swept across the slice; this data structure is the sweep-line status. Typically, sweep-line algorithms
utilize some form of balanced binary tree in order to effect efficient (logarithmic) insertion,
deletion, and lookup in the sweep-line status structure. Indeed, in our first implementation of the 2D
sweep, we too used a binary tree to store the sorted order of segment crossings; see [36]. However,
through further experimentation, we have determined that a different (and simpler) approach works
faster in practice, even though it cannot guarantee logarithmic worst-case performance. Thus, we
describe here our current method of maintaining the sweep-line status structure.
Our 2D sweep begins by computing the intersections of the active edges (in AEdge) with the
sweep-plane, caching them, and sorting them in x as we place them into the event priority queue,
which is implemented as a heap - 2DHeap. (Since a single slice is relatively small in size, we
go ahead in this case with a full sorting, for simplicity of implementation.) The sweep-line status
structure is implemented as a doubly linked list of Segment objects, which represent the sorted list
of segments intersecting the current sweep-line. When the sweep-line hits an active edge (i.e., hits
a point p in the slice, where an active edge intersects the slice), we process this event, making updates
to the sweep-line status structure and the 2DHeap as necessary. The overall sweep algorithm
proceeds as follows:
While 2DHeap is not empty do
(1) Remove from 2DHeap the active edge, (v_0, v_1), with the smallest key value (x-coordinate).
Let v_0 be the vertex that is above (in y) the current slice.
(2) For each cell C in the use set of v_0,
(a) If C is not in the use set of v_1, then we are done considering C (since (v_0, v_1) is not
an edge of C); otherwise, proceed to (b).
(b) For each of the other vertices of C (exactly two, in the case of tetrahedral cells),
determine if it forms an active edge (by querying the AEdge list) with one of v_0 or
v_1; if so, instantiate a Segment corresponding to that edge and (v_0, v_1). These
Segment objects are inserted, as explained below, in a doubly linked (sorted) list
that corresponds to the sweep-line status structure.
Step 2(b) above discovers the segments that are incident on the event point p, which is the intersection
of the active edge (v_0, v_1) with the sweep-plane.
The updates to the sweep-line status structure are done in a manner that exploits the topological
structure in the mesh (See Figure 5). In particular, when point p is encountered, if there are leftward
segments incident on p, then we identify them (using "top" and "bottom" pointers, described below),
and delete them from the doubly-linked list. At the same time, we insert the rightward segments
incident on p, after sorting them by angle (using only dot product computations) about p, using as
insertion point the position in the list where the leftward segments had been. In this way, we need to
do no searching in the sorted list of segments, except in the case that there were no leftward segments
incident on p (in which case, we do a naive linear-time search in the linked list). While we could do
these search and insertions more efficiently, in worst-case logarithmic time, we have found that the
overhead associated with the data structures does not pay off in practice. Further, in the vast majority
of cases, there is no linear search to do, since most event points have one or more leftward segments.
(Indeed, those event points having no leftward segments are "critical" in the sense described earlier,
as in Hertel and Mehlhorn [16].)
Specifically, we maintain, with each active edge, pointers to two additional Segment objects: a
top segment and a bottom segment, representing the topmost and bottommost, respectively, among
the leftward segments incident on the corresponding crossing point, p. These pointers are initialized
to NULL. We maintain these pointers each time a new segment is added (when we discover its left
endpoint), at which point we check its right endpoint and potentially update the top/bottom segment
pointer of its corresponding active edge. If the active edge corresponding to event point p has a non-NULL
top and bottom pointer, we know where to add the new segments (to the right of p), without
having to search the whole sweep-line status structure. (If the pointers are NULL, then we must do
a linear search to locate p in the linked list, since, in this case, p has no leftward segments.)
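The update at an event point can then be sketched as follows; list_remove_run, list_locate, and list_insert_after are hypothetical operations on the doubly linked sweep-line status, rightward[] is assumed to hold the new segments already sorted by angle about p, and only the top/bottom fields of the AEdge are shown.

/* Minimal view of AEdge for this sketch (see the fuller sketches above). */
typedef struct Segment Segment;
typedef struct { Segment *top, *bottom; } AEdge;
typedef struct SegList SegList;
Segment *list_remove_run(SegList *status, Segment *bottom, Segment *top);
Segment *list_locate(SegList *status, const AEdge *e);   /* linear search */
Segment *list_insert_after(SegList *status, Segment *pos, Segment *s);

void update_status_at(AEdge *e, SegList *status, Segment **rightward, int nright)
{
    Segment *pos;
    if (e->top != NULL)
        /* Delete the contiguous run of leftward segments and remember where. */
        pos = list_remove_run(status, e->bottom, e->top);
    else
        /* No leftward segments: p is "critical"; fall back to a linear search. */
        pos = list_locate(status, e);

    for (int i = 0; i < nright; i++)
        pos = list_insert_after(status, pos, rightward[i]);
}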
There are several advantages to our new approach (compared to the former binary-tree approach).
Notice that now, we are only inserting edges where they share an endpoint. This allows for a much
simpler and more robust ordering function. In our implementation we use a 2D determinant method,
which requires 4 subtractions, two multiplications and one comparison in the general case, plus
two extra comparisons to handle degeneracies when determining the correct ordering between two
segments that share an endpoint. When performing the insertions into the sweep-line status, we still
have to be careful in handling degeneracies, like in [36], but the case analysis is much simpler.
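One way to realize such an ordering test is sketched below: two segments that share the event point p are compared through the directions of their other endpoints, q and r, in the (x, z) coordinates of the slice. The operation counts match those quoted above; degeneracy handling and the sign convention (which depends on the orientation of the slice axes) are left out of this illustration.

typedef struct { double x, z; } Point2;   /* in-slice coordinates */

/* Compares segments (p,q) and (p,r) by the sign of the 2x2 determinant of
 * (q - p, r - p): four subtractions, two multiplications, one comparison. */
static int seg_compare(Point2 p, Point2 q, Point2 r)
{
    double qx = q.x - p.x, qz = q.z - p.z;
    double rx = r.x - p.x, rz = r.z - p.z;
    return qx * rz < qz * rx;
}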
Figure
5: Illustration of the action of the 2D sweep. The solid "thick" edges represent the elements
of the Segment data structure currently in the sweep-line. The dashed elements have not been
touched by the sweep-line yet. When the sweep-line encounters event point p, we discover edge
(p, q), and therefore update the bottom segment of q, from (b, q) to (p, q). (The top segment of q,
(t, q), remains unchanged.)
Final Rendering Issues
There is an issue of handling degeneracies when event points happen to coincide with y-coordinates
of scanlines or with x-coordinates of pixels within a scanline. Thus, in our 3D sweep, we must be
careful to process all event vertices that have the same y-coordinate, before starting the processing
of the 2D slice. Similarly, when sweeping a slice, we only perform the rendering along a ray once
all event points that may have the same x-coordinate as the ray are processed.
Interpolation. Because the original scalar field is only provided at the original vertices, and during
rendering we need to be able to evaluate the field at any given point, some form of interpolation is
necessary. This is a non-trivial step in general, and considerable research has been devoted to this
topic. We refer the reader to [24, 25]. In our current implementation, for tetrahedral cells, our
approach is straightforward. To compute the value of the scalar field at the point, r, where a ray
crosses a segment (p, q) (in a 2D slice), we first use linear interpolation along each of the active edges
(in AEdge) that define p and q, to compute the values at p and q, and then do a third interpolation
along (p, q) to determine the value at r.
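This amounts to three one-dimensional linear interpolations, as in the sketch below; the parameter names are hypothetical (tp and tq locate the slice plane along the two active edges, tr locates the ray crossing along the segment).

static double lerp(double a, double b, double t) { return a + t * (b - a); }

/* Field values at the endpoints of the two active edges are (wp0, wp1) and
 * (wq0, wq1); returns the interpolated field value at the crossing point r. */
double field_at_crossing(double wp0, double wp1, double tp,
                         double wq0, double wq1, double tq, double tr)
{
    double wp = lerp(wp0, wp1, tp);   /* value at p on the first active edge  */
    double wq = lerp(wq0, wq1, tq);   /* value at q on the second active edge */
    return lerp(wp, wq, tr);          /* value at r along the segment (p, q)  */
}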
Lighting Model. Once the stabbing order of the cells along a ray has been computed, any single-
scattering lighting model can be applied. (See [21] for a survey.) We implemented the simple
lighting model proposed by Uselton [39], in which cell size is not taken into consideration. The
assumption is that each cell is as important as any other cell. We have been able to generate very
good pictures with this method, but it does tend to overemphasize portions of the volume having
particularly high cell density.
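For reference, a generic back-to-front compositing loop over the per-cell contributions along a ray is sketched below; this is only one simple way to realize a single-scattering model and is not claimed to reproduce the exact model of [39].

typedef struct { double r, g, b, a; } RGBA;   /* color and opacity of one cell */

/* cells[] holds per-cell colors/opacities in back-to-front order along a ray. */
RGBA composite_ray(const RGBA *cells, int ncells)
{
    RGBA acc = { 0.0, 0.0, 0.0, 0.0 };
    for (int i = 0; i < ncells; i++) {
        double t = 1.0 - cells[i].a;          /* transparency of the nearer cell */
        acc.r = cells[i].r * cells[i].a + acc.r * t;
        acc.g = cells[i].g * cells[i].a + acc.g * t;
        acc.b = cells[i].b * cells[i].a + acc.b * t;
        acc.a = cells[i].a + acc.a * t;
    }
    return acc;
}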
6 Experimental Results
Datasets
The code currently handles datasets composed of tetrahedral grids (possibly disconnected, with
nonconvex boundary). The input format is very similar to the GeomView "off" file format: It
simply has the number of vertices and tetrahedra, followed by a list of the vertices and a list of the
tetrahedra, each of which is specified using the vertex locations in the file as an index. This format
is compact, can handle general (disconnected) grids, and it is fairly simple (and fast) to recover
topological information. Maintaining explicit topological information in the input file would waste
too much space.
For our test runs we have used tetrahedralized versions of 4 different datasets, all originally in
NASA Plot3D format. For each dataset we broke each (hexahedral) cell into 5 tetrahedra. Information
about the datasets are summarized in Table 1. (See volume-rendered images in Figures 8-11.)
Besides these, we tested LSRC on several artificial datasets for debugging purposes; in particular,
we generated simple datasets that have disconnected components.
Name Dimensions # of Vertices # of Cells
Blunt Fin 40 × 32 × 32 40,960 187,395
Liquid Oxygen Post 38 × 76 × 38 109,744 513,375
Wing 56 × 54 × 70 211,680 1,005,675
Combustion Chamber 57 × 33 × 25 47,025 215,040
Table
1: A list of the datasets used for testing. "Dimensions" are the original NASA Plot3D sizes.
"# of Vertices" and "# of Cells" are the actual sizes used by LSRC during rendering.
Memory Requirements
LSRC is very memory efficient. (See Section 5 for details about the data structures.) Besides the
input dataset, the only other memory consumption is in the priority queues, and the AEdge and
Segment data structures, which are very small in practice. This low storage requirement is due to
our incremental computations, which only touch a cross section of the dataset at a time. See Table 2
for details about the overall memory consumption during the rendering of each dataset. These
numbers are independent of the screen size being rendered, although they do depend on the "view",
given that different cross sections of the datasets might lead to different memory usage patterns.
Data Structure Blunt Fin Liquid Oxygen Post Delta Wing Combustion Chamber
Dataset Size 7.8MB 21.3MB 41.8MB 9MB
AEdge 390KB 675KB 2.14MB 375KB
Segment 8KB 8KB 20KB 4KB
Table
2: Memory consumption during rendering. "Dataset Size" includes the memory necessary
to keep all the vertices (including their "use set") and tetrahedra. The AEdge row gives the space
used in storing the list of active edges cut by the current sweep-plane. The Segment row gives the
storage requirement for the sweep-line status, representing the stabbing order of the cells along each
ray.
Performance Analysis
Our primary system for measurements was a Silicon Graphics Power Challenge, equipped with
R10000 195MHz processors and 3GB of RAM. We only used one of the processors during our
experiments. All of the disk I/O numbers reflect reading off a local disk. We present rendering
figures for the tetrahedralized version of the datasets described in Table 1. (We expect our rendering
times to be considerably less, if we work directly with the hexahedral cells, without first tetrahe-
dralizing them; however, the current implementation assumes tetrahedral cells.) The LSRC code
was compiled with the native SGI compiler (for IRIX 6.2) and optimization level "-O3". All times
reported are in seconds and represent measured wall-clock times.
In
Table
3, we present the times to read and preprocess the datasets. Our input files are currently
ASCII, which requires some amount of parsing upon reading; thus, the "Reading" time is dominated
by this parsing time, not by disk access time. (The use of binary files would likely improve efficiency,
but using ASCII files simplifies the manual creation of test samples.)
Operation Blunt Fin Liquid Oxygen Post Delta Wing Combustion Chamber
Reading 3.86s 10.48s 20.69s 4.51s
Connectivity 3.47s 9.62s 18.98s 4.02s
Boundary vertices 6,760 13,840 20,736 7,810
Table
3: Times spent reading and preprocessing the data. "Reading" accounts for the time spent
reading and parsing the dataset off the disk. "Connectivity" represents the time spent recovering the
adjacency and boundary information. The "Boundary vertices" row gives the number of vertices we
classified as being on the boundary of the dataset.
Table
4 presents the rendering times for the different datasets. Each dataset has been rendered at
a different resolution, primarily because it would not make sense to present square images for all
of them, since their projections do not cover a square region. We also present the pixel coverage
(number of "Full Pixels") for each image. These rendering times are about 3-4 times faster than
the ones presented earlier in [36]. (In [36], only the Blunt Fin and Liquid Oxygen Post were used;
there it was reported that it took 70 seconds to render the Blunt Fin, while the new results reported
here obtain a time of 22 seconds, an improvement by a factor of 3.1; for the Post dataset, the
improvement has been from 145 seconds to 37 seconds, a factor of 3.9.)
Blunt Fin Liquid Oxygen Post Delta Wing Combustion Chamber
Image Size 530 × 230 300 × 300 300 × 300 300 × 200
Rendering Time 22s 37s 64s 19s
Full Pixels 83,264 70,503 48,009 33,947
Table
4: Rendering results for the four datasets.
We also tested how our algorithm scales with the image size: We rendered the Liquid Oxygen
Post in 3 different resolutions: 300 × 300 (70,503 full pixels), 600 × 600 (282,086 pixels), and
900 × 900 (634,709 pixels), and the rendering times were 37 seconds, 82 seconds and 136 seconds,
respectively. This indicates that the cost per pixel actually decreases as the image size increases.
This matches our intuition, because the larger the image, the less "useless sorting" we have to do
per scanline. That is, in the 2D sweep, we basically get all the sorting information in a continuum along
the scanline, but we only use that information at each pixel actually rendered. As the image size
gets larger, the 2D sweep has less "sorting" work to do per pixel rendered. For very large images, the
shading cost should dominate; at that point, the sorting becomes essentially "free", as it has constant
cost for a given dataset and view.
So far, we have shown that the new method is over 3 times as fast as the one presented in [36].
It is important to understand where the speedup was achieved. In order to be able to analyze the
differences, we will recalculate Figures 5 and 6 from [36] using our new method. We are using
the same dataset (e.g., the Blunt Fin), in order to make direct performance comparisons possible.
Figure
6 illustrates how the number of active edges varies in y during the 3D sweep. (This figure
corresponds to Figure 5 in [36].)
Figure
7 illustrates how the rendering time breaks down by task, as a function of the scanline
(again, for the Blunt Fin dataset so we can compare with the earlier results presented in [36]). Rendering
a scanline involves computing the intersection points, sorting them along the direction of the
scanline, and then performing a 1D sweep (or sort) along each ray incrementally (which basically involves
processing events), and finally shading (or integration). The two components presented
in
Figure
7 correspond to over 85% of the overall time spent in rendering. (The "Event Handling
Time" is approximately 50% of the time and "Integration Time" is about 30%).
The results in Figure 7 should be compared to Figure 6 in [36]. Our improvements to the 2D
sweep, as explained in the previous section, resulted in several changes. First, the processing of
each scanline is about 3 times as fast. Second, the event handling time is much lower (previously, it
accounted for over 80% of the rendering time). Because of the lowering of the cost of handling the
events, we can now clearly see the relative increase in the cost for the shading phase. (Before, the
event handling cost was so dominating that all of the other processing time was negligible and did
not appear clearly on the graph.)
The performance numbers indicate that:
(1) the time to process a given scanline is directly correlated to the number of active edges corresponding
to that slice;
(2) the cost per scanline varies depending on the complexity of the slice being rendered; and,
(3) the event handling time still dominates the total time spent per scanline.
Figure 6: The size of the AEdge list as a function of the scanline (y-coordinate). (Plot axes: Scanline Number versus Number of Active Edges.)
Figure 7: An illustration of the breakdown of the total rendering time per scanline. The "Total Time"
represents the actual time each scanline required for rendering. In order to avoid clutter in the plot,
only the two major components of the rendering time are shown: the "Event Handling Time" (which
is the time to process each active edge as it enters and exits the sweep-line status), and "Integration
Time" (which is the time necessary for the shading calculations).
In [36], the event handling time was clearly the bottleneck of the rendering speed. Now, it still
accounts for about 50% of the overall rendering time. Future improvements may be possible based
on re-use of sorting information between successive sweep-planes, or the use of some form of "jumping" over
complexity between pixels (as in the lockstep idea proposed before).
Performance Comparisons
The most recent report on an irregular grid ray caster is that of Ma [19], from October 1995. Ma is
using an Intel Paragon (with superscalar 50MHz Intel i860XPs). He reports rendering times for two
datasets - an artificially generated Cube dataset with 130,000 tetrahedra and a Flow dataset with
45,500 tetrahedra. He does not report times for single CPU runs; his experiments use two processing
nodes. For the Cube, he reports taking 2,415 seconds (2234 seconds for the ray casting - the rest is
parallel overhead) for a 480-by-480 image (approximately 230,000 pixels), for a total cost of 10.5
milliseconds per pixel. The cost per tetrahedron is 18.5 (17.18) milliseconds. For the Flow
dataset he reports 1,593 (1,585) seconds (same image size), for a cost of 6.9 (6.8) milliseconds
per pixel, and 35.01 (34.8) milliseconds per tetrahedron.
Giertsen [15] reports running times of 38 seconds for 3,681 cells (10.32 milliseconds per cell).
His dataset is too small (and too uniform) to allow direct and meaningful comparisons; however,
our implementation handles a cell complex that has over 100 times the number of cells he used, at a
fraction of the cost per cell.
Yagel et al. [47] report rendering the Blunt Fin, using an SGI with a Reality Engine 2 , in just over
9 seconds, using a total of 21MB of RAM, using 50 "slicing" planes; with 100 planes, they report
a rendering time of 13-17 seconds. (Their rendering time is dependent on the number of "slicing"
planes, which, of course, affects the accuracy of the picture generated.) For a 50-slice rendering of
the Liquid Oxygen Post, it takes just over 20 seconds, using about 57MB of RAM. For the
Delta Wing, it takes almost 43 seconds and uses 111.7MB of RAM.
In order to facilitate comparisons, Table 5 summarizes all the performance results with the available
data for each reported algorithm. Comparing these numbers with those in Table 4, we see that
LSRC is much faster than the other ray casting algorithms. Furthermore, it is comparable in performance
to Yagel et al.'s method for 100-slice rendering, but it uses less than half of the memory used
by their technique. By looking at the increase in rendering times as the datasets get larger, we see
that the larger the dataset the more advantageous it is to use LSRC over these other techniques.
7 Algorithm Extensions
In this section, we mention some of the possible extensions to this work:
Dataset # of Cells Ren. Time µs/Pixel µs/Cell Image Size Memory Algorithm
Blunt Fin 187,395 22s 180µs 117µs 530 × 230 8MB LSRC
Post 513,375 37s 411µs 72µs 300 × 300 22MB LSRC
Post 513,375 82s 227µs 159µs 600 × 600 22MB LSRC
Post 513,375 136s 167µs 264µs 900 × 900 22MB LSRC
Wing 1,005,675 64s 711µs 63µs 300 × 300 44MB LSRC
Chamber 215,040 19s 316µs 88µs 300 × 200 9MB LSRC
Blunt Fin 187,395 70s 664µs 373µs 527 × 200 8MB [36]
Post 513,375 145s 1,611µs 282µs 300 × 300 22MB [36]
Cube 130,000 2,415s 10,500µs 18,500µs 480 × 480 N/A Ma
Flow 45,500 1,593s 6,900µs 35,010µs 480 × 480 N/A Ma
Blunt Fin 187,395 9.11s N/A 48µs N/A 21MB Yagel
*Blunt Fin 187,395 13s-17s N/A 69-91µs N/A 21MB Yagel
Post 513,375 20.45s N/A 40µs N/A 57MB Yagel
Wing 1,005,675 42.97s N/A 42µs N/A 112MB Yagel
N/A 3,681 38s 144µs 10,320µs 512 × 512 2.7MB Giertsen
Table
5: Performance summary of several algorithms (indicated in the last column): "LSRC" are
for results for the lazy sweep ray casting algorithm proposed in this paper; "[36]" are for the results
we obtained in our previous work; "Yagel" are for results reported in [47]; "Ma" are for results
reported in [19]; and "Giertsen" are results reported in [15]. The table includes columns indicating
the datasets used, their sizes, and, when possible, the cost per pixel and per cell, and the memory
usage of each algorithm. (For Yagel et al., 50-plane rendering times are reported, with the exception
of the row marked with a "*", which represents the rendering times using 100 planes.)
(1) While our current implementation assumes tetrahedral cells, it is conceptually simple to extend
it to arbitrary cells. The method itself applies in general.
(2) It is straightforward to generalize our method to the case of multiple grids: We simply perform
the sweep independently in each of the several grids and do a merge sort of the results along
each ray, just before rendering.
(3) We are investigating now some possible methods to improve our algorithm, so that it exploits
more of the coherence between scanline slices. It is reasonable to expect us to be able to reuse
much of the slice information from one scanline to the next. In particular, the order of the
event points is nearly the same for two consecutive slices. An improvement here could
help to address the current bottleneck in the code.
An interesting possible extension of our work that we are now investigating is its application
in "out-of-core" cases, in which the dataset is too large to fit in main memory, and we must be
careful to control the number of paging operations to disk. The spatial locality of our memory
accesses indicates that we should be able to employ pre-fetching techniques to achieve fast
rendering when the irregular grids are much larger than memory.
Finally, our method is a natural candidate for parallelization. See Silva [35], Chapter 5, for
further discussion on parallelization issues.
Conclusions
In this paper we have proposed a fast new algorithm, termed the "Lazy Sweep Ray Casting" (LSRC)
algorithm, for rendering irregular grids based on a sweep-plane approach. Our method is similar to
other ray casting methods in that it does not need to transform the grid; instead, it uses (as do projection
methods) the adjacency information (when available) to determine ordering and to attempt to
optimize the rendering. An interesting feature of our algorithm is that its running time and memory
requirements are sensitive to the complexity of the rendering task. Furthermore, unlike the method
of Giertsen [15], we conduct the ray casting within each "slice" of the sweep-plane by a sweep-line
method whose accuracy does not depend on the uniformity of feature sizes in the slice. Our method
is able to handle the most general types of grids without the explicit transformation and sorting used
in other methods, thereby saving memory and computation time while performing an accurate ray
casting of the datasets. We established the practicality of our method through experimental results
based on our implementation. We have also discussed theoretical lower and upper bounds on the
complexity of ray casting in irregular grids.
We have reported timing results showing that our method compares favorably with other ray
casting schemes, and is, in many instances, two orders of magnitude faster than other published
ray casting results. Another advantage of our method is that it is very memory efficient, making it
suitable for use with very large datasets.
It is difficult to give a direct comparison of our method with hardware-based techniques (e.g.,
[47]), which can yield impressive speed-ups over purely software-based algorithms. On the other
hand, software-based solutions broaden the range of machines on which the code can run; e.g., much
of our code was developed on a small laptop, with only 16MB of RAM.
Acknowledgements
We are indebted to Arie Kaufman for extensive discussions and encouragement on this research,
as well as contributions to this paper; a precursor [36] to this paper was prepared jointly with A.
Kaufman. We also thank the Center for Visual Computing (A. Kaufman, Director), for use of the
computing resources in our experiments. We thank Dirk Bartz, Pat Crossno, George Davidson,
Juliana Freire, Dino Pavlakos, Ashish Tiwari and Brian Wylie for useful criticism and help in this
work. The Blunt Fin, the Liquid Oxygen Post, and the Delta Wing datasets are courtesy of NASA.
The Combustion Chamber dataset is from Vtk [32].
--R
"Computing
"On Range Searching with Semialgebraic Sets,"
"Applications of a New Partition Scheme,"
"Counting and Cutting Cycles of Lines and Rods in Space,"
Introduction to Algorithms.
Lecture Notes in Computer Science
"Efficient
"Computing and Verifying
"Primitives for the Manipulation of Three-Dimensional Subdi- visions,"
"An Acyclicity Theorem for Cell Complexes in d Dimensions,"
"Raycasting of Nonregularly Structured Volume Data,"
"On Visible Surface Generation by A Priori Tree Structures,"
"Raytracing Irregular Volume Data,"
"Volume Visualization of Sparse Irregular Meshes,"
"Fast Triangulation of the Plane with Respect to Simple Polygons,"
"Research Issues in Volume Visualization,"
"Display of Surfaces From Volume Data,"
"Parallel Volume Rendering for Unstructured-Grid Data on Distributed Memory Machines,"
"Splatting of Curvilinear Grids,"
"Optical Models for Direct Volume Rendering,"
"Area and Volume Coherence for Efficient Visualization of 3D Scalar Functions,"
"Query-Sensitive
"Scattered data modeling,"
"Visualizing and modeling scattered multivariate data,"
"Efficient Binary Space Partitions for Hidden-Surface Removal and Solid Modeling,"
"Ray Shooting on Triangles in 3-Space,"
"Parallel Voxelization Algorithms for Volume Rendering of Unstructured Grids,"
Computational Geometry: An Introduction.
"An Analysis of Approaches to Ray-Tracing Curvilinear Grids,"
The Design and Analysis of Spatial Data Structures.
The Visualization Toolkit.
"A Polygonal Approximation to Direct Scalar Volume Rendering,"
"Parallel Volume Rendering of Irregular Grids,"
"Fast Rendering of Irregular Grids,"
"Volume Probes: Interactive Data Exploration on Arbitrary Grids,"
"Sorting and Hardware Assisted Rendering for Volume Visualization,"
"Volume Rendering for Computational Fluid Dynamics: Initial Results,"
"Rapid Exploration of Curvilinear Grids Using Direct Volume Rendering,"
"Pursuing Interactive Visualization of Irregular Grids,"
"Direct Volume Rendering of Curvilinear Volumes,"
"A Coherent Projection Approach for Direct Volume Render- ing,"
"Hierarchical and Parallelizable Direct Volume Rendering for Irregular and Multiple Grids,"
"Visibility Ordering Meshed Polyhedra,"
"Volume Rendering Polyhedral Grids by Incremental Slicing,"
"Hardware Assisted Volume Rendering of Unstructured Grids by Incremental Slicing,"
--TR
--CTR
Yang , Tulika Mitra , Tzi-Cker Chiueh, On-the-Fly rendering of losslessly compressed irregular volume data, Proceedings of the conference on Visualization '00, p.101-108, October 2000, Salt Lake City, Utah, United States
Lichan Hong , Arie Kaufman, Accelerated ray-casting for curvilinear volumes, Proceedings of the conference on Visualization '98, p.247-253, October 18-23, 1998, Research Triangle Park, North Carolina, United States
Stefan Guthe , Stefan Roettger , Andreas Schieber , Wolfgang Strasser , Thomas Ertl, High-quality unstructured volume rendering on the PC platform, Proceedings of the ACM SIGGRAPH/EUROGRAPHICS conference on Graphics hardware, September 01-02, 2002, Saarbrucken, Germany
Bruno Lvy , Guillaume Caumon , Stphane Conreaux , Xavier Cavin, Circular incident edge lists: a data structure for rendering complex unstructured grids, Proceedings of the conference on Visualization '01, October 21-26, 2001, San Diego, California
Ricardo Farias , Joseph S. B. Mitchell , Cludio T. Silva, ZSWEEP: an efficient and exact projection algorithm for unstructured volume rendering, Proceedings of the 2000 IEEE symposium on Volume visualization, p.91-99, October 09-10, 2000, Salt Lake City, Utah, United States
Ricardo Farias , Cludio T. Silva, Out-Of-Core Rendering of Large, Unstructured Grids, IEEE Computer Graphics and Applications, v.21 n.4, p.42-50, July 2001
Lichan Hong , Arie E. Kaufman, Fast Projection-Based Ray-Casting Algorithm for Rendering Curvilinear Volumes, IEEE Transactions on Visualization and Computer Graphics, v.5 n.4, p.322-332, October 1999
Rdiger Westermann , Thomas Ertl, Efficiently using graphics hardware in volume rendering applications, Proceedings of the 25th annual conference on Computer graphics and interactive techniques, p.169-177, July 1998
Stefan Rttger , Martin Kraus , Thomas Ertl, Hardware-accelerated volume and isosurface rendering based on cell-projection, Proceedings of the conference on Visualization '00, p.109-116, October 2000, Salt Lake City, Utah, United States
J. Schroeder , Berk Geveci , Mathieu Malaterre, Compatible Triangulations of Spatial Decompositions, Proceedings of the conference on Visualization '04, p.211-218, October 10-15, 2004
Peter L. Williams , Nelson L. Max , Clifford M. Stein, A High Accuracy Volume Renderer for Unstructured Data, IEEE Transactions on Visualization and Computer Graphics, v.4 n.1, p.37-54, January 1998
T. Silva , Joseph S. B. Mitchell , Peter L. Williams, An exact interactive time visibility ordering algorithm for polyhedral cell complexes, Proceedings of the 1998 IEEE symposium on Volume visualization, p.87-94, October 19-20, 1998, Research Triangle Park, North Carolina, United States
Cevdet Aykanat , B. Barla Cambazoglu , Ferit Findik , Tahsin Kurc, Adaptive decomposition and remapping algorithms for object-space-parallel direct volume rendering of unstructured grids, Journal of Parallel and Distributed Computing, v.67 n.1, p.77-99, January, 2007
Yi-Jen Chiang , Ricardo Farias , Cludio T. Silva , Bin Wei, A unified infrastructure for parallel out-of-core isosurface extraction and volume rendering of unstructured grids, Proceedings of the IEEE 2001 symposium on parallel and large-data visualization and graphics, October 22-23, 2001, San Diego, California | ray tracing;computational geometry;scientific visualization;irregular grids;volumetric data;volume rendering;sweep algorithms |
614371 | Speeding Up Isosurface Extraction Using Interval Trees. | AbstractThe interval tree is an optimally efficient search structure proposed by Edelsbrunner [5] to retrieve intervals on the real line that contain a given query value. We propose the application of such a data structure to the fast location of cells intersected by an isosurface in a volume dataset. The resulting search method can be applied to both structured and unstructured volume datasets, and it can be applied incrementally to exploit coherence between isosurfaces. We also address issues about storage requirements, and operations other than the location of cells, whose impact is relevant in the whole isosurface extraction task.In the case of unstructured grids, the overhead, due to the search structure, is compatible with the storage cost of the dataset, and local coherence in the computation of isosurface patches is exploited through a hash table. In the case of a structured dataset, a new conceptual organization is adopted, called the chess-board approach, which exploits the regular structure of the dataset to reduce memory usage and to exploit local coherence. In both cases, efficiency in the computation of surface normals on the isosurface is obtained by a precomputation of the gradients at the vertices of the mesh.Experiments on different kinds of input show that the practical performance of the method reflects its theoretical optimality. | Introduction
A scalar volume dataset is a pair (V, W), where V = {v_i ∈ R^3, i = 1, ..., n} is a finite set
of points spanning a domain Ω ⊆ R^3, and W = {w_i ∈ R, i = 1, ..., n} is a corresponding
set of values of a scalar field f(x, y, z), sampled at the points of V, i.e., w_i = f(v_i). A
mesh Σ subdividing Ω into polyhedral cells having their vertices at the points of V is also
given (or computed from V, if the dataset is scattered): Σ can be made of hexahedra,
or tetrahedra, or it can be hybrid, i.e., made of tetrahedra, hexahedra, triangular prisms,
and pyramids.
Given an isovalue q ∈ R, the set S(q) = {p ∈ Ω | f(p) = q} is called an isosurface of the
field f at value q. For the purpose of data visualization, an isosurface S(q) is approximated
by a triangular mesh, defined piecewise on the cells of Σ: a cell σ_j ∈ Σ with vertices
v_{j_1}, ..., v_{j_p} is called active at q if min_i w_{j_i} ≤ q ≤ max_i w_{j_i}. An active cell contributes to the
approximated isosurface for a patch made of triangles: patches are obtained by joining
points on the active cells' edges that intersect the isosurface (active edges), by assuming
linear interpolation of the field along each edge of the mesh. Such intersection points are
called isosurface vertices. In order to use smooth shading to render the isosurface, the
surface normal at each surface vertex must also be estimated.
Therefore, the isosurface extraction problem consists of four main subproblems:
1. Cell selection: finding all active cells in the mesh Σ.
2. Cell classification: for each active cell, determining its active edges, and how corresponding
isosurface vertices must be connected to form triangles.
3. Vertex computation: for each active edge, computing the 3D coordinates of its surface
vertex by linear interpolation.
4. Surface normal computation: for each vertex of the isosurface, computing its corresponding
surface normal.
In terms of computation costs, the impact of cell selection on the whole isosurface
extraction process may be relevant, in spite of the simplicity of the operations involved at
each cell, because it requires searching the whole set of cells of Σ. Cell classification has a
negligible cost, because it is performed only on active cells, and it involves only comparisons
of values. Although vertex and normal computations are also performed only on active cells,
they have a relevant impact, because they involve floating point operations. Besides, such
operations can be redundant if the dataset is processed on a per-cell basis, because
each active edge is shared by different cells.
In order to speed up such tasks, it is worth using search structures and techniques
that make it possible to traverse as few non-active cells as possible during cell selection, and to
avoid redundant vertex and normal computations. Speedup techniques can be classified
according to the following criteria:
- search modality adopted in selecting active cells; there are three main approaches:
in space-based methods, the domain spanned by the dataset is searched for portions
intersected by the isosurface; in range-based methods, each cell is identified with the
interval it spans in the range of the scalar field, and the range space is searched
for intervals containing the isovalue; in surface-based methods, some facets of the
isosurface are detected first, and the isosurface is traversed starting at such faces and
moving through face/cell adjacencies;
- local coherence (or coherence between cells) refers to the ability of a method to avoid
redundancy in geometric computations, by reusing the results obtained for an active
face or edge at all its incident cells.
Since additional data structures may involve non-negligible storage requirements, it is
important to look for methods and structures that warrant a good tradeoff between time
efficiency and memory requirements. The overhead due to auxiliary structures must be
compared to the cost of storing a minimal amount of information necessary to support iso-surface
extraction, disregarding the computational complexity: this can be highly variable
depending on whether the dataset considered is structured or unstructured (i.e., its connectivity
is implicitly given, or it must be stored explicitly, respectively [17]). Therefore,
evaluation criteria for a speedup method must take into account: its range of applicabil-
ity, i.e., the types of dataset (structured, or unstructured, or both) for which the method
is suitable; its efficiency, i.e., the speedup it achieves with respect to a non-optimized
reference method; its overhead, i.e., the storage cost due to auxiliary structures.
On the basis of our preliminary work on unstructured data [3], in this paper we address
the application of speedup techniques in the various phases of isosurface extraction from
both structured and unstructured datasets. A highly efficient technique for cell selection
that is based on the interval tree [5] is adopted. In the unstructured case, this technique is
associated to the use of a hash table in order to exploit local coherence to avoid redundant
vertex computations. In the structured case, a new conceptual organization of the dataset,
called the chess-board approach, is adopted in order to reduce the memory requirements
of the interval tree, and to exploit local coherence intrinsic in the implicit structure of
the dataset. In both cases, we adopt a pre-computation of field gradients at data points
in order to speedup the computation of surface normals. Moreover, we describe how the
interval tree can be efficiently used to develop an incremental technique that exploits
coherence between isosurfaces.
Time and space analyses of our isosurface extraction technique are given, and compared
with those of other methods. Theoretical results are borne out by experimental results
that show the performance of the method on different test datasets.
II. Related Work
The simplest speedup methods for cell selection are based on space partitions, and they
are only suitable for structured data. Wilhelms and Van Gelder [19] use a branch-on-need
octree to purge sub-volumes while fitting isosurfaces, based on the range interval
spanned by each sub-volume. This method achieves a worst case time efficiency of O(k +
k log(n/k)) (where n is the total number of cells, and k is the number of active cells)
[11], with small overhead (the octree increases storage occupancy by only about 16%).
An alternative approach for structured data is also proposed by Criscione et al. [4], which
is based on a pyramid data structure. This approach has similar efficiency and overhead,
while it is easier to implement than the previous one. Space-based techniques cannot
be generalized easily to unstructured data, because spatial indexes rely on the regular
structure of the underlying dataset.
Range-based techniques apply to both structured and unstructured datasets, but they
are generally more suitable for unstructured datasets, because they cannot exploit the
implicit spatial information contained in structured datasets, and they have higher memory
requirements. In the unstructured case, there is no implicit spatial information to exploit,
and the higher storage cost for the input mesh greatly reduces the overhead factor of
auxiliary structures.
Gallagher [6] proposes a method based on a subdivision of the range domain into buckets,
and on a classification of intervals based on the buckets they intersect. The tradeoff
between efficiency and memory requirements is highly dependent on the resolution of the
bucketing structure. Giles and Haimes [7] report an approach in which two sorted lists
of intervals are constructed in a pre-processing phase by sorting the cells according to
their minimum and maximum values, respectively. This method addresses the specific
problem of global coherence (or coherence between isosurfaces), which aims at exploiting
part of the information derived from the extraction of a given isosurface to speed up the
selection of active cells for another isosurface, corresponding to a nearby isovalue. This
feature is useful in applications that change the isovalue continuously and smoothly, while
it gives small improvement over a non-optimized method in the generic extraction of
isosurfaces at arbitrary isovalues. In a more recent paper, Shen and Johnson [16] try
to overcome some limitations of [6] and [7] by adopting similar auxiliary structures to
address global coherence. However, a worst case computational complexity of O(n) has
been estimated for all three methods outlined above [11].
Fig. 1. The span space. Each interval I = [a, b] is represented as a point of coordinates (a, b). To detect
the intervals that contain the query value q, we have to find the points which lie to the left of the
vertical line min = q and above the horizontal line max = q.
Livnat et al. [11] introduce the span space (see Figure 1), which is a two-dimensional space
where each point corresponds to an interval in the range domain. The span space is very
useful to geometrically understand range-based methods, therefore we will refer to this
representation also in the next sections. A kd-tree is used to locate the active intervals
in this space, achieving an O(
in the worst case. The possibility
of exploiting global coherence is also outlined. In a more recent paper, Shen et al. [15]
propose the use of a uniform grid to locate the active intervals in the span space. Such an
approach is suitable to parallel implementation.
The data structure we adopt in this paper, i.e., the interval tree, was proposed by
Edelsbrunner [5] to support queries on a set of intervals. It is optimally efficient, i.e.,
it warrants a worst case time complexity of '(k log n), while its memory overhead is
comparable with those of the other range-based methods. It is worth mentioning that,
although our proposal is the first application of such a data structure to speedup isosurface
extraction, other authors have used it to address related problems: Laszlo [10] considers
the extraction of wireframes from a grid of generic polyhedra, by using an interval tree,
where each interval corresponds to an edge of the input grid; van Kreveld [18] extracts
isolines from triangulated terrain data, by associating each triangle with the interval of
altitudes it spans.
Surface-based approaches rely essentially on two requirements: the ability to find an active
cell (seed) for each connected component of the isosurface; and the ability to propagate
the surface by traversing the mesh from cell to cell through adjacencies [17]. Adjacencies
are implicit in structured datasets, while they need to be stored explicitly in unstructured
datasets. Storing adjacencies explicitly roughly doubles the memory requirement of the
dataset, hence making the overhead of surface-based methods in the unstructured case either
comparable to, or even higher than the overhead of range-based methods. Moreover,
further auxiliary structures are needed in order to find seeds.
Itoh et al. [9], [8] base the search of seeds on a graph, whose nodes are the cells holding
local minima or maxima data values: therefore, an arc of the graph spans an interval in
the range domain. Each arc supports a list of cells connecting its two end nodes. Given
an isovalue, the graph is searched for an active arc, and the cells connected to this arc
are sequentially scanned until a seed is found. A propagation method is activated on this
seed. Since the isosurface can be made of many connected components, seed search must
be repeated until all active arcs have been visited. This can take O(n) time in the worst
case [11]. A more efficient method to find active seeds is proposed by Bajaj et al. [1]: a
minimally sufficient seed set is found in a pre-processing phase, such that any connected
component of any arbitrary isosurface is guaranteed to traverse at least one cell in the seed
set. The seed set can be encoded in a range-based search structure, in order to efficiently
locate active seeds for a given isovalue: optimal time efficiency can be achieved by using
an interval tree. The seed set is very small on average, hence causing a small overhead,
but it can be as big as O(n) in the worst case (e.g., if the underlying field has a sinusoidal
shape). The algorithm for finding the seed set is complicated, and its time complexity is
high.
Local coherence is exploited only by some of the reviewed methods. Spatial indexes
adopted by space-based methods destroy the locality of computation, which supports
local coherence in the original Marching Cubes [12]. For this reason, following Wyvill et
al. [20], Wilhelms and Van Gelder [19] adopt a hash-based caching strategy to save and
reuse local computations. The tradeoff between efficiency and overhead depends also on
the size of the hash table: Wilhelms and Van Gelder statically define a size equal to
eight times the square root of the size of the dataset.
Surface-based methods can partially exploit local coherence during the propagation phase:
since each new cell is accessed through an adjacent cell, having a common face intersected
by the isosurface, it is easy to reuse vertices related to such a common face without
computing them again. This implies no further overhead with respect to the structure that
supports dataset encoding and traversal. However, this fact alone cannot warrant that no
redundant computations will be performed, unless explicit links from cells and edges of the
input mesh to the list of extracted cells are maintained, with a further relevant overhead.
III. Selecting Cells Through Interval Trees
The technique we propose for active cell selection is in the class of range-based methods,
and therefore it can be used both for structured and unstructured datasets. Let Σ be the
input mesh. Each cell σ_j ∈ Σ is associated with an interval I_j, whose extremes a_j and b_j
are the minimum and maximum field values at the vertices of σ_j, respectively. Since σ_j is
active for an isovalue q if and only if its corresponding interval I_j contains q, the following
general query problem is resolved:
"given a set I = {I_1, ..., I_m} of intervals of the form [a_i, b_i] on the real line, and
a query value q, report all intervals of I that contain q".
The problem is effectively visualized using the span space introduced by Livnat et al. [11]
(see Figure 1): each interval I_i is represented as a point in a 2D Cartesian space
using the extremes a_i and b_i as the x and y coordinates of the point, respectively. From
a geometrical point of view, the problem of reporting all intervals that contain the query
value q reduces to collecting the points in the span space lying in the intersection of the
two half-spaces min ≤ q and max ≥ q.
An optimally efficient solution for the query problem above can be obtained by organizing
the intervals of I into an interval tree, a data structure originally proposed by
Edelsbrunner [5] (see also [14]), which is reviewed in the following. For each i = 1, ..., m,
let a_i and b_i denote the extremes of I_i, and let us consider the sorted sequence of values
X = (x_1, ..., x_h) corresponding to distinct
extremes of intervals (i.e., each extreme a_i, b_i is equal to some x_j). The interval tree for
I consists of a balanced binary search tree T whose nodes correspond to values of X,
plus a structure of lists of intervals appended to non-leaf nodes of T. The interval tree is
defined recursively as follows. The root of T has a discriminant δ_r = x_⌈h/2⌉, and I is
partitioned into three subsets as follows:
- I_l = {I_i ∈ I | b_i < δ_r};
- I_r = {I_i ∈ I | a_i > δ_r};
- I_δr = {I_i ∈ I | a_i ≤ δ_r ≤ b_i}.
The intervals of I_δr
are arranged into two sorted lists AL and DR as follows:
- AL contains all elements of I_δr sorted in Ascending order according to their Left
extremes a_i;
- DR contains all elements of I_δr sorted in Descending order according to their Right
extremes b_i.
The left and the right subtrees are defined recursively by considering interval sets I_l and I_r,
and extreme sets X_l = (x_1, ..., x_⌈h/2⌉−1) and X_r = (x_⌈h/2⌉+1, ..., x_h), respectively. The interval tree can
be constructed in O(m log m) time by a direct implementation of its recursive definition.
The resulting structure is a balanced binary tree with h nodes, and a height of ⌈log h⌉,
plus a collection of lists of type AL and DR, each attached to a node of the tree, for a
total of 2m list elements.
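As an illustration of the definition above, the following sketch builds the tree directly from its
recursive description (Python code of our own; IntervalTreeNode and build_interval_tree are
not names from the paper, and no attempt is made here at the packed representation discussed later):

class IntervalTreeNode:
    def __init__(self, delta, AL, DR, left, right):
        self.delta = delta    # discriminant (a distinct interval extreme)
        self.AL = AL          # crossing intervals, ascending by left extreme
        self.DR = DR          # same intervals, descending by right extreme
        self.left = left      # subtree for intervals entirely below delta
        self.right = right    # subtree for intervals entirely above delta

def build_interval_tree(intervals):
    # intervals: list of (a, b, cell_id) tuples with a <= b
    extremes = sorted({x for a, b, _ in intervals for x in (a, b)})

    def build(ints, lo, hi):
        if not ints or lo > hi:
            return None
        mid = (lo + hi) // 2
        delta = extremes[mid]
        below = [iv for iv in ints if iv[1] < delta]            # I_l
        above = [iv for iv in ints if iv[0] > delta]            # I_r
        cross = [iv for iv in ints if iv[0] <= delta <= iv[1]]  # I_delta
        AL = sorted(cross, key=lambda iv: iv[0])
        DR = sorted(cross, key=lambda iv: iv[1], reverse=True)
        return IntervalTreeNode(delta, AL, DR,
                                build(below, lo, mid - 1),
                                build(above, mid + 1, hi))

    return build(list(intervals), 0, len(extremes) - 1)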
An example of a simple interval tree, built on a few intervals, is shown in Figure 2; a
representation of the same structure in the span space is given by the subdivision in
Figure 3 (solid lines). It is noteworthy that, by construction, the last level of the tree is
generally empty. The intervals of this level, if they exist, have to be null intervals (in our
case, such intervals are in fact associated with cells having the same values at all vertices).
Given a query value q, the tree T is visited recursively starting at its root (see also the
sketch after this list):
- if q < δ_r: list AL is scanned until an interval I_i is found such that a_i > q; all
scanned intervals are reported; the left subtree is visited recursively;
- if q > δ_r: list DR is scanned until an interval I_i is found such that b_i < q; all
scanned intervals are reported; the right subtree is visited recursively;
- if q = δ_r: the whole list AL is reported.
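A matching query routine, again only our own sketch (query_interval_tree is not a name used
in the paper), makes the three cases explicit in an iterative form:

def query_interval_tree(root, q):
    # Report all intervals stored in the tree that contain the query value q.
    reported = []
    node = root
    while node is not None:
        if q < node.delta:
            for iv in node.AL:            # ascending left extremes a_i
                if iv[0] > q:
                    break
                reported.append(iv)
            node = node.left
        elif q > node.delta:
            for iv in node.DR:            # descending right extremes b_i
                if iv[1] < q:
                    break
                reported.append(iv)
            node = node.right
        else:                             # q == delta: all crossing intervals contain q
            reported.extend(node.AL)
            node = None
    return reported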
Fig. 2. An example of an interval tree built over a simple dataset (13 cells). The white dots represent
nodes with empty AL and DR lists.
Fig. 3. A graphical representation of the interval tree of Figure 2 in the span space. By definition, the
intervals lying on subdivision lines belong to the upper level of the tree. The tree search for a value
q: sectors intersected by the horizontal line are visited top-down; sectors intersected by the vertical
line are visited left to right.
Fig. 4. In our implementation of the interval tree data structure, a generic node contains the discriminant
value (δ_r), the length of the AL and DR lists (l), and the starting position in the big AL and big
DR arrays; in the example, the values stored in the left-most non-empty node of the interval tree of
Figure 2 are reported.
The geometric interpretation of the search in the span space is also given in Figure 3.
The region containing the active intervals is the one to the left of and above the dotted lines through q.
Each sector of the space (node of the tree) which contains the horizontal dotted line is
visited top-down (scanning the AL list) until such a line is reached; each
sector containing the vertical dotted line is visited left to right (scanning the DR list)
until such a line is reached. Therefore, ⌈log h⌉ nodes of the tree are visited, and for each
node only the intervals reported in output, plus one, are visited. Hence, if k is the output
size, then the computational complexity of the search is O(k + log h). Since log h is the
minimum number of bits needed to discriminate between two different extreme values, no
query technique could have a computational complexity smaller than Ω(log h), hence the
computational complexity of querying with the interval tree is output-sensitive optimal.
It is interesting to note that the time complexity is independent of the total number m
of intervals, i.e., of the input size: indeed it only depends on the output size and on the
number of distinct extremes.
A. A General Data Structure
A general data structure for the interval tree can be devised by assuming that the set
of input intervals is stored independently from the search structure, while each interval in
the set can be accessed through a pointer. Therefore, for each element of AL and DR lists
we store only a pointer to its related interval. All such lists are packed into two arrays,
one for lists of type AL, and one for lists of type DR, which will be called the big AL,
and big DR arrays, respectively. Lists are packed in a consistent order (e.g., by following
a depth-first visit of the tree), in such a way that the AL and DR lists attached to a given
node of the tree start at the same location in the two big arrays, respectively. For each
node r of the tree (see Figure 4) we store the discriminant value δ_r, an index referring to
the starting element of its lists in the two arrays described above, and the length of such
lists (recall that both lists have the same length). Since the tree is binary balanced, it can
be also stored implicitly by using an array of nodes.
Therefore, if we assume a cost of one word for integers, pointers, and floating point
values, we have that the bare tree requires 3h words, while the lists require 2m
words, for a total of 3h + 2m. It should be taken into account that the cost of encoding the
bare tree is expected to be small, at least in our application. Indeed, although in general
we have h ≤ 2m, in practice the intervals of I can only have extremes at a predefined,
and relatively small, set of values: for instance, if data values are encoded with 16 bits, h is
at most 65,536, while m can be several millions.
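For concreteness, a flat encoding along these lines could be sketched as follows; this is only a
sketch with field and method names of our choosing (the paper specifies the per-node information,
not this exact layout):

from array import array

class PackedIntervalTree:
    # Flat encoding of the structure described above: 3h words for the bare tree
    # (discriminant, start index, list length per node) plus 2m words for the
    # packed AL and DR lists of cell references.
    def __init__(self):
        self.delta  = array('d')   # discriminant value of each node
        self.start  = array('l')   # first position of the node's lists in big_AL / big_DR
        self.length = array('l')   # common length of the node's AL and DR lists
        self.big_AL = array('l')   # cell indices, all AL lists packed depth-first
        self.big_DR = array('l')   # cell indices, all DR lists packed depth-first

    # the balanced tree itself is stored implicitly (heap-like addressing of the node arrays)
    @staticmethod
    def left_child(r):
        return 2 * r + 1

    @staticmethod
    def right_child(r):
        return 2 * r + 2

    def node_AL(self, r):
        s, l = self.start[r], self.length[r]
        return self.big_AL[s:s + l]

    def node_DR(self, r):
        s, l = self.start[r], self.length[r]
        return self.big_DR[s:s + l]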
As for all other range-based methods, the storage cost of the interval tree is a crucial
issue. In Section IV, we will separately address its application to unstructured and structured
datasets, and we will discuss how the storage cost can be optimized by
exploiting special characteristics of the two kinds of datasets.
B. Exploiting Global Coherence
The interval tree can be used as an effective structure also to address coherence between
isosurfaces: active cells for a given isovalue q 0 , sufficiently close to another isovalue q, can
be extracted efficiently by exploiting partial information from the set of cells active at
isovalue q. Following Livnat et al. [11], this problem can be visualized in the span space
as in Figure 5a: assuming that active cells at isovalue q are known, the list of active cells
at isovalue q' is obtained by eliminating all points lying in the right rectangular strip
(dashed), and by adding all points lying in the bottom rectangular strip (gridded).
In order to perform this task, active cells at q must be stored into an active list, which is
updated next to obtain the corresponding active list for isovalue q'. By using an interval
tree, the active list can be maintained in a compressed form, as a path on the tree,
namely the path that is traversed when extracting active cells for isovalue q through the
query algorithm described in Section III. The path starts from the root node, and has
a length of log h. For each node in the path we just need to maintain one flag (1 bit)
to discriminate whether the AL or the DR list was used, one index addressing the first
interval that was not active in such a list, and one flag (1 bit) to denote whether the next
node is the left or the right child of the current node. In the example of Figure 3, the
path is encoded as follows (by assuming that list locations are addressed starting at 0):
(DR,4,right),(AL,4,left),(DR,0,right), (AL,1,null). It is evident that with a real dataset,
the length of such a path is (on average) extremely smaller than the actual number of
active cells.
The algorithm for computing active cells at isovalue q' scans the tree path, and updates it
by either adjusting the index associated to each node, or recomputing the node completely.
The traversal algorithm is described in detail by the pseudo-code in Figure 6. The main
principle of the algorithm is the following. As long as both q and q' lie on the same side
of the discriminant of the current node, the same list is used and the same child will be visited,
while it is sufficient to adjust the interval index by moving it either backward or
forward, depending on whether q > q' or q < q', and on whether the AL or the DR list is used. In
the example of Figure 5b, this happens for nodes 1 and 2 in the path: in this case, all
intervals in the gridded part of the horizontal stripe are included simply by advancing the
index in the first triple from 4 to 8, while all intervals in the dashed part of the vertical
stripe are included simply by backtracking the index in the second triple from 4 to 1. As
soon as a branching node is found, i.e., a node such that its discriminant lies between q
and q', the search is continued independently of the rest of the active path at q. Indeed,
in this case, the other list for the current node must be used, while the rest of the path
Fig. 5. The active intervals at q' are obtained by taking the active intervals at q, subtracting those in the
dashed strip, and adding those in the gridded strip (a). Active list update: node 1 is updated by
moving the index forward, in order to include points in the gridded strip; node 2 is updated by moving
the index backward, in order to remove points in the dashed strip; tree traversal is repeated for nodes
3 and 4 (b).
will certainly not be active at q'. This happens at node 3 in the example (compare it with
Figure 3), where the DR list was traversed for q, while the AL list must be traversed for
q'. Note that after visiting such a node, the opposite branch of the tree (in the example
just the new node 4) must be visited.
In conclusion, we have that the update has a small overhead for encoding the list of
active intervals, while it involves only traversing the intervals that make the difference
between q and q', plus all the intervals appended to the branching node (in the example,
node 3). In the worst case (i.e., when q and q' lie on opposite sides of the discriminant of
the root node), this algorithm is totally equivalent to performing the query from scratch on
the interval tree. An average case analysis depends on the distribution of intervals, and it
involves evaluating the probability that a branching node is more or less deep in the path,
and the probability to have more or less intervals inside that node. Due to its complexity,
such an analysis is omitted.
begin
  r := root of T; read the first triple (L, i, c) of the path;
  while (q and q' are on the same side of δ_r) do
    if L = AL then
      if q' < q then
        while intervals not active at q' are found move i backward
      else
        while intervals active at q' are found move i forward
    else { L = DR }
      if q' < q then
        while intervals active at q' are found move i forward
      else
        while intervals not active at q' are found move i backward;
    r := child(r, c); read the next triple (L, i, c) of the path;
  { r is now the branching node }
  if r not empty then
    set flag L to the other list;
    set flag c to the other child;
    traverse list L to set the value of i;
  discard the rest of the path;
  traverse T starting at child(r, c);
Fig. 6. Pseudo-code of the algorithm for active list update
IV. Extraction of Isosurfaces from Structured and Unstructured Grids
As stated in Section I, the isosurface extraction problem is not limited to the selection
of active cells. Other important aspects (cell classification, vertex and normal computa-
tion) must be taken into account in order to ensure the efficiency of the whole extraction
process. Moreover, the memory overhead of the auxiliary data structure used for cell selection
has to be considered in order to get a good tradeoff between time efficiency and
memory requirements. While referring to the general method described in the previous
section, we stress these aspects in the next subsections, by distinguishing between unstructured
6datasets, whose cells can be tetrahedra, hexahedra, prisms, or pyramids, whose
connectivity must be encoded explicitly, and structured datasets (i.e., cartesian, regular,
rectilinear, curvilinear, and block structured grids), in which the connectivity among the
hexahedral cells is implicit [17].
A. The case of Unstructured Grids
In the case of unstructured datasets, the input mesh is encoded by an array of vertices,
where for each vertex we maintain its three coordinates, and its field value; and by a list
of cells, where for each cell we maintain its connectivity list, made of four, five, six, or
eight indices addressing its vertices in the vertex array, depending on whether the cell
is a tetrahedron, a pyramid, a prism, or a hexahedron, respectively. The indices in the
connectivity list of each cell are sorted in ascending order, according to the field value
of their corresponding vertices, in such a way that the minimum and maximum of the
interval spanned by each cell will be given by the field values of the first and last vertex
in the connectivity list, respectively. For a hybrid dataset, the list of cells can be encoded
by using up to four different arrays, one for each type of cells. However, the list can be
addressed as a single array, by assuming a conventional order (e.g., tetrahedra come first,
next pyramids, next prisms, and last hexahedra), and by using the length of each list as
an offset.
Given a dataset composed of n points, t tetrahedra, p pyramids, s prisms, and k hexahedra,
we have a storage cost of 4n + 4t + 5p + 6s + 8k words for the whole dataset. Recall
that m = t + p + s + k is the total number of cells, that 3h + 2m is the cost of the
interval tree, and that h ≤ n. Therefore, we have a memory overhead for the interval tree
variable between 25% and 50%, depending on the number of cells of the different types,
25% being obtained for a dataset made only of hexahedra, and 50% for a dataset made
only of tetrahedra: the extreme values are not significant, however, since in the first case the
dataset would probably be a structured one, while in the second case further optimization
can be adopted, as discussed next.
If the input mesh is a tetrahedralization, the cost of storing the input mesh is 4n + 4m.
Since all cells are of the same type, we can sort the whole array of tetrahedra according
to the order of their corresponding intervals in the big AL array, described in Section
III-A. In this case, we can avoid storing the big AL array explicitly, since it comes for free
from the list of tetrahedra; we only need to maintain the big DR array, with
a total cost of 3h + m, hence less than 25% overhead.
After active cells have been extracted, cell classification consists in testing the values of
the cell's vertices with respect to the user-selected threshold, in order to determine the topology
of the isosurface patch inside the active cell. Cell classification is generally not a critical
task in the isosurface extraction process. However, in the case of tetrahedral meshes,
this step can be slightly improved by exploiting the fact that the vertices in the
connectivity list of each cell are stored in ascending order of field value [16]. This implies that
cell classification can be performed with at most two tests by using bisection.
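The idea can be sketched as follows (a hypothetical helper of ours, not code from the paper):

def classify_active_tetrahedron(v, q):
    # v: the four field values of an active tetrahedron, already sorted in
    # ascending order, with v[0] <= q <= v[3].  Returns how many values lie
    # below q (1, 2 or 3), which selects the patch configuration with at
    # most two comparisons, as stated in the text.
    if q < v[1]:
        return 1
    return 2 if q < v[2] else 3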
A more critical task is the computation of vertices and normals. Due to the computational
cost of this task, it is important to exploit the local coherence in order to avoid
redundant computations. In the case of unstructured datasets, we adopt a dynamic hash
indexing technique. For each isosurface extraction a hash table is built, and it is used
to store and retrieve efficiently isosurface vertices and normals. In our implementation,
the extracted surfaces are represented by adopting an indexed representation: an array of
isosurface vertices is maintained, storing coordinates and normal vectors for each vertex,
and an array of isosurface faces, each storing a connectivity list of three indices to the
vertex array. Each isosurface vertex is identified by the active edge of the cell where
it lies by using the two data points indexes v 1 and v 2 to build the hash key:
where n prim is a sufficiently large prime number. The computational overhead due to the
computation of hash indexes is therefore small. When processing an edge during vertex
computation, the hash table is inquired to know whether such computation has been
done before and, if so, to retrieve the index of the interpolated vertex and normal in the
corresponding array. Isosurface vertices and normals are computed explicitly, and stored
only if the hash search fails. In this way each interpolation is done exactly once.
A common problem in the use of hashing is the definition of a suitable size for the hash
table. In our case, all vertex and normal computations are performed after cell selection
is completed, hence the hash table can be sized up dynamically by using the number k of
active cells. Other approaches based on hashing define the hash table size statically [19];
using a hash table much larger than the current number of active cells may result in a
degradation of efficiency due to more frequent cache misses. The number of intersected
cells gives us a good estimate of the number of vertices in the resulting surface, and
therefore of the number of entries in the hash table. Given k active cells, the number of
vertices produced is lower than 3k in the case of hexahedral cells (redundancy factor 4)
and 4/3 k for tetrahedral cells (redundancy factor not less than 3). The effective redundancy
measured in our experiments on tetrahedral meshes (the ratio between the number of
accesses to the hash table and the number of vertices interpolated) is approximately equal
to 5. In our implementation, we obtained a low rate of hash collisions by setting the hash
table size equal to a prime slightly larger than 2.5k. We detected a collision approximately
every 100 accesses, and the maximum number of subsequent collisions detected in different
extractions was in the range 3-7. Collisions are managed by adopting a linear scan strategy.
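A minimal sketch of such a dynamically sized vertex cache is given below; the hash function and
the class and method names are our own assumptions, since the paper only states that the key is
built from the two data-point indexes and a sufficiently large prime:

def next_prime(n):
    def is_prime(x):
        return x > 1 and all(x % d for d in range(2, int(x ** 0.5) + 1))
    while not is_prime(n):
        n += 1
    return n

class EdgeVertexCache:
    # Open-addressed table mapping an active edge (pair of data-point indexes)
    # to the isosurface vertex already interpolated on it.  Sized from the number
    # of active cells k, as suggested in the text (a prime slightly larger than
    # 2.5k, adequate for tetrahedral meshes).
    def __init__(self, active_cells):
        self.size = next_prime(int(2.5 * active_cells) + 1)
        self.keys = [None] * self.size
        self.vals = [None] * self.size

    def _slot(self, key):
        # Hypothetical hash function (the paper's exact key formula is not
        # reproduced here); collisions are resolved by a linear scan, as in the paper.
        h = (key[0] * 73856093 ^ key[1] * 19349663) % self.size
        while self.keys[h] is not None and self.keys[h] != key:
            h = (h + 1) % self.size
        return h

    def lookup_or_insert(self, v1, v2, interpolate):
        key = (v1, v2) if v1 < v2 else (v2, v1)   # edges are undirected
        h = self._slot(key)
        if self.keys[h] is None:                  # first visit: interpolate exactly once
            self.keys[h] = key
            self.vals[h] = interpolate(*key)
        return self.vals[h]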
In order to speed up the computation of surface normals during isosurface extraction,
we compute as a preprocessing step all field gradients at the mesh vertices. The
surface normal at an isosurface vertex v can then be simply obtained by linear interpolation
from the normalized gradients at the endpoints of the cell edge where v lies. In the case
of tetrahedral meshes, the gradient of the scalar field within each cell σ of the mesh is
assumed constant, i.e., it is the gradient of the linear function interpolating the field at the
four vertices of σ. Similar interpolating functions can be adopted in order to estimate the
gradient within a single cell of the other types. Then, the gradient at each mesh vertex
v is computed as the weighted average of the normalized gradients of all cells incident at v,
where the weight for the contribution of a cell σ is given by the solid angle of σ at v. Note
that this optimization on surface normal computation involves a further 3n storage cost,
due to the need of maintaining gradients for all data points. The corresponding overhead
is highly dependent on the ratio between the number of points n and the number of cells
m. For a tetrahedral mesh, we have on average m ≈ 6n and, therefore, the overhead
would be less than 12.5%.
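A sketch of this preprocessing step for a tetrahedral mesh is shown below (our own code, not the
authors'; the solid angle is computed with the standard Van Oosterom-Strackee formula, which the
paper does not spell out):

import numpy as np

def cell_gradient(P, w):
    # Gradient of the linear function interpolating values w at the four vertices
    # P (a 4x3 array) of a tetrahedron: solve g . (P_i - P_0) = w_i - w_0.
    return np.linalg.solve(P[1:] - P[0], w[1:] - w[0])

def solid_angle(P, i):
    # Solid angle of the tetrahedron P at its i-th vertex (Van Oosterom-Strackee formula).
    a, b, c = (P[j] - P[i] for j in range(4) if j != i)
    la, lb, lc = np.linalg.norm(a), np.linalg.norm(b), np.linalg.norm(c)
    num = abs(np.dot(a, np.cross(b, c)))
    den = la * lb * lc + np.dot(a, b) * lc + np.dot(a, c) * lb + np.dot(b, c) * la
    return 2.0 * np.arctan2(num, den)

def vertex_gradients(points, values, tets):
    # Angle-weighted average of the normalized per-cell gradients at each data point.
    points = np.asarray(points, dtype=float)
    values = np.asarray(values, dtype=float)
    grads = np.zeros_like(points)
    for tet in tets:
        P, w = points[tet], values[tet]
        g = cell_gradient(P, w)
        norm = np.linalg.norm(g)
        if norm > 0.0:
            g = g / norm                     # normalized cell gradient, as in the text
        for i, v in enumerate(tet):
            grads[v] += solid_angle(P, i) * g
    lengths = np.linalg.norm(grads, axis=1, keepdims=True)
    return grads / np.maximum(lengths, 1e-12)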
Fig. 7. The chess-board arrangement: in the case of regular grids, the data structure used to speedup
the isosurface extraction does not need to store the min-max intervals of all the cells of the volume.
Because each internal edge belongs to four cells, only the intervals corresponding to the black cells
(as in a 3D chess-board) have to be maintained.
B. The case of Structured Grids
In the case of structured datasets, i.e. grids based on a hexahedral decomposition in
which the connectivity information is implicit, we propose the use of a new conceptual
organization of the dataset which both reduces the number of intervals to be stored in
the interval tree, and permits devising a dataset traversal strategy that can efficiently
exploit the local coherence. The resulting technique is in practice a compromise between a
space-based approach and a range-based approach, which tries to exploit the advantages
of both. Though our proposal applies to every structured dataset, we will refer to regular
ones in our discussion for the sake of simplicity.
The number of intervals stored can be reduced on the basis of a simple but effective
observation: in a Marching Cubes-like algorithm, the vertices of the triangular patches that
form the extracted isosurface lie on the edges of the cells and, in a regular dataset, each
internal edge is shared by four cells. Therefore, in order to be sure that every isosurface
parcel will be detected, we just need to store the intervals of a minimal subset of cells that
hold all the edges of the dataset.
Fig. 8. Isosurface extraction is propagated from each active black cell to the adjacent white cells which
share one or more active edges.
Such a subset can be devised easily if we think of the volume dataset as a 3D chess-board
in which the black cells (Figure 7) are those we are interested in. In other words, if
c[i, j, k] is a black cell, then its adjacent black cells are those which share a single vertex
with c[i, j, k]. This conceptual arrangement presents some advantages:
- given a regular I × J × K dataset (i.e., a volume of (I-1) × (J-1) × (K-1) cells),
the black cells can be easily indexed by simple parity rules on i, j, and k, with separate
terms making it possible to compute the indices for the even
and odd layers of cells (see also the sketch after this list);
- the number of black cells in the dataset is 1/4 of the total number of cells, hence
the number of intervals to be stored in the interval tree data structure is 1/4 of the
total. This not only implies lower memory occupancy but also shorter construction
and traversal times for the auxiliary data structure.
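The enumeration can be sketched as follows; note that the precise indexing formula is not legible
in our source, so the parity rule below is an assumption consistent with the properties listed above:

def is_black(i, j, k):
    # Assumed chess-board rule: a cell is black when i, j and k all have the same
    # parity, which gives the 1/4 ratio and the single-shared-vertex property.
    return (i % 2) == (j % 2) == (k % 2)

def black_cells(I, J, K):
    # Enumerate the black cells of the (I-1) x (J-1) x (K-1) cell grid, layer by layer.
    for k in range(K - 1):
        p = k % 2                        # parity of the current layer
        for j in range(p, J - 1, 2):
            for i in range(p, I - 1, 2):
                yield (i, j, k)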
Each black cell has (at most) a constant number of edge-connected white cells. For each active black cell,
the adjacent white cells that are also active (because of isosurface intersections occurring
at the edges of the black cell) are determined easily on the basis of the configuration of
the current black cell (Figure 8). Conversely, if a white cell is active, there must exist
at least one black cell adjacent to it that is also active (special cases of white cells lying
on the boundary of the dataset are discussed later). Therefore, once all active black cells
have been located efficiently with an interval tree, all active white cells can be located by
searching, in constant time, the neighbourhood of each active black cell.
The chess-board reduces the number of intervals to be stored, but it does not help
with the local coherence: this can be managed by maintaining in a compact and easy-to-
access data structure the information already computed for vertices and normals of the
isosurface. Such an auxiliary structure would require a relevant memory overhead, unless
we maintain a sort of locality of the computations. This simple observation gives the key
to a compromise between a space-based and a range-based approach: we need to visit the
black cells not only on the basis of the intervals arrangement, but also taking into account
the topology of the grid.
In order to achieve this objective, we build an interval tree for each layer of cells (i.e.,
the cells formed by two adjacent slices of data), rather than building a single interval tree
for the whole dataset. The interval tree for each layer stores the min-max intervals of
the black cells in that layer. Each tree is then labelled with the Tmin-Tmax interval,
where Tmin [T max] represents the minimum [maximum] of the min [max] values in the
corresponding layer. Therefore, we have a forest of interval trees, and, for each tree, we
can know in constant time whether the tree contains active cells or not.
If the interval trees in the forest are visited according to the sequence of layers in the
dataset, then, during the traversal of the k-th tree, we only need to maintain a compact
auxiliary data structure (called a Vertex&Normal data structure) for the active cells of the
three layers indexed by k - 1, k, and k + 1. The Vertex&Normal data structure stores
the information (vertices, normals, visited cells, etc.) that is being computed at each active
cell, and it avoids redundant geometrical computations. Advancing to the (k + 1)-th
interval tree simply implies a circular shift of the indices of the layers in the Vertex&Normal
data structure. The extraction strategy and the exploitation of the local coherence (i.e.,
the runtime part of the method) can now be summarized as follows:
- Interval tree selection: given an isovalue q, the trees in the forest are tested in sequence
in order to identify the active trees, i.e., the trees for which Tmin ≤ q ≤ Tmax.
Each active interval tree, say the k-th, is visited using the algorithm presented in
Section III;
- Black cell processing: for each active black cell, the Marching Cubes [12] algorithm is
applied: on the basis of the configuration of the cell (determined with respect to q) we
access the Marching Cubes lookup table, and we find the active edges of the current
cell. By exploiting the Vertex&Normal data structure, we compute (and save) only
the vertices and the normals not already computed in the processing of an adjacent
white cell. On the basis of the configuration of the cell, we also select the adjacent
active white cells where the isosurface extraction must be propagated. For example, if
a vertex of the isosurface has been found on the edge E of the black cell c[i, j, k] of the
example in Figure 8, then the edge-connected white cells sharing E will be examined;
- Active white cells processing: once a black cell has been processed, the algorithm
examines the connected active white cells that have not been processed yet. For each
of them, the Marching Cubes algorithm is applied as in the previous case. White cells
already examined are identified by means of simple flags in the Vertex&Normal
data structure. Note that a propagation list for the white cells is not necessary,
because we identify all the active white cells starting from one of the adjacent
black cells;
- Advancing: the algorithm iterates on the next, (k + 1)-th, interval tree (if it is active)
by a simple circular shift of the layers in the Vertex&Normal data structure: the information
for the (k - 1)-th layer is no longer necessary, and it is therefore overwritten by
the information on the (k + 2)-th layer (a sketch of this per-layer loop is given after this list).
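The per-layer loop can be summarized by the following sketch (our own code; layer_trees,
query_interval_tree, and process_black_cell are stand-ins for the structures and steps described
above, not names from the paper):

def extract_isosurface(layer_trees, q, process_black_cell):
    # layer_trees[k]: interval tree of the black cells of layer k, labelled with
    # the layer's (tmin, tmax) range.  process_black_cell is assumed to fit the
    # patch in a black cell and propagate to its active white neighbours,
    # reusing vertices and normals already stored in the cache.
    cache = {}                              # Vertex&Normal data for a moving window of layers
    triangles = []
    for k, tree in enumerate(layer_trees):
        cache.pop(k - 2, None)              # layer k-2 is no longer needed (circular shift)
        if tree.tmin <= q <= tree.tmax:     # skip trees whose range does not contain q
            for cell in query_interval_tree(tree.root, q):
                triangles += process_black_cell(cell, q, cache)
    return triangles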
A further remark is necessary for the white cells that lie on the boundary of the dataset.
As shown in Figure 7, some boundary edges of the dataset are not captured by black cells
(e.g., the external edges of cell labeled A in the figure). However, if all sizes of the dataset
are even, no further information is needed for such edges: it is easy to see that if an
isosurface cuts one or more edges of a white cell that do not belong to any black cell, the
isosurface must also cut some edge of the same cell internal to the volume, hence shared
by a black cell.
Fig. 9. Some of the cells of a dataset with two odd sizes are not covered by the chess-board. Small parts
of the two isosurfaces could be lost.
In case one or more of the sizes I, J, and K of the dataset are odd numbers,
part of the edges of at most 2(I + J + K) cells (i.e., cells forming six
of the twelve corners of the volume) are not captured (not even indirectly) by the black
cells of the chess-board (see Figure 9). As shown in the figure, in these situations small
isosurface subsections can be lost. To solve this problem we can add the following step to
our algorithm:
- Unreachable cells test: once an active tree has been visited and the corresponding
active cells processed, the algorithm examines the (still not processed) white cells of
the current layer whose edges are not captured by black cells.
An alternative solution to the previous step could be the insertion of the unreachable white
cells (i.e., of their min-max intervals) into the corresponding interval tree. However, the small
number of cells to be tested separately does not justify the effort.
With the chess-board approach, the total asymptotic time for a query is, in the worst
case, O(n^(1/3) log n + k), where k
is the output size, by assuming a dataset with n^(1/3) layers
(i.e., I ≈ J ≈ K ≈ n^(1/3)). Note that using a forest rather than a single interval tree adds an
extra factor of n^(1/3) to the optimal query time. Therefore, in this case we trade optimal
asymptotic time for space. However, it should be noted that the n^(1/3) log n factor is usually
negligible in practice, while the advantage that derives from exploiting local coherence is
relevant.
TABLE I
Data on the interval trees for the test datasets (times are CPU seconds). Columns: dataset
name, grid type, number of grid nodes (n), number of intervals (m), interval tree depth,
number of interval tree nodes, and creation time. For the structured datasets, the nodes
column indicates the sum of the nodes of all the 2D interval trees, and the depth column
indicates the depth of the deepest tree.
Fighter   Unstr   13,832   70,125   15   12,538   1.50
Bluntfin  Unstr   40,960   224,874
CTHead
As stated for the case of unstructured datasets, the complexity in space of the interval
tree data structures can be expressed as 3h + 2m, with h the number of distinct
interval endpoints and m the number of intervals to be stored. For a regular dataset with
n data values, we have to store the intervals corresponding to the black cells, i.e., about n/4
intervals. Since in real applications we usually have h ≪ n, the requirement of about n/2
storage locations is very close to one half of the dataset size.
The ratio between the interval tree memory requirements and the dataset occupancy
obviously becomes even more favourable in the case of non-regular structured datasets (e.g., the
curvilinear ones).
Therefore, the chess-board approach helps in solving the problem of the memory occupancy
of the interval tree data structure together with the problem of the local coherence.
V. Experimental Results
Our proposals, based on the interval tree data structure, were tested on a number of
different datasets. We report here the results for two unstructured datasets:
Fighter, an unstructured dataset built on the Langley Fighter, reporting a wind tunnel
model simulation performed at NASA Langley Research Center. The dataset was
represented by adopting a 3D Delaunay triangulation;
Bluntfin, originally defined as a curvilinear dataset, has been represented here by
adopting a tetrahedral decomposition; Bluntfin represents the air flow over a flat plate
with a blunt fin rising from the plate. Courtesy of NASA Ames Research Center;
TABLE II
Isosurface extraction times on tetrahedrized datasets, in milliseconds.
Threshold   Facets   IT On   IT Off
Nasa Fighter - 70125 tetrahedral cells
2.6534        386       3     142
2.4420       1754      13     154
2.2007       5545      41     185
2.0221       9735      78     220
Bluntfin - 224874 tetrahedral cells
4.8722        444       3     255
2.1305      10384      72     304
and for two structured ones:
Bucky, a 128 × 128 × 128 regular dataset representing the electron density map of a C60
fullerene molecule. Courtesy of AVS International Centre;
CTHead, a 256 × 256 × 113 CAT scan of a head. Courtesy of the University of North
Carolina at Chapel Hill.
The results refer to:
- the use of the interval tree data structure, the hash indexing technique, and the pre-computation
of the gradients of the field at the vertices of the cells in the case of
unstructured datasets, IT On, compared with a standard Marching Tetrahedra implementation,
IT Off (see Table II);
- the use of the forest of interval trees and the chess-board approach in the case of
structured datasets, IT On, compared with a standard Marching Cubes implementation,
IT Off (see Table III).
TABLE III
Isosurface extraction times on structured (regular) datasets, in milliseconds.
Threshold   Facets    IT On   IT Off
96.5       200,148    1,110    3,090
CTHead - 1,820,728 cells
The results have been obtained on an SGI Indigo2 (200MHz R4400 CPU, 16K
instruction and 16K data caches, 1MB secondary cache, 32MB RAM, IRIX 5.3).
Table I reports numeric values on the datasets used and on the associated interval trees:
the type of the grid (Unstr for unstructured or Str for structured); the number m of
intervals, which is equal to the number of tetrahedral cells for the unstructured datasets
and to the number of black cells for the structured ones; the interval tree depth, which
represents the depth of the deepest tree in the case of structured datasets; the number h
of nodes of the interval tree(s) (in the case of structured grids, this field represents the
sum of the nodes of all of the trees); the time (in seconds) required to build the interval
tree data structures.
Figures 10 and 11 refer to the unstructured grids and show the isosurface fitting times
as a function of the number of triangular facets extracted. The reference algorithm is the
Marching Tetrahedra with the use of a hash indexing technique and the pre-computation
of the gradients of the field in the vertices of the grid (see Subsection IV-A).
Similarly, figures 12 and 13 refer to the regular grids. In this case the reference algorithm
is the Marching Cubes with the use of the chess-board approach (see Subsection IV-B).
Figure 14 shows the speedup obtained with the use of the interval tree (the ratio of
the times obtained with and without the use of the interval tree data structures) as a
function of the fraction of volume data that contains the isosurface (ratio between the
number of fitted facets and the number of cells); obviously, the greatest speedups are obtained
on isosurfaces of small size, i.e., when the traversal phase dominates the whole extraction
process.
The space complexity can be simply calculated for the unstructured datasets using
the expression previously defined, 3h + 2m words. The space required to
store the interval tree data structures is therefore 174K memory words for the Fighter
dataset and 522K memory words for the Bluntfin.
The structured datasets Bucky and CTHead required 1,080K and 3,628K memory words,
respectively.
VI. CONCLUSIONS
We have presented and tested a speedup method for isosurface extraction based on the
use of the interval tree data structure. The method considerably improves the performance
of the traversal phase with respect to the standard Marching Tetrahedra and Marching
Cubes algorithms. Optimal output-sensitive time complexity in extracting active cells is
achieved. The memory overhead, according to the general interval tree representation proposed,
is 3h + 2m words, with h the number of distinct extremes of intervals and m the
number of cells.
With reference to the alternative proposals in the literature, it should be noted that other
range-based methods have comparable (or higher) overheads, and worse computational
efficiency. On the other hand, the methods based on the interval modality present either
memory costs which are comparable to the cost of the interval tree [7] or lower expected
performance [11]. Surface-oriented methods give in general a more compact representation
of the list of seed intervals, but in the case of unstructured datasets they require encoding
adjacency information, which involves a 100% overhead over the minimal volume dataset
representation.
To reduce space occupancy, which becomes a critical factor in the case of high resolution
datasets, we proposed two different strategies, oriented to the two different data classes.
An optimized interval tree representation for unstructured datasets (tetrahedra-based) was
presented; it allows space occupancy to be reduced to 3h + m words, therefore enabling less than
25% overhead.
A partial representation of the cell intervals, based on the chess-board approach, was devised
to reduce the number of intervals stored in the interval trees in the case of structured
datasets. All of the active cells not represented directly are here detected by propagation.
Although the reduced number of intervals encoded, the speedups obtained were very similar
to those obtained with the naive interval tree implementation which encodes all of the
intervals, as demonstrated empirically in the graph of Figure 14. It is noteworthy that our
chess-board approach could also be efficiently used together with alternative space-based
speedup approaches based on octrees and pyramids, which exploit the implicit addressing
of regular datasets.
The other phases of the surface fitting process (cell classification, non redundant interpolation
of vertices and normals) were also tackled. In particular, local coherence control
is supported: in the case of unstructured datasets, with the use of dynamically-sized hash
tables; in the case of structured datasets, representing intervals with a forest of interval
trees and using a slice-based order in processing the data, which allows the definition of a
compact auxiliary data structure for maintaining interpolated vertices and normals.
Moreover, a general approach for addressing global coherence has been proposed, which
updates the active interval list produced in the previous isosurface extraction and which, in
the worst case, is equivalent to performing the search from scratch.
ACKNOWLEDGEMENTS
This work was partially financed by the Progetto Coordinato "Modelli multirisoluzione
per la visualizzazione di campi scalari multidimensionali" of the Italian National Research
Council (CNR).
Fig. 10. Timing for the Fighter dataset (isosurface fitting time vs. number of isosurface facets, IT On / IT Off).
Fig. 11. Timing for the Bluntfin dataset (isosurface fitting time vs. number of isosurface facets, IT On / IT Off).
Fig. 12. Timing for the Bucky dataset (isosurface fitting time vs. number of isosurface facets, IT On / IT Off).
Fig. 13. Timing for the CTHead dataset (isosurface fitting time vs. number of isosurface facets, IT On / IT Off).
Fig. 14. Speedup vs. volume fraction (triangles / cells) for the Fighter, Bluntfin, and CTHead datasets.
--R
"Fast Isocontouring for Improved Interactivity"
"Multiresolution Modeling and Visualization of Volume Data"
"Optimal isosurface extraction from irregular volume data"
"DiscMC: an interactive system for fast fitting isosurfaces on volume data"
"Dynamic data structures for orthogonal intersection queries"
"Span filter: an optimization scheme for volume visualization of large finite element models"
"Advanced interactive visualization for CFD"
"Volume Thinning for Automatic Isosurface Propagation"
"Automatic Isosurface Propagation using an Extrema Graph and Sorted Boundary Cell Lists"
"Fast generation and display of iso-surfaces wireframe"
"A near optimal isosurface extraction algorithm for structured and unstructured grids"
"Marching cubes: a high resolution 3D surface construction algorithm"
"Discretized Marching Cubes"
Computational Geometry: an Introduction
"Isosurfacing in Span Space with Utmost Efficiency (ISSUE)"
"Sweeping simplices: a fast iso-surface extraction algorithm for unstructured grids"
"Volume probes: interactive data exploration on arbitrary grids"
"Efficient methods for isoline extraction from a digital elevation model based on Triangulated Irregular Networks"
"Octrees for faster isosurface generation"
"Data structures for soft objects"
--TR
--CTR
Michael Burns , Janek Klawe , Szymon Rusinkiewicz , Adam Finkelstein , Doug DeCarlo, Line drawings from volume data, ACM Transactions on Graphics (TOG), v.24 n.3, July 2005
Jinzhu Gao , Han-Wei Shen, Parallel view-dependent isosurface extraction using multi-pass occlusion culling, Proceedings of the IEEE 2001 symposium on parallel and large-data visualization and graphics, October 22-23, 2001, San Diego, California
Caleb Lyness , Edwin Blake, Real time isosurface browsing, Proceedings of the 1st international conference on Computer graphics, virtual reality and visualisation, November 05-07, 2001, Camps Bay, Cape Town, South Africa
Bruno Lévy , Guillaume Caumon , Stéphane Conreaux , Xavier Cavin, Circular incident edge lists: a data structure for rendering complex unstructured grids, Proceedings of the conference on Visualization '01, October 21-26, 2001, San Diego, California
Benjamin Vrolijk , Charl P. Botha , Frits H. Post, Fast time-dependent isosurface extraction and rendering, Proceedings of the 20th spring conference on Computer graphics, April 22-24, 2004, Budmerice, Slovakia
Stefan Röttger , Martin Kraus , Thomas Ertl, Hardware-accelerated volume and isosurface rendering based on cell-projection, Proceedings of the conference on Visualization '00, p.109-116, October 2000, Salt Lake City, Utah, United States
Laurent Balmelli , Christopher J. Morris , Gabriel Taubin , Fausto Bernardini, Volume warping for adaptive isosurface extraction, Proceedings of the conference on Visualization '02, October 27-November 01, 2002, Boston, Massachusetts
Klaus Engel , Rüdiger Westermann , Thomas Ertl, Isosurface extraction techniques for Web-based volume visualization, Proceedings of the conference on Visualization '99: celebrating ten years, p.139-146, October 1999, San Francisco, California, United States
Reinhard , Charles Hansen , Steve Parker, Interactive ray tracing of time varying data, Proceedings of the Fourth Eurographics Workshop on Parallel Graphics and Visualization, September 09-10, 2002, Blaubeuren, Germany
Philip Sutton , Charles D. Hansen, Isosurface extraction in time-varying fields using a temporal branch-on-need tree (T-BON), Proceedings of the conference on Visualization '99: celebrating ten years, p.147-153, October 1999, San Francisco, California, United States
Chandrajit L. Bajaj , Valerio Pascucci , Daniel R. Schikore, The contour spectrum, Proceedings of the 8th conference on Visualization '97, p.167-ff., October 18-24, 1997, Phoenix, Arizona, United States
Udeepta D. Bordoloi , Han-Wei Shen, Space Efficient Fast Isosurface Extraction for Large Datasets, Proceedings of the 14th IEEE Visualization 2003 (VIS'03), p.27, October 22-24,
Bong-Soo Sohn , Chandrajit Bajaj , Vinay Siddavanahalli, Volumetric video compression for interactive playback, Computer Vision and Image Understanding, v.96 n.3, p.435-452, December 2004
Bong-Soo Sohn , Chandrajit Bajaj , Vinay Siddavanahalli, Feature based volumetric video compression for interactive playback, Proceedings of the 2002 IEEE symposium on Volume visualization and graphics, October 28-29, 2002, Boston, Massachusetts
C. L. Bajaj , V. Pascucci , D. Thompson , X. Y. Zhang, Parallel accelerated isocontouring for out-of-core visualization, Proceedings of the 1999 IEEE symposium on Parallel visualization and graphics, p.97-104, October 25-26, 1999, San Francisco, California, United States
Steve Bryson , David Kenwright , Michael Cox , David Ellsworth , Robert Haimes, Visually exploring gigabyte data sets in real time, Communications of the ACM, v.42 n.8, p.82-90, Aug. 1999
Bartosz von Rymon-Lipinski , Nils Hanssen , Thomas Jansen , Lutz Ritter , Erwin Keeve, Efficient Point-Based Isosurface Exploration Using the Span-Triangle, Proceedings of the conference on Visualization '04, p.441-448, October 10-15, 2004
Paolo Cignoni , Claudio Montani , Enrico Puppo , Roberto Scopigno, Multiresolution Representation and Visualization of Volume Data, IEEE Transactions on Visualization and Computer Graphics, v.3 n.4, p.352-369, October 1997
Jim Cox , D. B. Karron , Nazma Ferdous, Topological Zone Organization of Scalar Volume Data, Journal of Mathematical Imaging and Vision, v.18 n.2, p.95-117, March
Han-Wei Shen, Isosurface extraction in time-varying fields using a temporal hierarchical index tree, Proceedings of the conference on Visualization '98, p.159-166, October 18-23, 1998, Research Triangle Park, North Carolina, United States
Yi-Jen Chiang, Out-of-Core Isosurface Extraction of Time-Varying Fields over Irregular Grids, Proceedings of the 14th IEEE Visualization 2003 (VIS'03), p.29, October 22-24, 2003
Naveen Kumar Polapally , Raghu Machiraju , Dhabhaleshwar Panda, Feature estimation for efficient streaming, Proceedings of the 2002 IEEE symposium on Volume visualization and graphics, October 28-29, 2002, Boston, Massachusetts
Yi-Jen Chiang , Ricardo Farias , Cláudio T. Silva , Bin Wei, A unified infrastructure for parallel out-of-core isosurface extraction and volume rendering of unstructured grids, Proceedings of the IEEE 2001 symposium on parallel and large-data visualization and graphics, October 22-23, 2001, San Diego, California
Kai Li , Han Chen , Yuqun Chen , Douglas W. Clark , Perry Cook , Stefanos Damianakis , Georg Essl , Adam Finkelstein , Thomas Funkhouser , Timothy Housel , Allison Klein , Zhiyan Liu , Emil Praun , Rudrajit Samanta , Ben Shedd , Jaswinder Pal Singh , George Tzanetakis , Jiannan Zheng, Building and Using A Scalable Display Wall System, IEEE Computer Graphics and Applications, v.20 n.4, p.29-37, July 2000
Takayuki Itoh , Yasushi Yamaguchi , Koji Koyamada, Fast Isosurface Generation Using the Volume Thinning Algorithm, IEEE Transactions on Visualization and Computer Graphics, v.7 n.1, p.32-46, January 2001
Adriano Lopes , Ken Brodlie, Improving the Robustness and Accuracy of the Marching Cubes Algorithm for Isosurfacing, IEEE Transactions on Visualization and Computer Graphics, v.9 n.1, p.16-29, January
Philip M. Sutton , Charles D. Hansen, Accelerated Isosurface Extraction in Time-Varying Fields, IEEE Transactions on Visualization and Computer Graphics, v.6 n.2, p.98-107, April 2000
Lutz Kettner , Jarek Rossignac , Jack Snoeyink, The safari interface for visualizing time-dependent volume data using iso-surfaces and contour spectra, Computational Geometry: Theory and Applications, v.25 n.1-2, p.97-116, May
Ingo Wald , Heiko Friedrich , Gerd Marmitt , Philipp Slusallek , Hans-Peter Seidel, Faster Isosurface Ray Tracing Using Implicit KD-Trees, IEEE Transactions on Visualization and Computer Graphics, v.11 n.5, p.562-572, September 2005
George J. Grevera , Jayaram K. Udupa , Dewey Odhner, An Order of Magnitude Faster Isosurface Rendering in Software on a PC than Using Dedicated, General Purpose Rendering Hardware, IEEE Transactions on Visualization and Computer Graphics, v.6 n.4, p.335-345, October 2000 | marching cubes;isosurface extraction;volume visualization;interval tree |
614386 | Multiresolution Representation and Visualization of Volume Data. | A system to represent and visualize scalar volume data at multiple resolution is presented. The system is built on a multiresolution model based on tetrahedral meshes with scattered vertices that can be obtained from any initial dataset. The model is built off-line through data simplification techniques, and stored in a compact data structure that supports fast on-line access. The system supports interactive visualization of a representation at an arbitrary level of resolution through isosurface and projective methods. The user can interactively adapt the quality of visualization to requirements of a specific application task and to the performance of a specific hardware platform. Representations at different resolutions can be used together to further enhance interaction and performance through progressive and multiresolution rendering. | Introduction
Volume datasets used in current applications have different characteristics, but a common problem: a size that is often too large to be handled at interactive rates. The problem is even more important with curvilinear and irregular datasets, where the mesh topology must be stored explicitly for
visualization purposes [40]. Therefore, in some cases interactive image generation from very large datasets
may not be feasible, even with the use of fast graphic hardware and parallelism.
In recent years, some efforts have been devoted in the literature towards improving performance of rendering
algorithms, but few proposals are based on data simplification, which, on the other hand, has
produced successful results in managing surface data complexity (e.g. free-form and topographic surfaces
representation).
In this paper, we describe our experience in designing and developing a volume visualization system that
can handle data at different resolutions, and that is based on a data simplification approach.
A. Related work
In the literature, dataset complexity has been carefully taken into account to reduce expected visualization
times. Performance has been improved through different methods: ad hoc data organizations make it possible to speed up the operations that visit the dataset during rendering [11], [23], [41], [6]; simplification of the
rendering process can be achieved either by approximation techniques [43], [40], [45], or by reducing the
size of the graphic output [37], [27], [10], [19].
From a different perspective, it is also possible to manage data complexity by adopting an approximated
representation of the dataset. Such an approach is more general because, given a suitable strategy to
reduce the size of the dataset, it remains totally independent of the rendering system. The methodology
in this case is therefore to work on data simplification rather than on graphics output simplification.
A naive subsampling from a regular dataset has several drawbacks: there is no control over the accuracy of the simplified mesh; the technique is not adaptive, i.e., the density of data cannot vary over different
regions of the domain; and it is not easily extensible to datasets that are not regular. In fact, an irregular
distribution of samples makes the construction of a simplified dataset a non-trivial problem in general.
Adaptive methods have been developed in 2D for the simplification of irregular meshes representing free-form
and topographic surfaces: effective solutions have been obtained through incremental techniques,
based on either refinement or simplification (see, e.g., [10], [14], [17], [21], [24], [37]). Some of such
techniques can be extended to the 3D case to simplify volume data [5], [18], [34].
The iterative application of a simplification technique with different approximation parameters produces
a collection of representations at different accuracies. A data structure that holds a constant (and usually
small) number of different representations of the dataset, at different levels of accuracy, is called a level of
detail (LoD) representation. LoD representations of surfaces and solid objects are widely used in a number
of leading edge applications (e.g., virtual reality based on VRML). An evolution of a LoD representation
is a multiresolution representation, which supports the compact storage of a number m (usually large)
of representations at different levels of detail, where m is a monotone function of the size of the input
dataset (i.e., the more data, the more representations).
Multiresolution or LoD can greatly improve the efficiency of data rendering, e.g., through suitable progressive
visualization algorithms. The multiresolution approach improves over the LoD one with valuable
characteristics. For instance, the user or the application has much more flexibility in selecting the "best" level of detail, depending on the specific needs in terms of accuracy, memory, and time performance: in many cases, it is better to leave that choice to run time, instead of forcing it in the preprocessing, when simplification occurs. Many approaches have been proposed recently for the multiresolution management of surfaces (see, e.g., [33] for a survey), while multiresolution volume data management is still at an early stage of development.
An approach to the representation of regular volume datasets based on the use of a hierarchical recursive
partition (an octree-like scheme) has been proposed in [42]. Each node is obtained by recursive subdivision:
it holds a basis function to reconstruct the field, as well as a measure of both error and importance factors,
which are used for selective traversal of the tree. The method cannot be extended to irregularly distributed
data. Using such a structure as a LoD representation, by considering each tree level as a separate layer,
is equivalent to use subsampling. A multiresolution representation is also possible, by selecting nodes
at different levels, but the field may result discontinuous across different levels, thus causing unpleasant
effects (e.g., aliasing in direct volume rendering, and cracks in isosurfaces).
In a previous paper [5], we proposed a LoD representation based on tetrahedral decomposition: independent
simplified representations of a volume dataset at different levels of approximation were built by
a refinement technique. Such a work can be considered preliminary to that presented in this paper, and
it is extended here in several aspects.
Finally, some approaches for the hierarchical representation of regular tetrahedral decompositions have recently been proposed [15], [29], [47].
Wavelet theory plays an important role in the multiresolution analysis of signals, and approaches based
on wavelets have been proposed also to manage volume data [16], [28], [39]. The approach to data
simplification based on wavelets is much different from the geometric approach we follow. Data are
considered as samples from a signal that is decomposed into wavelets [26]: the coefficients of the wavelet
decomposition represent the dataset at full resolution, while approximated (LoD-style) representations
may be used in rendering by considering only subsets of the coefficients. The wavelet decomposition may
also be used in a multiresolution manner by using higher resolution coefficients in limited locations of the
3D space only. Times for wavelet-based rendering are generally higher than those of standard cell/voxel-
based techniques and, moreover, generality is limited because the wavelet approach has been applied to
regular datasets only.
B. Summary
The paper consists essentially of two parts. In the first part (Sections II-IV) we show how a multiresolution
model for volume data based on tetrahedral meshes can be built and stored. In the second part (Sections V-VI) we describe a volume visualization system built on top of such a model, and we present
experimental results.
Our approach to multiresolution is based on data simplification, which is described in Section II: an
approximated representation of volume data at reduced resolution is given by a tetrahedral mesh, having
smaller size with respect to an initial mesh defined on the whole dataset. Data values are approximated
by a linear function over each tetrahedron. Tetrahedral meshes are used because of their adaptivity (local
refinement) and for the simplicity of linear interpolation.
In Section III, two methods for building approximated meshes are described: a top-down method that
refines a coarse initial mesh by iteratively inserting vertices, and a bottom-up method that simplifies an
initial mesh at the highest resolution by iteratively discarding vertices. The top-down method extends
a previous result that we presented for convex datasets in [5], to handle also curvilinear (possibly non-
convex) datasets. The bottom-up method extends simplification methods in 2D [19], [37], and it can be
applied also to irregular non-convex datasets.
Since both methods are based on iterative local modifications of a mesh, each of them produces a fine-grained
sequence of meshes at increasingly finer (respectively, coarser) resolution. In other words, a high
number of different tetrahedral meshes at different resolutions are obtained on the basis of a moderate
number of tetrahedra, namely all tetrahedra that appear during successive updates. Such tetrahedra can
be stored in a compact representation of a multiresolution model, described in Section IV, which supports
fast on-line extraction of a mesh at arbitrary resolution.
In Section V we describe the multiresolution visualization system TAn (Tetrahedra Analyser), whose
prototype is available in the public domain. Besides supporting the off-line construction of the multiresolution
model, TAn has direct on-line access to the model itself: it allows the user to interactively
Fig. 1. A visualization of the terminology used, in a two-dimensional example.
select the resolution of representation, and the transfer function; it supports multiple isosurface fitting,
direct volume rendering through projection, and approximated hybrid rendering; moreover, it supports
interactive manipulation of huge volume data through progressive rendering, which is obtained by using
representations at different resolutions from the multiresolution model.
Experimental results on the construction of the multiresolution model, on multiresolution visualization,
and on the use of TAn are reported in Section VI.
In Section VII, concluding remarks are drawn, and current and future work on this subject is summarized.
II. Volume data approximation
A scalar volume dataset is given by the values of a scalar field \phi taken at a finite set of sample points V in IR^3. A volume \Omega \subset IR^3 spanned by the points of V is called the domain of the dataset: \Omega is usually a polyhedron; it can be either convex or non-convex, possibly with cavities. In most cases, a three-dimensional mesh \Gamma is also given, which covers the domain \Omega and has its vertices at the points of V. Based on \Gamma, the scalar field \phi is estimated over \Omega by a function f that interpolates all data values at the points of V, and is defined piecewise on the cells of \Gamma. The terminology introduced is visually represented in Figure 1, where we present, for the sake of simplicity, a 2D example: in this case, \Omega is a square region, \Gamma is a triangulation, V is the set of vertices of \Gamma, the graph of \phi is a surface in 3D, and the graph of f is a corresponding triangulated approximation.
A. Volume data classification
Volume data can be classified through the characteristic structure of the underlying grid.
- In regular datasets, sample points are distributed regularly in 3D space: \Omega is a block (parallelepiped) and \Gamma is a regular hexahedral mesh.
- In curvilinear datasets, sample points lie on a regular grid in a computational space, while the grid is warped to become curvilinear in physical space: \Omega is a polyhedron (usually non-convex), and \Gamma has the connective topology of a hexahedral mesh, while its cells are irregular convex hexahedra.
- In irregular datasets, sample points are irregularly distributed in 3D space: \Omega can be either convex or non-convex, and \Gamma is usually a tetrahedral mesh, or a hybrid mesh made of tetrahedra and irregular hexahedra.
- In scattered datasets (sometimes also called unstructured), only the sample points of V are known, which are irregularly distributed in 3D space, while \Gamma must be reconstructed. In the simplest case, \Omega can be assumed coincident with the convex hull of V, and therefore \Gamma may be obtained as a tetrahedrization of the points of V. A more general non-convex situation may require specific reconstruction techniques that are beyond the scope of this work.
Hereafter, we will always assume that \Gamma is given, and we will use the following (non-standard) classification
of datasets, which is suitable to our purpose: convex (i.e., having a convex domain, disregarding any further
classification of data distribution and type of mesh); non-convex curvilinear; and non-convex irregular.
B. Tetrahedral meshes
A tetrahedral mesh is a collection of tetrahedra such that for any pair of tetrahedra either they are
disjoint, or they meet at a common vertex, or edge, or triangular face. This establishes topological
relationships, essentially incidences and adjacencies, among the vertices, edges, triangular faces, and
tetrahedra that form the mesh. As a convention, a tetrahedral mesh will usually be denoted by \Sigma, and a generic tetrahedron by \sigma.
Given a set of points V , a tetrahedral mesh \Sigma having its vertices at the points of V and covering the convex
hull of V is called a tetrahedrization of V . Many different tetrahedrizations of V exist. In particular, the
Delaunay tetrahedrization has the property that the circumsphere of each tetrahedron does not contain any
point of V in its interior. The Delaunay tetrahedrization has some nice properties ("fat" cells, acyclicity
in depth sort [13]), which make it a suitable mesh in the applications [44].
Given a polyhedron \Omega, a tetrahedral mesh \Sigma covering it is also called a tetrahedrization of \Omega. If \Omega is non-convex, a tetrahedrization of \Omega having vertices only at the vertices of \Omega does not necessarily exist. Moreover, deciding whether such a tetrahedrization exists or not is NP-complete [35]. This suggests that the non-convex case is more difficult to handle, and it justifies the application of heuristics.
Given a tetrahedral mesh \Sigma with data values at its vertices, it is easy to interpolate such data by using
a linear function within each tetrahedron. Therefore, piecewise-linear interpolation is most commonly
used on tetrahedral meshes. Higher order interpolation would be necessary to achieve smoothness across
different tetrahedra, but this involves high numerical effort which makes it hardly applicable to volume
data. Discontinuities of the field represented by a tetrahedral mesh may be modeled by assigning different
values to the same vertex for different tetrahedra incident into it.
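As a concrete illustration of this piecewise-linear interpolation, the following sketch evaluates the interpolant inside a single tetrahedron through barycentric coordinates (Python with NumPy; the function names are ours for illustration and are not taken from the system described later):

import numpy as np

def barycentric_coords(p, tet):
    # Barycentric coordinates of point p w.r.t. a tetrahedron given as four 3D vertices.
    v0, v1, v2, v3 = (np.asarray(v, dtype=float) for v in tet)
    T = np.column_stack((v1 - v0, v2 - v0, v3 - v0))          # 3x3 edge matrix
    lam = np.linalg.solve(T, np.asarray(p, dtype=float) - v0)
    return np.array([1.0 - lam.sum(), lam[0], lam[1], lam[2]])

def linear_interpolate(p, tet, values):
    # Value of the piecewise-linear interpolant at p; `values` holds the field at the 4 vertices.
    return float(np.dot(barycentric_coords(p, tet), values))

# Example: interpolate at the point (0.25, 0.25, 0.25) of a reference tetrahedron.
tet = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(linear_interpolate((0.25, 0.25, 0.25), tet, [1.0, 2.0, 0.5, 3.0]))   # prints 1.625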
C. Approximated meshes
Let V be a volume dataset, and let \Gamma be a given mesh over V, covering a domain \Omega, and having all points of V as vertices. The pair (V, \Gamma) is called a reference model for the volume dataset. An approximated model of such volume data is given by a pair (V', \Sigma), with \Sigma a tetrahedral mesh having vertices at the subset V' of V, and covering a domain ~\Omega that approximates \Omega; a linear function is given for each tetrahedron of \Sigma. The accuracy of approximation is given by the difference between the reference model and the approximated model, which depends essentially on two factors:
- the warping of the domain, i.e., the difference between \Omega and its approximation ~\Omega;
- the error made in approximating values at the points of V through the piecewise-linear function defined on \Sigma.
For convex datasets, we assume that the vertices of the convex hull of V are always kept in the approximated model, so that there is no warping, because convex datasets usually have a small number of vertices on their convex hull (e.g., the domain of a regular dataset is defined by six vertices).
For non-convex curvilinear datasets, we consider a parallelepiped \Omega_c, called the computational domain, and a regular hexahedral mesh \Gamma_c covering \Omega_c and isomorphic to \Gamma. This is always possible because \Gamma is a deformed hexahedral mesh. The one-to-one correspondence (isomorphism) between the vertices of \Gamma_c and \Gamma will be called a lifting from computational to physical domain (see Figure 2a). Since \Sigma has vertices at a subset of the vertices of \Gamma, we can use lifting to back-project \Sigma into a corresponding tetrahedral mesh \Sigma_c in computational domain (see Figure 2b). Meshes \Gamma_c and \Sigma_c both cover \Omega_c, provided that \Sigma_c has at least the eight corners of \Omega_c as vertices. Therefore, each vertex v_c of \Gamma_c is contained in some tetrahedron \sigma_c of \Sigma_c. We express the position of v_c in barycentric coordinates with respect to \sigma_c, and we consider the point ~v in physical space having the same barycentric coordinates with respect to the tetrahedron \sigma, image of \sigma_c through lifting. Point ~v is called the warped image of v (where v is the image of v_c through lifting). The warping at v is the distance between v and ~v (see Figure 2c). The maximum distance over all vertices of \Gamma whose back-projection lies inside \sigma_c estimates the warping of its lifted image \sigma; the maximum warping over all tetrahedra of \Sigma defines the warping of the whole approximated model.
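A minimal sketch of the warping computation at a single vertex, under the assumption that the containing tetrahedron \sigma_c in computational space has already been located (point location itself, e.g., through a bucketing structure, is omitted); all names are illustrative:

import numpy as np

def warped_image(v_c, tet_c, tet_phys):
    # Barycentric coordinates of v_c w.r.t. tet_c, applied to the lifted tetrahedron tet_phys.
    c0, c1, c2, c3 = (np.asarray(c, dtype=float) for c in tet_c)
    lam = np.linalg.solve(np.column_stack((c1 - c0, c2 - c0, c3 - c0)),
                          np.asarray(v_c, dtype=float) - c0)
    bary = np.array([1.0 - lam.sum(), lam[0], lam[1], lam[2]])
    return bary @ np.asarray(tet_phys, dtype=float)

def warping_at_vertex(v, v_c, tet_c, tet_phys):
    # W(v): distance between the lifted vertex v and its warped image ~v.
    return float(np.linalg.norm(np.asarray(v, dtype=float) - warped_image(v_c, tet_c, tet_phys)))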
For non-convex irregular datasets, we estimate the actual difference between the boundaries of \Omega and ~\Omega. Such a difference is measured by computing at each boundary vertex of \Gamma its minimum (Hausdorff) distance from the boundary of \Sigma (see Figure 3). The warping of a boundary face \sigma of \Sigma is the maximum among all distances corresponding to boundary vertices of \Gamma that are projected onto \sigma; the warping of \Sigma is the maximum among the warpings of its boundary faces. The error is measured similarly. In a convex dataset, the error at a datum v contained in a tetrahedron \sigma is given by the absolute value of the difference between the field value at v and the value of the linear function associated to \sigma computed at v.
For a non-convex curvilinear dataset, the error is measured by computing the same difference in the computational domain: this is equivalent to measuring the difference between the field at a datum v and the estimated value at its corresponding warped point ~v defined above. For non-convex irregular datasets there are two possible situations: if v is inside ~\Omega, then we compute the difference as in the convex case; if v lies outside ~\Omega, we first compute the projection v_p of v on the boundary of ~\Omega, and then we measure the difference between the field at v and the linear interpolation at v_p.
Fig. 2. Lifting and warping for curvilinear datasets (example in 2D): (a) lifting maps a regular mesh \Gamma_c into a curvilinear mesh \Gamma; (b) the triangular mesh \Sigma approximating \Gamma is back-projected in computational space into mesh \Sigma_c; (c) the warping at a point v is equal to the distance from v to the warped point ~v.
Fig. 3. For non-convex irregular datasets, we estimate the actual difference between the boundaries by computing at each
boundary vertex of \Gamma its minimum distance from the boundary of \Sigma.
In this case, v is said to be related to the tetrahedron \sigma having v_p on its boundary (see Figure 3). The error of a tetrahedron \sigma is the maximum among the errors of all vertices v_i such that: for the convex case, v_i lies inside \sigma; for the non-convex curvilinear case, the point corresponding to v_i in computational space lies inside \sigma_c; for the non-convex irregular case, v_i is either inside \sigma, or related to \sigma. The error of the mesh \Sigma is the maximum among all errors of its tetrahedra.
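Under the same conventions, the error of a tetrahedron and of a mesh can be evaluated as in the following sketch for the convex case; the per-tetrahedron lists of contained data points stand for the bucketing structure introduced in Section III, and the names are illustrative:

import numpy as np

def linear_field(tet, tet_values):
    # Coefficients (a, b, c, d) of the linear function a*x + b*y + c*z + d interpolating the vertex values.
    A = np.hstack((np.asarray(tet, dtype=float), np.ones((4, 1))))
    coeff = np.linalg.solve(A, np.asarray(tet_values, dtype=float))
    return lambda p: float(np.dot(np.append(np.asarray(p, dtype=float), 1.0), coeff))

def tetrahedron_error(tet, tet_values, inner_points, inner_values):
    # E(sigma): max |phi(v) - f(v)| over the data points contained in the tetrahedron.
    f = linear_field(tet, tet_values)
    return max((abs(val - f(p)) for p, val in zip(inner_points, inner_values)), default=0.0)

def mesh_error(cells):
    # E(Sigma): cells is an iterable of (tet, tet_values, inner_points, inner_values) tuples.
    return max((tetrahedron_error(*c) for c in cells), default=0.0)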
Hereafter, warping and error will be denoted by functions W() and E(), respectively, which can be evaluated at a point v, at a tetrahedron \sigma, or at a mesh \Sigma. Warping and error at data points can also be weighted by suitable functions that may vary over \Omega. Weights can be useful to obtain a space-based measure of accuracy. For example, let us assume that for application needs accuracy is relevant in the proximity of a selected point p. We can then select weights that decrease with the distance from p. Similarly, a range-based error can be used to require more accuracy where data assume a given value q: in this case, a weight for the error can be obtained by composing the value function \phi with a real univariate function decreasing with the distance from q.
III. Building an approximated model
Given a reference model (V, \Gamma), and a threshold pair \tau = (\delta, \epsilon), we face the problem of building an approximated model that represents the volume dataset with accuracy \tau, i.e., with a warping smaller than \delta and an error smaller than \epsilon. A key issue is that the size of \Sigma should be as small as possible. A result in 2D suggests that the problem of minimising the size of the mesh for a given accuracy is intractable (NP-hard); also, approximation algorithms that warrant a bound on the size of the solution with respect to the optimal one are hard to find, and hardly applicable in practice [2], [1]. Hence, heuristics can be adopted, which try to obtain a mesh of reduced size by following data simplification strategies.
There are two basic classes of strategies for simplifying a mesh:
- Refinement heuristics start from a mesh whose vertices are a very small subset of the vertices of \Gamma. The mesh is iteratively refined by inserting other vertices of \Gamma into it. Refinement continues until the accuracy of the mesh satisfies the required threshold. Selection strategies can be adopted to insert at each step the vertex that is most likely to improve the approximation.
- Decimation heuristics start from the reference model \Gamma and iteratively modify it by eliminating vertices. As many vertices as possible are discarded, while maintaining the required accuracy. Also in this case, points are selected at each iteration in order to cause the least possible increase in warping and error.
Although several heuristics have been proposed in 2D, experience shows that most of them are substantially equivalent in the quality of their results. Since the three-dimensional case is almost unexplored, extending the 2D techniques that seem most suitable to 3D is a reasonable approach.
In the following subsections, we present two simplification methods: the first method is based on refinement
and Delaunay tetrahedrization, and it can be applied to convex datasets, and to non-convex
curvilinear datasets; the second method is based on decimation, and it can be applied to any dataset,
provided that the reference mesh \Gamma is a tetrahedral mesh, but it is especially well suited to non-convex
irregular meshes.
A. A method based on refinement
A refinement method that we proposed in [5] for convex datasets is extended here to deal also with
non-convex curvilinear datasets. The basic idea comes from an early technique developed in the two-dimensional
case, and widely used for approximating natural terrains [14]. An on-line algorithm for
Delaunay tetrahedrization is used together with a selection criterion to refine an existing Delaunay mesh
by inserting one vertex at a time. In the case of curvilinear datasets, a Delaunay tetrahedrization is
computed in the computational domain, while its image through lifting gives the corresponding mesh in
the physical domain.
In both cases, the selection strategy at each iteration aims at splitting the tetrahedron that causes the maximum warping/error in the current approximation: this is obtained by selecting the datum v_max corresponding to the maximum warping/error as a new vertex. The description of the algorithm is
general, while specific aspects of either the convex or the curvilinear case are explained when necessary.
Given a dataset V, an initial mesh \Sigma is created first. If V is a convex dataset, then \Sigma is a tetrahedrization of the convex hull of V. If V is a non-convex curvilinear dataset, then a tetrahedrization \Sigma_c of the computational domain \Omega_c is considered: since \Omega_c is a block, \Sigma_c has only the eight corners of \Omega_c as vertices, and it subdivides \Omega_c into five tetrahedra; \Sigma is obtained by lifting \Sigma_c into the physical domain. Given a threshold \tau for the accuracy, the following refinement procedure is applied:
procedure REFINEMENT(V, \Sigma, \tau);
while not (\Sigma satisfies \tau) do
v_max := SELECT POINT(V, \Sigma);
ADD VERTEX(\Sigma, v_max);
return (\Sigma)
This refinement procedure always converges since the number of points in V is finite, and total accuracy
is warranted when all of them are inserted as vertices of \Sigma. In summary, three tasks are accomplished at
each iteration of the refinement procedure:
1. test the accuracy of \Sigma against \tau: this requires evaluating E(\Sigma) and, in the curvilinear case, W(\Sigma), and comparing them with \epsilon and \delta, respectively. This can be done efficiently by using a bucketing structure similar to that proposed in [22] for dynamic triangulation in 2D, which maintains for each tetrahedron a list of the data points of V contained inside it;
2. select a new vertex v_max from the points of V by SELECT POINT: for the convex case, the point of V that maximises E() is selected; for the curvilinear case, the point of V that maximises either W() or E() is selected, depending on whether W(\Sigma)/E(\Sigma) is larger or smaller than \delta/\epsilon, respectively. This can be done efficiently by the joint use of the bucketing structure and of a priority queue, maintaining tetrahedra according to their error/warping;
3. update \Sigma by inserting v_max by ADD VERTEX: this is done by using an algorithm for on-line Delaunay triangulation that was proposed in [20]; in the curvilinear case, the update is always made on the tetrahedral mesh in computational domain, and \Sigma is obtained through lifting.
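The interplay between the priority queue, the bucketing structure, and the accuracy test can be sketched as follows (error-driven selection only; the warping-driven selection of the curvilinear case follows the same pattern). The mesh interface and the on-line Delaunay insertion are assumed rather than shown, so this is a structural sketch, not the actual implementation:

import heapq, itertools

def refinement(mesh, eps, tet_error, add_vertex):
    # mesh         : initial tetrahedrization (assumed interface: tetrahedra(), contains(t))
    # tet_error(t) : (error, worst contained data point) of tetrahedron t, from the bucketing structure
    # add_vertex   : on-line Delaunay insertion; returns the tetrahedra created by the update
    counter = itertools.count()                 # tie-breaker so heap entries never compare tetrahedra
    heap = []
    for t in mesh.tetrahedra():
        err, worst = tet_error(t)
        heapq.heappush(heap, (-err, next(counter), t, worst))
    while heap:
        neg_err, _, t, worst = heapq.heappop(heap)
        if not mesh.contains(t):                # stale entry: t was destroyed by an earlier insertion
            continue
        if -neg_err <= eps:                     # the largest remaining error already satisfies the threshold
            break
        created = add_vertex(mesh, worst)       # insert the worst datum of the worst tetrahedron
        for t_new in created:
            err, w = tet_error(t_new)           # re-bucket the points and push the new tetrahedra
            heapq.heappush(heap, (-err, next(counter), t_new, w))
    return mesh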
Further details on the implementation of the refinement procedure for convex datasets can be found
in [5]. Such a procedure can be adapted to the case of curvilinear datasets on the basis of the previous
discussion.
A further remark is necessary, though, for the case of curvilinear datasets. During the initial stages
of refinement, the mesh \Sigma might be geometrically inconsistent because of the warping caused by lifting.
Indeed, while mesh \Sigma c is a Delaunay tetrahedrization of the computational domain, hence consistent,
some tetrahedra might "flip over" during lifting, hence changing their orientation and causing geometric
inconsistencies in \Sigma. See Figure 4 for a two-dimensional example. Consistency can be tested by verifying
Fig. 4. Inconsistency in a curvilinear mesh (2D example): mesh \Sigma_c is geometrically consistent, while its lifted image \Sigma is not.
whether each tetrahedron maintains its orientation both in computational and in physical domain.
We assign infinite warping to each tetrahedron that has an inconsistent lifting. In this way, inconsistent
tetrahedra are refined first. We are warranted that the mesh in physical space will converge to a consistent
one in a finite number of steps, although, in the worst case, it might be necessary to insert all data points.
Indeed, let us consider the Delaunay mesh containing all data points in computational space: such a
mesh is obtained by splitting each hexahedron of the original mesh into five tetrahedra. We know from
the consistency of the original mesh that the lifting of each hexahedron in physical space is a convex
polyhedron, and that no two such polyhedra overlap in physical space. Convexity warrants that when
lifting the five tetrahedra covering a hexahedron we will obtain a consistent sub-mesh covering the lifted
hexahedron exactly. Non-overlapping of hexahedra warrants that sub-meshes corresponding to different
hexahedra will not overlap.
Experimental results show that in practice the mesh rapidly converges to a consistent one.
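The orientation test used for this consistency check reduces to the sign of a determinant (six times the signed volume of the tetrahedron); a lifted tetrahedron is flagged as inconsistent when its orientation differs from that of its pre-image in computational space. A small sketch:

import numpy as np

def signed_volume(tet):
    # Six times the signed volume of a tetrahedron given as four 3D vertices.
    v0, v1, v2, v3 = (np.asarray(v, dtype=float) for v in tet)
    return float(np.linalg.det(np.column_stack((v1 - v0, v2 - v0, v3 - v0))))

def lifting_is_consistent(tet_c, tet_phys):
    # True if the tetrahedron keeps its orientation when lifted, i.e., it does not "flip over".
    return signed_volume(tet_c) * signed_volume(tet_phys) > 0.0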
The time complexity of the refinement procedure is not crucial to our application, as long as it remains within reasonable bounds, because the algorithm is applied off-line to the volume dataset in order to build a multiresolution model (see Section IV). However, a time analysis for the case in which all n points of V must be inserted into \Sigma shows a bound of O(n^3) in the worst case [5], while experiments show a subquadratic behaviour in practice. On the other hand, the space occupancy of this algorithm is quite high, because of the need
Fig. 5. Two adjacent blocks \Sigma_1 and \Sigma_2, and the coincident triangulations T_1 and T_2 of their common face.
of maintaining both a bucketing structure and a priority queue (see the empirical evaluations in Section VI, Tables I and II).
A.1 Refinement of large datasets by block-decomposition
For datasets having a regular structure (either in physical or in computational domain) it is possible to bring the space complexity into more manageable bounds, by splitting the dataset into blocks, and running the algorithm separately on each block. Assume, for instance, that a regular dataset of size m \times n \times p is given: we can subdivide it, e.g., into k^3 blocks of size (m/k) \times (n/k) \times (p/k), and process them separately, with the same threshold \tau in all cases. Then, the resulting meshes are joined to form a mesh of the whole domain.
In order to warrant the correctness of such a procedure, we must be sure that the structure obtained by joining all results is indeed a tetrahedrization of the whole domain. This can be proved by showing that, given two blocks sharing a common face, the refinement algorithm will triangulate such a face in the same way while refining each block (see Figure 5). Let \Sigma_1 and \Sigma_2 be the meshes of the two blocks, and let T_1 and T_2 be the triangulations of the face r common to both blocks in \Sigma_1 and \Sigma_2, respectively. We may assume that, upon suitable initialization of the meshes, T_1 and T_2 are initially coincident. Let us consider a generic step of the algorithm that refines \Sigma_1: if the vertex inserted does not lie on r, the update will change neither T_1 nor the error and warping of the data points lying on r; on the contrary, if the vertex inserted lies on r, it must be in particular the point maximising error/warping among all data points lying on r. This means that the sequence of vertices refining T_1 is independent of the refinement that occurs in the rest of \Sigma_1. Since the same situation occurs for the refinement of \Sigma_2, we can conclude that the same sequence of vertices will be selected for T_2, hence the two triangulations for a given accuracy will be coincident. Note that, however, the result will not be the same that we would obtain by running the refinement algorithm on the whole dataset, since the resulting tetrahedrization might not be globally Delaunay: the Delaunay property is verified only locally to each block.
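A sketch of the block-decomposition strategy for a regular dataset stored as an m x n x p array: adjacent blocks share their boundary plane of samples, so that a face common to two blocks is triangulated from the same data, and each block is refined independently with the same threshold. The per-block refinement routine stands for the REFINEMENT procedure above and is an assumed interface:

import numpy as np

def split_into_blocks(field, k):
    # Split a regular m x n x p scalar array into k^3 blocks that share their boundary samples.
    m, n, p = field.shape
    xs = np.linspace(0, m - 1, k + 1, dtype=int)
    ys = np.linspace(0, n - 1, k + 1, dtype=int)
    zs = np.linspace(0, p - 1, k + 1, dtype=int)
    for i in range(k):
        for j in range(k):
            for l in range(k):
                origin = (xs[i], ys[j], zs[l])
                block = field[xs[i]:xs[i + 1] + 1, ys[j]:ys[j + 1] + 1, zs[l]:zs[l + 1] + 1]
                yield origin, block

def refine_by_blocks(field, k, tau, refine_block):
    # Run the refinement independently on each block with the same threshold tau,
    # then join the per-block meshes into a single list of tetrahedra.
    mesh = []
    for origin, block in split_into_blocks(field, k):
        mesh.extend(refine_block(block, origin, tau))
    return mesh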
B. A method based on decimation
The refinement method described above is hardly adaptable to the case of non-convex irregular datasets.
Major difficulties arise in finding an initial coarse mesh to approximate the domain \Omega, and in the estimation of warping. Moreover, the Delaunay triangulation is not applicable to non-convex polyhedra, since it is
undefined in the constrained case.
Experiences in the approximation of non-convex objects through 2D triangular meshes suggest that a decimation technique might be better suited to the case of non-convex irregular datasets (see, e.g., [37], [19], [4]). In the following, we describe an algorithm that extends such heuristics to volume data: starting from the reference mesh \Gamma, vertices are iteratively discarded as long as possible. Given a threshold \tau for the accuracy, the following decimation procedure is applied:
procedure DECIMATION(V, \Gamma, \tau);
\Sigma := \Gamma;
while \Sigma satisfies \tau do
v_min := SELECT MIN VERTEX(V, \Sigma, \tau);
REMOVE VERTEX(\Sigma, v_min);
return (\Sigma)
The test of accuracy is simpler in this case than in the refinement procedure. Indeed, at each iteration,
accuracy may worsen only because of local changes. Therefore, it is sufficient to maintain a variable
storing the current accuracy, which is updated after each iteration by testing whether the accuracy in the
changed portion of the mesh has become worse than the current one.
On the contrary, procedures SELECT MIN VERTEX and REMOVE VERTEX are somewhat more delicate than their respective counterparts SELECT MAX POINT and ADD VERTEX.
Selecting a vertex to be removed involves an estimation of how much error and warping of the mesh
may increase because of removal: the criterion adopted is that the vertex causing the smallest increase in
error/warping should be selected at each iteration. An exact estimation of the change in error and warping
can be obtained by simulating deletion of all vertices in the current mesh. This would be computationally
expensive, since each vertex has 24 incident tetrahedra on average, and it may involve relocating many
points lying inside such tetrahedra. We rather use heuristics to estimate apriori how much a vertex removal
affects error and warping. Such an estimation is computed at all vertices before decimation starts, and it
is updated at a vertex each time some of its incident tetrahedra change.
In order to estimate the error increase, we pre-compute the field gradient \nabla_v at each vertex v of the reference model: this can be done by calculating the weighted average of the gradients in all tetrahedra incident at v, where the weight for the contribution of a tetrahedron \sigma is given by the solid angle of \sigma at v. Then, for each vertex v in the mesh, we search for the vertex w, among those adjacent to v, such that the difference \Delta\nabla_{v,w} between \nabla_v and \nabla_w is minimum. Value \Delta\nabla_{v,w} gives a rough estimate of how far from linear the field is
Fig. 6. An a priori estimate of the warping increase caused by removing a boundary vertex v is obtained by measuring the distance of v from an average plane fitting its adjacent vertices on the boundary of \Sigma.
in the neighbourhood of v: the smaller \Delta\nabla_{v,w}, the smaller the expected error increase if v is removed. Value \Delta\nabla_{v,w} and a pointer to w are stored together with v.
Warping changes only if a vertex lying on the boundary of \Sigma is removed. Therefore, for each such vertex v, we estimate a priori the warping increase caused by removing v, on the basis of the local geometry of the boundary of \Sigma in the neighbourhood of v. We adopt a criterion based on the distance d_v between v and a plane that best fits all vertices lying around v on the boundary of \Sigma (see Figure 6): the smaller d_v, the smaller the expected warping increase if v is removed. Therefore, d_v is stored together with v.
Vertices of \Sigma are maintained in a priority queue that supports efficient selection. In this framework, the selection criterion adopted in procedure SELECT MIN VERTEX is symmetrical to the one used in the refinement algorithm: the vertex of \Sigma that is expected to produce the smallest increase in either warping or error is selected, depending on whether W(\Sigma)/E(\Sigma) is larger or smaller than \delta/\epsilon.
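The two a priori estimates used for vertex selection can be sketched as follows: the error term is the smallest gradient difference towards an adjacent vertex, and the warping term (for boundary vertices) is the distance of v from a least-squares plane fitted to its adjacent boundary vertices. Vertex gradients are assumed to be pre-computed as described above, and at least three non-collinear boundary neighbours are assumed for the plane fit; names are illustrative:

import numpy as np

def error_estimate(grad_v, grads_adjacent):
    # Smallest gradient difference between v and one of its adjacent vertices;
    # returns (minimum difference, index of the best collapse target w).
    diffs = [np.linalg.norm(np.asarray(grad_v) - np.asarray(g)) for g in grads_adjacent]
    i = int(np.argmin(diffs))
    return diffs[i], i

def warping_estimate(v, boundary_neighbors):
    # Distance d_v of the boundary vertex v from the least-squares plane fitting its neighbours.
    pts = np.asarray(boundary_neighbors, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)       # smallest singular direction = plane normal
    normal = vt[-1]
    return float(abs(np.dot(np.asarray(v, dtype=float) - centroid, normal)))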
Once a vertex v has been selected, we need to tetrahedrize the polyhedron resulting from the elimination
of all the tetrahedra incident on v. Therefore, removing it from the mesh is not necessarily possible: this
difficulty is related to the fact that it may not be possible to tetrahedrize a non-convex polyhedron. Since
deciding whether this is possible or not is NP-complete, we use heuristics to try to remove a vertex by
collapsing one of its incident edges to its other endpoint. In particular, given a vertex v, we try to remove
it by collapsing the edge e that joins v to the vertex w having the smallest difference \Delta\nabla_{v,w} from v in its
surface normal: recall that w had been selected while estimating the cost of removing v in terms of error.
Edge collapse is a simple operation: all tetrahedra incident at e are deleted, while all other tetrahedra that have a vertex at v are modified by moving such a vertex to w. All adjacencies are updated accordingly: if two tetrahedra \sigma_1 and \sigma_2 were both adjacent to a tetrahedron \sigma_0 that is deleted, then \sigma_1 and \sigma_2 become mutually adjacent (see Figure 7a for an example in 2D). Geometrical consistency of the mesh may be
violated if some tetrahedron "flips over", i.e., it changes its orientation, because of edge collapsing (see
Figure 7b for an example in 2D). Consistency can be tested simply by checking the orientation of each
Fig. 7. Edge collapse in 2D: (a) a valid collapse; (b) an inconsistent collapse.
Fig. 8. Points that fall outside the mesh are assigned to tetrahedra by projecting them on the boundary faces.
tetrahedron incident at v before and after the collapse. If the collapse turns out to be impossible, then no mesh update occurs, while v is temporarily tagged as non-removable, by setting its error and warping estimates to infinity.
In this way, a different vertex will be selected at the next cycle.
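A sketch of the edge collapse v -> w together with the orientation-based consistency test; vertex indices refer to a shared coordinate array, and the data layout is illustrative rather than the one used in the system:

import numpy as np

def signed_volume(tet):
    v0, v1, v2, v3 = (np.asarray(v, dtype=float) for v in tet)
    return float(np.linalg.det(np.column_stack((v1 - v0, v2 - v0, v3 - v0))))

def collapse_edge(vertices, tetrahedra, v, w):
    # Try to collapse vertex index v onto vertex index w.
    # `vertices` maps indices to 3D points; `tetrahedra` is a list of 4-tuples of indices.
    # Returns the updated tetrahedra list, or None if some tetrahedron would flip over.
    survivors, remapped = [], []
    for tet in tetrahedra:
        if v in tet and w in tet:
            continue                              # tetrahedra incident at edge (v, w) disappear
        if v in tet:
            new_tet = tuple(w if i == v else i for i in tet)
            before = signed_volume([vertices[i] for i in tet])
            after = signed_volume([vertices[i] for i in new_tet])
            if before * after <= 0.0:             # orientation change: the collapse is inconsistent
                return None
            remapped.append(new_tet)
        else:
            survivors.append(tet)
    return survivors + remapped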
After a successful edge collapse, a precise evaluation of the current accuracy must be obtained. As
in the refinement method, we adopt a bucketing structure to maintain the relation between tetrahedra
and data points they contain. Updating this structure involves only the portion of mesh covered by the
"old" tetrahedra that were adjacent to v. All removed points (including v) that belong to such a volume
are relocated with respect to the "new" tetrahedra. Note that, in case v was a boundary vertex, some
points may fall outside the mesh: such points (including v) are assigned to tetrahedra by considering their
projections on the "new" boundary faces of the mesh (see Figure 8). Changes in accuracy are computed
for each point on the basis of its new location. Finally, the a priori estimate of error and warping increase
is recomputed at each vertex that was adjacent to v, and the priority queue is updated accordingly.
IV. A multiresolution model
Each one of the algorithms described in the previous section can be regarded as producing a "historical"
sequence of tetrahedra, namely all tetrahedra that appear in the current mesh \Sigma during its construction.
Based on such an observation, we extend here to the three-dimensional case a simple idea to manage multiresolution, which we proposed in [9] in the two-dimensional case, for the multiresolution representation of terrains.
Each tetrahedron of the sequence is marked with two accuracies, called its birth and death, corresponding to the worst and best accuracy of a mesh containing it, respectively. Therefore, we have death(\sigma) \leq birth(\sigma) for every tetrahedron \sigma of the sequence. Referring to a historical sequence generated by the refinement algorithm, birth and death are the accuracy of the current mesh when the tetrahedron was inserted into it, and when it was discarded from it, respectively. The two values are swapped in case the historical sequence is built by decimation.
A. Querying the model
Given a query accuracy \tau, we have that a mesh at accuracy \tau will be formed by all tetrahedra that are \tau-alive, i.e., such that death(\sigma) \leq \tau < birth(\sigma). Based on this fact, we use birth and death as filters to retrieve from the historical sequence the tetrahedra that either form a given mesh, or cover a given range of accuracies. Such a filter can also be combined with a spatial filter to perform windowing operations, i.e., to retrieve only tetrahedra that belong to a given query region.
Since a multiresolution model contains a huge number of tetrahedra, we have adopted a minimalist data
structure, which is suitable to maintain the multiresolution model on a sequential file.
For each site in the dataset, we store its coordinates and field value, while for each tetrahedron in the
historical sequence, we store its vertex indexes and the birth and death accuracies. Therefore, space
occupancy only depends on the number of sites, and on the number of tetrahedra in the historical sequence.
Sites and tetrahedra are stored in two different files. Both sites and tetrahedra are sorted in the order
they appear in the mesh during construction through refinement (in the inverse order, if the model is built
through decimation). Therefore, tetrahedra turn out to be sorted in non-increasing order of birth.
In this case, the sequence of tetrahedra belonging to a model at a given resolution \tau is obtained by sequentially scanning the file, while selecting tetrahedra according to their birth and death: only tetrahedra that are \tau-alive are accepted, and the search stops as soon as a tetrahedron having a birth accuracy better than \tau is found. Tetrahedra covering a given range of accuracies are obtained similarly. Vertices of such tetrahedra are obtained by scanning the sequence of sites up to the highest element indexed by a tetrahedron in the set extracted.
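This sequential extraction can be sketched as a single scan of the tetrahedra file; the record layout below is illustrative (it is not the actual file format), and a smaller accuracy value means a better accuracy:

def extract_mesh(records, tau):
    # records: iterable of (vertex_indices, birth, death), sorted by non-increasing birth.
    # Returns the tetrahedra that are tau-alive, i.e., death <= tau < birth.
    alive = []
    for vertex_indices, birth, death in records:
        if birth <= tau:              # every further record has an even better (smaller) birth: stop
            break
        if death <= tau:
            alive.append(vertex_indices)
    return alive

def highest_site_index(alive):
    # How far the site file must be scanned to load the vertices of the extracted tetrahedra.
    return max((max(t) for t in alive), default=-1)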
Note that performing a combined windowing operation would require a subsequent filter to scan all
tetrahedra after their extraction.
Search efficiency might be improved by adopting data structures for range queries, such as the interval
tree [31], or the sequence of lists of simplices [3]. However, such data structures might introduce a significant
memory overhead. In particular, adopting the sequence of lists of simplices would make sense only if
the list of all accuracies spanned by the multiresolution model (which might be as large as the number
of tetrahedra forming it) can be maintained in the main memory. The interval tree gives optimal time
performance, but its application would be effective only if the whole model can be maintained in the main
memory.
On a different perspective, spatial indexes [36] might be adopted to improve the performance of windowing
operations, but also such structures involve some memory overhead.
B. Transmitting the model through the network
If a multiresolution model must be transferred from a server to a client over the network, it is important
to compress information further.
Conciseness can be achieved by avoiding the explicit transmission of tetrahedra forming the historical
sequence, while providing an implicit encoding that allows the client to make the structure explicit efficiently.
If the model is built through procedure REFINEMENT, by exploiting the properties of Delaunay
tetrahedrizations, we can transmit only the vertices of the final mesh \Sigma in the order they were inserted
during refinement (i.e., in the order we store them on file). For each vertex, we send to the client its
coordinates, its field value, and the accuracy of the mesh just after its insertion. This allows the client to
reconstruct the whole historical sequence in the right order, by applying a procedure for on-line Delaunay
tetrahedrization [20] while vertices are received. Note that this is a much cheaper task than rebuilding the
model from the initial dataset, since the selection of vertices now comes free from the sequence. Moreover,
the on-line construction performed by the client directly results in a progressive representation (and,
possibly, rendering) of the mesh at the highest resolution.
If the model is built through procedure DECIMATION, a similar technique may be adopted, following
Hoppe [19]. In this case, the coarsest mesh is transmitted explicitly, while the remaining vertices are
listed in inverse order of decimation (i.e., in the order we store them on file). For each vertex, we send to
the client its coordinates, its field value, the accuracy of the mesh just before its deletion, and the vertex
it was collapsed on. This last information makes it possible to perform a vertex-split operation that inverts the
edge-collapse performed by the decimation algorithm [19].
[* ENRICO: it seems to me that this is not enough - the faces that get duplicated would also be needed; a bit tedious to explain: what shall we do? We could drop the details and simply refer to Hoppe's method and to the possibility of extending it to 3D *]
Therefore, the client can generate the whole historical sequence in the right order, by using a sequence
of vertex splits. Similarly to the previous case, mesh reconstruction is performed by the client efficiently,
and progressive transmission and rendering are supported. Note that, in this case, operations performed
by the client at each vertex split are much simpler than those required by a Delaunay procedure, while,
on the other hand, the amount of information transmitted is larger.
The size of data transmitted can be reduced further by using geometric compression [12].
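For the refinement case, the client-side reconstruction can be sketched with an incremental Delaunay code; SciPy's incremental mode is used here merely as a stand-in for the on-line algorithm of [20] (it is not the code used by the system), and the birth/death bookkeeping is omitted:

import numpy as np
from scipy.spatial import Delaunay

def progressive_receive(stream, initial_points):
    # stream yields (x, y, z, field_value, accuracy) records in insertion order;
    # initial_points are the vertices of the initial (coarsest) mesh, transmitted first.
    dt = Delaunay(np.asarray(initial_points, dtype=float), incremental=True)
    for x, y, z, value, accuracy in stream:
        dt.add_points(np.array([[x, y, z]]))
        yield accuracy, dt.simplices     # snapshot usable for progressive rendering
    dt.close()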
Fig. 9. The architecture of the TAn system. [Block labels: Raw Data; construction: Refinement algorithm (convex or curvilinear data), Decimation algorithm (irregular data); Multires. model; Multiresolution extractor; Transfer function; Isosurface extraction; Rendering Manager; DVR; Hybrid rendering; Isosurf. rendering; Modeling; Visualization; GUI.]
V. The TAn system
On the basis of the multiresolution model and algorithms described in the previous sections, we have
designed a volume visualization system, called TAn (Tetrahedra Analyzer), which is able to manage
multiresolution based on approximated tetrahedral representations of volume data.
A. System architecture
The architecture of TAn is depicted in Figure 9. The system is essentially composed of two modules,
the modeling module and the visualization module, which communicate with each other through the
multiresolution data structure, while each of them can communicate with the user through a Graphical
User Interface.
- The modeling module contains the algorithms for building a multiresolution model, starting from a volume dataset: either the refinement or the decimation algorithm is used to build the model, depending on the type of the dataset in input. The user selects an input dataset and construction parameters through the GUI; then, the system reads the corresponding data file, and it runs a construction algorithm. The resulting multiresolution model is stored by using the data structure described in Section IV.
The modeling module is essentially intended to run off-line, during a phase in which the multiresolution
model is prepared, and stored on the file system for subsequent visualization.
- Once a multiresolution model has been built, the visualization module can access it through a
submodule called the multiresolution extractor, which contains query processing routines that access
the multiresolution data structure, as explained in Section IV-A.
- Tetrahedra extracted from the multiresolution model are piped to two independent submodules:
one that manages a transfer function, and one that performs isosurface extraction.
A transfer function is applied to the range covered by the extracted data, in order to provide color
and opacity for each vertex used in direct volume rendering. The user can load, edit, and store
transfer functions through the GUI.
Isosurfaces are obtained through a method called Marching Tetrahedra (MT), which is a straightforward adaptation of the Marching Cubes (MC) algorithm [25] to tetrahedral meshes: each tetrahedron is classified in terms of the values at its four vertices, and triangular patches are obtained by using linear interpolation along each edge intersected by the isosurface (a sketch of this step is given after this list). Isosurface patches are extracted from all tetrahedra loaded in memory, and for one or more isovalues provided by the user
through the GUI. The user can also define color and opacity for each isovalue independently.
This stage essentially provides geometries, namely a set of tetrahedra prepared for direct volume
rendering, and a set of isosurface patches prepared for surface rendering, respectively.
- Geometries are piped to the Rendering Manager submodule that controls visualization on the basis of the data currently loaded in memory and of parameters provided by the user. This submodule is essentially aimed at filtering the geometries (triangles and/or tetrahedra) that should be visualized,
at each time, and in each location of space. In this way, we are able to implement mechanisms such
as progressive rendering - where a low-level mesh can be used during interactive phases, while a high
level mesh is used when the user can wait longer for visualization; and multiresolution rendering
- where different LoDs are used in different portions of space, e.g. to either enhance quality or
magnify a selected portion of the dataset [8]. Filtering is again performed on the basis of the birth,
death, and location of each tetrahedron or triangle.
In order to improve performance, the user is allowed to ask for further interactive extraction of
isosurfaces only from tetrahedra of interest. In this case, the Rendering Manager module pipes
back to the isosurface extractor only a pointer to the current tetrahedra list, and collects more
isosurface patches.
- The geometries selected for visualization can be piped to one among three different modules, depending
on the rendering modality selected by the user.
If only isosurface rendering is enabled, then a dedicated module that visualizes them through standard
surface graphics is invoked, which is passed the set of isosurface patches of interest. Note that,
if translucent surfaces are used, it is necessary to sort isosurface patches in depth order prior to
visualization.
If only direct volume rendering is enabled, then the selected set of tetrahedra is passed to a Projected Tetrahedra (PT) algorithm [38], whose main phases are a depth sort of the tetrahedral mesh
and then, for each cell in depth order, a split-and-compositing action that produces translucent
triangles, visualized through standard surface graphics.
If both isosurfaces and direct volume rendering are used, then both tetrahedra and isosurface triangles
are passed to a module that manages hybrid rendering. In this case, blending conflicts among
tetrahedra and isosurface patches must be resolved. In order to do this, each tetrahedron that contains a surface patch is split into two parts, each of which is further tetrahedrized. The resulting
set of tetrahedra and isosurface patches are then sorted in depth order, the PT algorithm is applied
to tetrahedra, and the results are visualized in depth order through standard surface graphics.
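A compact sketch of the Marching Tetrahedra step mentioned above: each tetrahedron is classified by comparing its four vertex values with the isovalue, and one or two triangles are produced by linear interpolation along the intersected edges. This is a generic MT formulation (triangle orientation is not made consistent here), not the exact code of the system:

import numpy as np

def marching_tetrahedron(verts, vals, iso):
    # verts: four 3D vertex coordinates, vals: four field values.
    # Returns a list of triangles (3x3 arrays) approximating the isosurface inside the cell.
    verts = np.asarray(verts, dtype=float)
    vals = np.asarray(vals, dtype=float)
    inside = [i for i in range(4) if vals[i] < iso]
    outside = [i for i in range(4) if vals[i] >= iso]

    def cut(i, j):                       # isosurface intersection along edge (i, j)
        t = (iso - vals[i]) / (vals[j] - vals[i])
        return verts[i] + t * (verts[j] - verts[i])

    if len(inside) in (0, 4):
        return []                        # the cell is not intersected
    if len(inside) == 1 or len(outside) == 1:
        a = inside[0] if len(inside) == 1 else outside[0]
        others = [i for i in range(4) if i != a]
        return [np.array([cut(a, others[0]), cut(a, others[1]), cut(a, others[2])])]
    # two vertices on each side: the section is a quadrilateral, split into two triangles
    p = [cut(inside[0], outside[0]), cut(inside[0], outside[1]),
         cut(inside[1], outside[1]), cut(inside[1], outside[0])]
    return [np.array([p[0], p[1], p[2]]), np.array([p[0], p[2], p[3]])]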
It is easy to change this architecture into a network architecture based on a client/server model, by
using the data transmission method described in Section IV-B. In this case, the server would contain
the modeling module, plus a query processing module that provides, upon request from the client, a
compressed data structure of the extracted mesh or set of meshes.
The client would incorporate the visualization module, where the multiresolution extractor would be
simply a module that schedules requests to the server, collects answers, and decompresses the data structure.
B. Prototype implementation
The architecture described in the previous section has been partially implemented. A first version of
the TAn system has been released in the public domain in the first quarter of 1996, and it is available
(SGI executables only) at our Internet site http://miles.cnuce.cnr.it/cg/swOnTheWeb.html. The
system works on SGI workstations and uses OpenGL to manage graphics data output. Its GUI has been
implemented by XForms [46], a portable and easy-to-use user interface toolkit available in the public
domain (see http://bragg.phys.uwm.edu/xforms).
We implemented the refinement construction algorithm both for convex and non-convex curvilinear
data, but only the convex version is included in the first release of the system (experiments on curvilinear
data shown in the next section were obtained with a stand-alone version of the algorithm). The decimation
construction algorithm for irregular datasets is currently under implementation.
The multiresolution extractor provides a function for extracting a mesh at any LoD provided by the user.
Two meshes can be loaded into main memory, one at a high LoD, and the other at a low LoD, and used
for interactive rendering.
A dedicated figure shows a snapshot of the two GUI windows that allow the user to build a multiresolution model,
and to extract LoD representations from it. The system provides statistics on the size of meshes at different
LoDs: the user can therefore choose the approximated models by taking into account the performance of the workstation used, the frame rate required, and the image quality degradation that can be accepted.
The following visualization features were implemented:
- loading and interactive editing of the transfer function;
- multiple isosurface extraction through the MT method;
- isosurface rendering with user-defined color and opacity;
- direct volume rendering through the PT method;
- approximated hybrid rendering;
- interactive modification of view parameters;
- a progressive rendering modality.
A snapshot of the graphic output window, and of the GUI windows related to rendering, is presented in Figure 13. The window in the upper left corner is the main menu of the system; the window in the upper
right corner allows the user to extract an isosurface and to assign it a given color and opacity; the other
two windows on the right side are related to visualization and editing of the transfer function; the window
in the lower left corner allows the user to interactively adjust view parameters; the window in the middle
is used to select the rendering modality (isosurfaces, or DVR, or both).
The approximated hybrid rendering is implemented as follows. For each tetrahedron, the system explicitly
stores its related isosurface facets. At rendering time all cells are depth-sorted and, for each cell, both
the volume contribution (obtained with the PT algorithm), and the isosurface facets possibly contained
into it are projected. Since tetrahedra are not split prior to depth-sorting, the result is only approximate, because the different portions of a single tetrahedron cannot be sorted correctly with respect to its isosurface patches. The degradation in image quality may be noticeable when low-resolution approximations are used, but it is greatly reduced as the resolution increases (i.e., the smaller the single cell, the smaller the visual error introduced by the approximated hybrid rendering). An example of approximated
hybrid rendering is shown in Figure 15. The exact method for hybrid rendering, described in the previous
section, is currently under implementation.
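As a concrete illustration of the per-cell loop just described, the following is a minimal sketch and not the TAn source: the Tet and IsoFacet types, the eye point, and the two rendering helpers are hypothetical placeholders standing in for the system's PT projection and surface drawing routines.

#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };
struct IsoFacet { Vec3 v[3]; };                               // one isosurface triangle
struct Tet { Vec3 centroid; std::vector<IsoFacet> facets; };  // cell with its stored facets

// Placeholders for the system's PT projection and surface rendering calls.
void projectVolumeContribution(const Tet&) { /* PT-style cell projection */ }
void drawFacet(const IsoFacet&) { /* submit the triangle to the graphics library */ }

// Approximated hybrid rendering: cells are depth-sorted as a whole, so the
// portions of a cell lying in front of / behind its own isosurface facets are
// not separated; the blending error shrinks as the cells get smaller.
void approximatedHybridRender(std::vector<Tet>& cells, const Vec3& eye) {
    auto dist2 = [&eye](const Tet& t) {
        const float dx = t.centroid.x - eye.x;
        const float dy = t.centroid.y - eye.y;
        const float dz = t.centroid.z - eye.z;
        return dx * dx + dy * dy + dz * dz;
    };
    std::sort(cells.begin(), cells.end(),
              [&](const Tet& a, const Tet& b) { return dist2(a) > dist2(b); });  // back-to-front
    for (const Tet& cell : cells) {
        projectVolumeContribution(cell);          // DVR contribution of the cell
        for (const IsoFacet& f : cell.facets)     // isosurface patches stored with the cell
            drawFacet(f);
    }
}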
The progressive rendering modality can be selected by the user to improve interactivity. The mesh at
low LoD is visualized during the highly interactive phases (e.g., while the user interactively modifies the
current view), while the mesh at high LoD is automatically visualized when interaction does not occur for
a given time period (i.e., during non-interactive phases). While in the current implementation the low LoD is set by the user, in a more sophisticated version it could be selected automatically by the system, depending on the graphics performance of the current platform, in order to ensure a real-time frame rate.
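The progressive rendering policy can be captured by a small idle-timer helper; this is only a sketch of the behavior described above, with hypothetical names, not the actual TAn implementation.

#include <chrono>

// Render the low-LoD mesh while the user is interacting; switch to the
// high-LoD mesh once no interaction has occurred for `idleThreshold`.
class ProgressiveRenderer {
public:
    explicit ProgressiveRenderer(std::chrono::milliseconds idleThreshold)
        : idleThreshold_(idleThreshold), lastEvent_(clock::now()) {}

    void onUserInteraction() { lastEvent_ = clock::now(); }

    // Returns true if the high-LoD mesh should be drawn this frame.
    bool useHighLoD() const {
        return clock::now() - lastEvent_ >= idleThreshold_;
    }

private:
    using clock = std::chrono::steady_clock;
    std::chrono::milliseconds idleThreshold_;
    clock::time_point lastEvent_;
};

Each frame, the application would render the low-LoD mesh while useHighLoD() returns false, and switch to the high-LoD mesh once the idle threshold has elapsed.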
VI. Experimental Results
The performance of the system was evaluated on four datasets, representative of the two classes of regular and non-convex curvilinear datasets. The datasets were chosen because they are commonly used in the volume rendering field, in order to facilitate comparisons with other proposals:
- BluntFin, a 40x32x32 curvilinear dataset, built by running a fluid-flow simulation of an air flow over a blunt fin and a plate 1;
- Post, a 38x76x38 curvilinear dataset which represents the result of a numerical study of a 3D incompressible flow around multiple posts;
- SOD, a 32x32x32 subset (not a subsampling) of a regular rectilinear dataset which represents the electron density map of an enzyme 2;
- BuckyBall, a 128x128x128 regular rectilinear dataset which represents the electron density around a molecule of C60. Some experiments are presented on either a 32x32x32 or a 64x64x64 subsampling of this dataset 3.
Multiresolution models of such datasets were built through the refinement construction algorithm, and
the various visualization features of TAn were experimented on such models.
A. Multiresolution modeling features evaluation
Tables I and II report results on the construction of a multiresolution model from curvilinear and regular
datasets, respectively. Each table reports: the complexity of the multiresolution model (total number of
sites and cells, maximal RAM space occupancy during construction); computation times required to build
the model; and some information on a number of approximated meshes extracted from it. The accuracy
of each approximation is measured as follows: warping is a percentage of the length of the diagonal of a
minimum bounding box containing the dataset, while error is a percentage of the range spanned by data
values. Times are CPU seconds of an SGI Indigo workstation (MIPS R4000 100MHz).
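For concreteness, the two accuracy measures can be normalized as in the following sketch, assuming the absolute warping and field errors have already been computed elsewhere; the function names are illustrative.

#include <cmath>

// Warping is expressed as a percentage of the diagonal of the dataset's
// minimum bounding box; the field error as a percentage of the data range.
double warpingPercent(double absWarping,
                      double dx, double dy, double dz) {   // bounding-box extents
    const double diagonal = std::sqrt(dx * dx + dy * dy + dz * dz);
    return 100.0 * absWarping / diagonal;
}

double errorPercent(double absError, double minValue, double maxValue) {
    return 100.0 * absError / (maxValue - minValue);
}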
The graph of Figure 10 shows the number of vertices of the mesh through refinement, depicted as a function of the approximation error. Note how rapidly the size of the mesh decreases with the increase
of error. These results give a quantitative estimate of the advantage of founding approximate volume
visualization on data simplification techniques.
Figure 11 shows the spatial distribution of sites of the BluntFin dataset, compared with the spatial distribution of vertices of an approximated model at accuracy (2%, 2%).
As you may notice, the experiments reported in Table II for the BuckyBall dataset were run on a subsampling, because of limitations in the available RAM. A multiresolution model on the whole dataset,
and on two subsampled datasets, were also obtained by using the block-decomposition refinement described
in Section III-A.1. Results are presented in Table III. By adopting this method we can overcome the
intrinsic limitations of RAM of a specific platform, because for any dataset we can always have a partition
such that the refinement of each block becomes a tractable problem with the available resources.
In particular, we can compare the results obtained for the subsampled dataset refined as a whole
1 Both BluntFin and Post are produced and distributed by NASA-Ames Research Center.
2 SOD is produced by D. McRee, Scripps Clinic, La Jolla (CA), and kindly distributed by the University of North Carolina at Chapel Hill.
3 BuckyBall is available courtesy of AVS International Center.
Table I: Measures on multiresolution models built from curvilinear datasets (the Post triangulation times are higher than expected due to page swapping: the RAM size of the workstation used was only 64MB).
BluntFin (40x32x32), 40,960 sites; multiresolution model: 590,831 tetrahedra in total.
Post (38x76x38), 109,744 sites; multiresolution model: 1,620,935 tetrahedra in total.
(lower part of Table II) and refined as 64 independent blocks (upper part of Table III). Note that, with
the block decomposition refinement, total computation time reduces from 1,318 sec. to 532 sec., while
we have only a small increase in the number of vertices necessary to achieve a given accuracy. Such an
increase is due to the spatial constraints introduced by the block boundaries.
Note also how the performance of data simplification, in terms of the data needed to achieve a given accuracy, improves with the resolution of the input dataset. If we consider, for example, the LoD meshes at accuracy 1.0% from the multiresolution models of BuckyBall (at resolutions 32^3, 64^3, and 128^3), the percentage of sites needed to build each approximated mesh decreases respectively from 45.2% to 22.1% down to 6.8% of the total number of sites of the dataset. In absolute values, the ratio between the 128^3 and the 32^3 datasets is 64:1 at full resolution, while it reduces to 10:1 at accuracy 1.0%.
Table II: Measures on multiresolution models built on two regular datasets.
SOD (32x32x32): multiresolution model with 177,588 tetrahedra in total.
BuckyBall (32x32x32), 32,768 sites.
Fig. 10. Number of points in the simplicial model (as a percentage) expressed as a function of the approximation error, with curves for the BuckyBall, BluntFin, and Post datasets.
Table III: Tetrahedrization of the BuckyBall dataset using the block-decomposition refinement: the 128^3 dataset is the original one, while the 64^3 and 32^3 datasets are obtained by subsampling. Decompositions: the 32^3 dataset into 64 blocks of size 8^3, the 64^3 dataset into 64 blocks of size 16^3.
BuckyBall (32x32x32), 32,768 sites; multiresolution model: 467,261 tetrahedra in total.
BuckyBall (64x64x64), 262,144 sites; multiresolution model: 3,927,793 tetrahedra in total.
BuckyBall (128x128x128), 2,097,152 sites.
Fig. 11. Distribution of vertices of the BluntFin dataset: original dataset (40,960 sites) on the left, approximated mesh at accuracy (2%, 2%) on the right.
Table IV: Isosurface rendering (with threshold value 1.244) and direct volume rendering of the same dataset at different accuracies. Times are in seconds on an SGI Indigo XS24 R4000 ws.
Accuracy (0.0%, 0.0%): 40,960 vertices, 222,528 tetrahedra, 19,499 isosurface triangles, DVR time 44.1.
B. Rendering features evaluation
Figure 14 presents visual results related to isosurface and direct volume rendering of three representations of the BluntFin dataset. The top images refer to the mesh at full resolution, the middle images refer to an approximated mesh at accuracy (1.0%, 1.0%), while the bottom images refer to an approximated mesh at accuracy (4.0%, 4.0%).
Numerical results on the size of the meshes and of the extracted isosurfaces, as well as times for DVR, are summarized in Table IV. The images provide evidence that the image degradation is almost imperceptible when passing from full accuracy to (1.0%, 1.0%) accuracy, and it is still small at (4.0%, 4.0%), while the output sizes (and times) are greatly reduced.
Visualization results obtained with TAn, which are essentially based on the concept of data simplification, can also be compared with results obtained with approximation methods that are based on graphics output simplification.
In the case of isosurface rendering, the size and number of the facets extracted from a simplified mesh depend essentially on the variation of the field function (namely, few large facets are fitted on subvolumes where the gradient is constant or nearly constant). On the contrary, a geometry-based simplification of an isosurface extracted from the mesh at full resolution would be driven by isosurface curvature ([37], [19]). An obvious computational advantage of the approach based on data simplification is that the major effort is spent in a preprocessing stage (i.e., when either the simplified or the multiresolution model is built), while standard simplification approaches are implemented as a post-processing phase, therefore reducing throughput in interactive applications.
Moreover, standard geometry-based methods may produce anomalies if the surface has curvature variations
which are small in size, but reflect significant variations of the field (e.g., a sinusoidal function, having
amplitude lower than the simplification threshold), and, worse than this, intersections between surfaces at
different isovalues may occur because of simplification. These problems do not arise with methods based
on data simplification.
In a previous paper [7], we also compared the performance of DVR through the standard PT algorithm applied to a simplified mesh with the performance of approximated versions of the PT algorithm [43] applied to a mesh at full resolution. The experiments showed that highly simplified datasets produce images with visual degradation similar to that obtained using the approximated PT, while requiring much shorter processing times (about five times shorter).
The large difference in speedups is because standard approximated PT techniques only act on the pure
rendering phase, thus achieving a reduction in overall time up to a maximum of 50%. On the contrary,
the speedup in overall time achieved by using a data simplification approach is linearly proportional to
the simplification operated on data (this means that not only pure rendering is affected, but depth sorting
and cell classification and splitting as well).
VII. Conclusions
TAn is currently the only volume visualization system distributed in the public domain that offers
multiresolution features, at least to our knowledge. Our experience with it provides evidence that the
visualization of volume data can be managed effectively and efficiently by using multiresolution features
based on the concept of data simplification.
The experimental results show that managing multiresolution involves a limited increase in space complexity: the size of the multiresolution model is on average about 2.5 times the size of the mesh at maximal accuracy.
Moreover, the proposed representation supports the design of fast approximated, progressive or multiresolution
visualization algorithms, which are aimed at providing significant speedups in rendering, and at
increasing the acceptance of visualization as a useful working tool.
Critical points for the usability of our approach are in the high requirements in memory and processing
time needed to build the multiresolution model. With the current implementation, the tetrahedrization
of high-resolution datasets (e.g., with more than 100K sites) may require a memory size beyond that available on current low-end workstations. This problem may be solved by building the multiresolution model on high-end workstations or supercomputers, or by redesigning this process in order to reduce its memory and processing requirements. For instance, our strategy based on block decomposition has given
good results for regular and curvilinear datasets.
A possible extension of the proposed multiresolution model is to structure data to allow the extraction
of approximated representations whose accuracy is variable through data domain. This is especially useful
for multiresolution visualization, when different accuracy levels must be used inside a single image. In this
context, it may be extremely useful to supply the user with tools to set a "focus region", and render data
according to that selection [30].
Unfortunately, extracting meshes at variable resolution from our current model may cause consistency problems (i.e., possible discontinuities of the field, with consequent "cracks" in the isosurfaces, and aliasing
in DVR). In a previous paper [8], we implemented multiresolution rendering by using two different
meshes, at high and low resolution, respectively: the high resolution mesh is rendered inside a region of
interest, while the other is used outside such a region. Topological inconsistencies that occur between the
two meshes at the boundary of the region of interest were overcome by visualizing cells of both meshes
that cross such a boundary, and using blending on such cells.
A more rigorous solution of such a problem should be given at the level of the multiresolution extractor
module, by providing a mechanism for extracting a mesh whose accuracy varies "smoothly" and consistently
through domain. In recent works [9], [32] we proposed alternative multiresolution data structures
that provide efficient solutions to this problem, and that produce effective results in the two-dimensional
case, e.g. for visualizing terrain models in the context of flight simulators. However, such structures may require a significant storage overhead, which makes them not easily extensible to the three-dimensional case.
We are currently working on the second release of the TAn system. TAn v.2 is based on OpenInventor,
and its GUI is under development using the SGI RapidApp tool. We plan to distribute it in Q2 1998.
The system has been redesigned almost from scratch, in order to improve performance, usability, and visual quality, while maintaining the same architecture described in Figure 9. The Modeling tool and the
Visualization tool have been clearly separated, and related through the multiresolution data structure.
The Modeling tool is designed to manage all kinds of datasets. The simplification algorithm for irregular
datasets is currently under implementation, and it will be completed and tested shortly. Experience in 2D [19], [4], and with similar decimation techniques in 3D [34], suggests that the method should prove at least as effective as the one based on refinement. However, its performance (both in terms of time and data simplification rate) will be compared with that of the refinement algorithm on convex and curvilinear datasets. Based on the results of this comparative evaluation, we will decide whether both algorithms, or only the decimation algorithm, will be incorporated into the TAn v.2 Modeling subcomponent.
The Rendering subcomponent has been substantially improved, in order to provide: faster DVR (preliminary TAn v.2 rendering speed measured under OpenInventor on an SGI Indigo2 XZ R4400 200MHz ws); a new rendering approach which encompasses both exact hybrid rendering and exact management of transfer function discontinuities, based on cell slicing; and a simplified GUI.
In conclusion, our goal is to found the rendering modules of our architecture on a new concept of tetrahedral
graphics, where tetrahedra are treated as atomic graphics primitives, just like triangles, and are efficiently
processed by low-level functions provided by the graphics library, and possibly hardware-assisted. In
this way, we would clearly separate the geometric aspects of volume visualization, which are treated by
application programs/modules, from the purely graphical aspects, which should be standardized, and
treated at library and hardware level.
Acknowledgements
We wish to thank Leila De Floriani, for her participation in the early stages of this project, and for many
useful discussions; this work is part of a continuing collaboration between her group at the University of
Genova, and the authors. Thanks are also due to Donatella Sarti, Pierluigi Sbaiz and Marco Servettini
for their help in implementing the algorithms described in the paper.
--R
An efficient algorithm for terrain simplification.
Surface approximation and geometric partitions.
Pyramidal simplicial complexes.
Multiresolution decimation based on global error.
Multiresolution Modeling and Rendering of Volume
Optimal isosurface extraction from irregular Volume
On the optimization of projective Volume
MagicSphere: an insight tool for 3D data visualization.
Representation and visualization of terrain surfaces at variable resolution.
Simplification envelopes.
Fast Algorithms for Volume
Geometry compression.
An acyclicity theorem for cell complexes in d dimensions.
Automatic extraction of irregular network digital terrain models.
The multilevel finite element method for adaptive mesh optimization and visualization of Volume
A multiscale model for structure-based Volume rendering.
A data reduction scheme for triangulated surfaces.
Data point selection for piecewise trilinear approximation.
Progressive meshes.
Construction of three-dimensional Delaunay triangulations using local transformations
Polygonal mesh simplification with bounded error.
Dynamic maintenance of Delaunay triangulations.
Hierarchical splatting: a progressive refinement algorithm for Volume
Comparison of existing methods for building triangular irregular network models of terrain from grid digital elevation models.
Marching cubes: a high resolution 3D surface construction algorithm.
A theory for multiresolution signal decomposition: The wavelet representation.
Discretized Marching Cubes.
Multiscale Volume
Efficient visualization of large-scale data on hierarchical meshes
Spray rendering.
Computational Geometry: an Introduction.
Variable resolution terrain surfaces.
Generalized unstructured decimation.
On the difficulty of tetrahedralizing 3-dimensional non-convex polyhedra
The Design and Analysis of Spatial Data Structures.
Decimation of triangle meshes.
A polygonal approximation to direct scalar Volume
A multiresolution Framework for Volume
Pursuing interactive visualization of irregular grids.
Octrees for faster isosurface generation.
Interactive splatting of nonrectilinear volumes.
Visibility ordering meshed polyhedra.
Forms Library - a graphical user interface toolkit for X
Multiresolution tetrahedral framework for visualizing Volume
--TR
--CTR
Oliver G. Staadt , Markus H. Gross, Progressive tetrahedralizations, Proceedings of the conference on Visualization '98, p.397-402, October 18-23, 1998, Research Triangle Park, North Carolina, United States
Stefan Gumhold , Stefan Guthe , Wolfgang Straer, Tetrahedral mesh compression with the cut-border machine, Proceedings of the conference on Visualization '99: celebrating ten years, p.51-58, October 1999, San Francisco, California, United States
Kwan-Liu Ma , Thomas W. Crockett, Parallel visualization of large-scale aerodynamics calculations: a case study on the Cray T3E, Proceedings of the 1999 IEEE symposium on Parallel visualization and graphics, p.15-20, October 25-26, 1999, San Francisco, California, United States
Jeremy Meredith , Kwan-Liu Ma, Multiresolution view-dependent splat based volume rendering of large irregular data, Proceedings of the IEEE 2001 symposium on parallel and large-data visualization and graphics, October 22-23, 2001, San Diego, California
B. Sauvage , S. Hahmann , G.-P. Bonneau, Length preserving multiresolution editing of curves, Computing, v.72 n.1-2, p.161-170, April 2004
Shyh-Kuang Ueng , Yan-Jen Su , Chi-Tang Chang, LoD Volume Rendering of FEA Data, Proceedings of the conference on Visualization '04, p.417-424, October 10-15, 2004
Wei Hong , Arie Kaufman, Feature preserved volume simplification, Proceedings of the eighth ACM symposium on Solid modeling and applications, June 16-20, 2003, Seattle, Washington, USA
Christopher S. Co , Bjoern Heckel , Hans Hagen , Bernd Hamann , Kenneth I. Joy, Hierarchical Clustering for Unstructured Volumetric Scalar Fields, Proceedings of the 14th IEEE Visualization 2003 (VIS'03), p.43, October 22-24,
David F. Wiley , Martin Bertram , Bernd Hamann, On a Construction of a Hierarchy of Best Linear Spline Approximations Using a Finite Element Approach, IEEE Transactions on Visualization and Computer Graphics, v.10 n.5, p.548-563, September 2004
Paolo Cignoni , Leila De Floriani , Paola Magillo , Enrico Puppo , Roberto Scopigno, Selective Refinement Queries for Volume Visualization of Unstructured Tetrahedral Meshes, IEEE Transactions on Visualization and Computer Graphics, v.10 n.1, p.29-45, January 2004
P. Cignoni , D. Constanza , C. Montani , C. Rocchini , R. Scopigno, Simplification of Tetrahedral meshes with accurate error evaluation, Proceedings of the conference on Visualization '00, p.85-92, October 2000, Salt Lake City, Utah, United States
Hamish Carr , Jack Snoeyink , Ulrike Axen, Computing contour trees in all dimensions, Computational Geometry: Theory and Applications, v.24 n.2, p.75-94, February
Issac J. Trotts , Bernd Hamann , Kenneth I. Joy, Simplification of Tetrahedral Meshes with Error Bounds, IEEE Transactions on Visualization and Computer Graphics, v.5 n.3, p.224-237, July 1999 | multiresolution representation;volume data visualization;tetrahedral meshes |
614387 | Out-of-Core Streamline Visualization on Large Unstructured Meshes. | AbstractThis paper presents an out-of-core approach for interactive streamline construction on large unstructured tetrahedral meshes containing millions of elements. The out-of-core algorithm uses an octree to partition and restructure the raw data into subsets stored into disk files for fast data retrieval. A memory management policy tailored to the streamline calculations is used such that, during the streamline construction, only a very small amount of data are brought into the main memory on demand. By carefully scheduling computation and data fetching, the overhead of reading data from the disk is significantly reduced and good memory performance results. This out-of-core algorithm makes possible interactive streamline visualization of large unstructured-grid data sets on a single mid-range workstation with relatively low main-memory capacity: 5-15 megabytes. We also demonstrate that this approach is much more efficient than relying on virtual memory and operating system's paging algorithms. | Introduction
Most visualization software tools have been designed for data that can fit into the main memory
of a single workstation. For many scientific applications, data at the desired accuracy overwhelm the memory capacity of the scientist's desktop workstation. This is particularly true for data obtained from three-dimensional aerodynamics calculations, where very
fine unstructured tetrahedral meshes are needed to model arbitrarily complex configurations
such as an airplane. Although adaptive meshing techniques can be applied to reduce the
resolution of the meshes, the resulting meshes may contain tens of millions of tetrahedral
cells.
Rapidly increasing CPU performance and memory capacity are beginning to allow scientists to study data at such resolutions. Many scientists now have access to workstations
with 500 megabytes to one gigabyte of memory which are capable of visualizing millions
of tetrahedral cells. But the same capability also allows scientists to model problems at
even greater resolution. Moreover, not every scientist has constant access to such high-end
workstations.
To solve this problem, previous research has mainly focused on the use of parallel and distributed
computers, and multiresolution data representations. For example, pV3 [3] (parallel Visual3) breaks up the problem domain in space and places each partition on an individual
workstation; streamlines are then calculated in a distributed, interactive manner. In partic-
ular, pV3 can couple visualization calculations with the simulation. This approach is very
attractive to scientists in an open, distributed computing environment and has also been
shown to work well on a distributed-memory parallel computer like the IBM SP2 [4].
Another popular approach is to make use of a supercomputer like a CRAY for visualization
calculations and a high-end graphics workstation for displaying the streamlines. For
streamline visualization, this approach is preferable to the distributed approach since streamline
calculations do not parallelize well. Finally, multiresolution data representations allow
the user to explore the data at a lower resolution according to the computer's performance,
but they are still memory-limited at the highest resolutions.
More recently, visualization software companies [1] as well as corporate research laboratories
[11] have begun to look into this problem and attempted to provide viable solutions
for their software products. While their solutions might target more advanced graphics workstations and more general visualization purposes, ours, an out-of-core approach, is intended to enable interactive streamline visualization of large unstructured-grid data on mid-range
workstations or even PC-class machines with only a moderate amount of main memory.
1.1 Why Out-Of-Core?
Out-of-core processing is not new and in fact has long been used to cope with large data.
Many computational problems in engineering and science involve the solution of an extremely
large linear system that does not fit into a computer's main memory. Using an out-of-core
method is the only solution in the absence of large memory space and parallel computers.
Another example is from database applications; a large database can only be constructed
with an out-of-core approach.
Will an operating system be smart enough to handle memory contention caused by using
brute-force algorithms for data visualization, solving linear systems, or database construc-
tion? The answer is no. Modern operating systems are good at managing multiple jobs
and providing time sharing via paging and swapping. But they cannot make more memory
appear out of nowhere. In particular, when data access is random and irregular, typical in
unstructured data visualization, poor locality of reference leads to thrashing in the virtual memory.
For example, unstructured-grid data generally store coordinates and solution values for
each node (a grid point), and node indices for both triangular faces and tetrahedral cells.
As shown in Figure 1, these node, face and cell data are not stored contiguously in disk
space according to their spatial relationship. During visualization calculations, accessing two
neighboring cells may invoke references farther apart in disk space. Consequently, constant
paging is forced to fetch disk resident data and this memory overload eventually becomes
I/O overload.
While moderate paging is common, desperation swapping is often intolerable. It has
been evident that many commercial and free visualization software packages fail to handle
large data sets on an average workstation. This research has been motivated by our local
scientists' need of an interactive visualization mechanism to study their data at the desirable
resolution, and particle tracing is one of the most important capabilities requested.
1.2 An Out-Of-Core Streamline Visualization Algorithm
Streamlines are the paths of massless particles released in steady flow fields [15]. Plotting
streamlines is a fundamental technique for visualizing vector field data sets generated from
scientific computations [7, 9, 14]. Streamlines can be extended to construct other types of
objects, like streamtubes and streamribbons [2, 5, 14]. A streamline is usually constructed
by using stepwise numerical integration. The integration involves the following steps:
1. Selecting an initial point.
2. Locating the cell containing the point.
3. Interpolating the vector field and calculating a new point by using a numerical integration method.
4. If the termination condition is not met, go to step 2.
Figure 1: A typical data structure for unstructured meshes. Normally, node, face, and cell data are stored as separate chunks. Accessing two cells next to each other in the spatial domain may invoke references to the corresponding data items scattered across the disk space.
Our out-of-core algorithm has been designed based on the following observations:
- Streamline calculations are incremental and local. Each integration step only needs a very small amount of data, one or two tetrahedral cells.
- Calculating multiple streamlines concurrently is cheaper than calculating one streamline after the other. This maximizes locality of reference, which increases memory performance dramatically.
- Data packing is essential to reduce the number of disk reads. Data should be packed in such a way that fetching cells in a small neighborhood can be done with one disk read.
- It is much more efficient to read small chunks of data from disk. Moving a larger chunk of data from disk would likely disrupt the interactivity when a streamline is ready to enter a neighboring chunk.
The resulting algorithm contains two steps: preprocessing and interactive streamline con-
struction. The preprocessing step determines connectivity, calculates additional quantities
such as interpolation functions and coordinates transformation functions, restructures the
raw data, and stores all the information into a more compact octree representation on disk.
This step needs to be done only once. The second step requires a graphical user interface to
facilitate picking of seed points where tracing of streamlines begins. The interactive streamline
construction step does not rely on the operating system to fetch the required data.
Instead, a memory management policy is designed to efficiently utilize a minimum memory
space and fetch data from disk. Streamlines are integrated from octants to octants based on
the principles of preemption and time-sharing. In this way, streamlines can be constructed
interactively by using only a few megabytes of memory space on a mid-range workstation
like a Sun SPARC-20.
The rest of the paper is organized as follows. Section 2 illustrates the data preprocessing
step. The streamline construction algorithm is described in Section 3 and the memory
management policy is explained in Section 4. Tests are performed to compare virtual memory
against our algorithm; to study the performance of the memory management policy, local
disk access and non-local disk access; and to measure average cost and overhead. The test
results are presented and discussed in Section 5, followed by some concluding remarks and
future research directions.
2 Data Preprocessing
Efficient visualization operations on unstructured-grid data can only be obtained with pre-processing
because of the irregularity of the mesh topology. To perform streamline visual-
ization, the two most important operations are:
1. identifying the tetrahedral cell containing the user-specified seed point.
2. computing velocity at locations other than the node points.
Fast cell searching methods like the one presented in [9] need additional data such as cell connectivity
information and coordinate transformation functions. These data, which are also
needed by the integration step, could be computed on the fly during streamline construction,
but the computational cost would then be too high for interactive visualization.
As flow solutions are only defined at node locations, interpolation must be used to compute
flow variable values at other locations. To attain maximum efficiency at run time,
an interpolation function for each cell is also precomputed and stored with the data. For
tetrahedral cells, we use the linear basis function interpolation [6, 14]. In summary, our
preprocessing step first determines cell connectivity; then computes the coordinate transformation
and interpolation functions for each cell; and finally partitions and reorganizes the
raw data with the computed data using an octree structure to facilitate fast data retrieval.
To achieve interactive visualization, we cannot avoid precomputing and storing some of these
data. The additional storage space required actually makes the out-of-core approach even
more attractive. The issues and techniques for calculating transformation and interpolation
functions as well as connectivity information can be found in previous research [13, 14].
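A minimal sketch of the linear basis function interpolation inside a single tetrahedron follows: the query point is written in barycentric coordinates by solving a 3x3 system (Cramer's rule here), and the nodal vectors are blended with the resulting weights. The types and helper names are illustrative and not the paper's implementation, which precomputes the transformation coefficients per cell.

#include <array>

struct Vec3 { double x, y, z; };

static double det3(const Vec3& u, const Vec3& v, const Vec3& w) {   // triple product u . (v x w)
    return u.x * (v.y * w.z - v.z * w.y)
         - u.y * (v.x * w.z - v.z * w.x)
         + u.z * (v.x * w.y - v.y * w.x);
}

static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// Barycentric coordinates (w0..w3) of p with respect to tetrahedron (v0,v1,v2,v3).
// p lies inside the cell iff all four weights are in [0, 1].
std::array<double, 4> barycentric(const Vec3 v[4], const Vec3& p) {
    const Vec3 e1 = sub(v[1], v[0]), e2 = sub(v[2], v[0]), e3 = sub(v[3], v[0]);
    const Vec3 ep = sub(p, v[0]);
    const double vol = det3(e1, e2, e3);          // 6 * signed volume of the cell
    const double w1 = det3(ep, e2, e3) / vol;
    const double w2 = det3(e1, ep, e3) / vol;
    const double w3 = det3(e1, e2, ep) / vol;
    return {1.0 - w1 - w2 - w3, w1, w2, w3};
}

// Linear interpolation of the nodal velocities with the barycentric weights.
Vec3 interpolateVelocity(const Vec3 v[4], const Vec3 vel[4], const Vec3& p) {
    const std::array<double, 4> w = barycentric(v, p);
    Vec3 r{0.0, 0.0, 0.0};
    for (int i = 0; i < 4; ++i) {
        r.x += w[i] * vel[i].x;
        r.y += w[i] * vel[i].y;
        r.z += w[i] * vel[i].z;
    }
    return r;
}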
2.1 Data Partitioning
There are two approaches for partitioning unstructured data sets. The first approach is to
divide cells into totally disjoint groups. Since the data sets are unstructured, the geometric
shapes of the resulting groups are generally irregular. The advantage of this approach is
that no data redundancy is introduced. However, one of the disadvantages is that the
spatial relationship between groups is difficult to determine. Specifically, it becomes difficult
to verify whether two groups are adjacent, and to identify the group where a specified point
is located.
The second approach is to partition the data set by superimposing a regular framework
on it. A subset is formed by grouping the cells which intersect or are contained within a
region of the framework. The framework could be a regular 3-D mesh, a k-way tree or an
octree [10]. Since the data sets are unstructured, a cell may intersect with several regions of
a regular framework and thus data redundancy is inevitable with this approach.
Figure 2: Octree data partitioning: the partition of the physical domain (left) and the corresponding octree (right).
A major advantage of the second approach is that the spatial information of subsets can
be easily obtained. For example, if an octree is employed as the framework of data partition,
the octant containing the seed point can be identified by searching the octree from the root
to the leaves within O(log N) steps, where N is the number of the octants. The neighbors of
an octant can also be found by applying this technique and one of these neighboring octants
contains the next point on the current streamline.
In our out-of-core setting, octrees are used as the framework for the data partitioning
since unstructured grids are highly adaptive in both shape and resolution. Octrees have been
widely employed by many computer graphics and visualization applications. It allows us to
refine the data partitioning in the regions where the grids are dense such that subsets are
relatively equal in data size (i.e. in terms of the number of tetrahedral cells). In Figure 2, a
simple example of octree is shown. Note if the framework is a regular 3-D mesh, the above
searches may be completed in constant time. But a very high resolution regular mesh must
be used to accommodate the original mesh's irregularity.
Based on an octree structure, the data partitioning is carried out in a top-down manner.
First, the whole data set is considered as one octant. Then this octant is decomposed into
eight child octants by using three cutting planes perpendicular to the x, y, and z axes. If the
number of tetrahedral cells in a child octant exceeds a pre-defined limit, the maximum octant
size, this child octant is partitioned further. The above procedure is performed recursively
until all octants contain fewer cells than the maximum octant size. The cells of an octant
are stored in a file in our current implementation. This enables very straightforward access
to an octant, though a large number of files may be created when the maximum octant size
is small. An alternative way is to store all octants in a single file. This method must employ
an indexing algorithm if the sizes of octants are different and the number of octants is large.
Each octant stores the bounding box of the octant, the number of cells in the octant, the
center of the octant, and the ID of the file used for storing cells in that octant, as shown in Figure 3. The center of the octant is where the three cutting planes intersect. The position of this point is set to be the arithmetic average of the centers of all cells in the octant, where the center of a cell is defined as the arithmetic average of its vertices. This choice keeps the size of the eight child octants at each level of the tree about the same.
Figure 3: Data structure of an octree node: the bounding box, the center of the octant (x, y, z), the number of cells, the file ID, and pointers to the eight child octants (child0 ... child7).
The octree created represents the structure of the data partitions and is stored in a file
after the partitioning is completed. This file is read in first at the beginning of a streamline
visualization session. A typical octree requires under one megabyte of storage space. The
structure of the octree nodes allows an efficient, systematic way of retrieving the needed data
for calculating the streamlines.
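A compact sketch of the octree node just described and of the point-location descent used to find the octant containing a seed point; the field names and the child indexing by the octant center are assumptions of this illustration, not the paper's code.

#include <array>

struct Point { double x, y, z; };

struct BoundingBox {
    Point lo, hi;
    bool contains(const Point& p) const {
        return p.x >= lo.x && p.x <= hi.x &&
               p.y >= lo.y && p.y <= hi.y &&
               p.z >= lo.z && p.z <= hi.z;
    }
};

// One node of the partitioning octree (cf. Figure 3): leaves reference the
// disk file that stores the cells of the octant.
struct OctreeNode {
    BoundingBox box;
    Point       center;     // intersection of the three cutting planes
    int         numCells;   // number of tetrahedral cells in the octant
    int         fileID;     // disk file holding the octant's cells (leaf only)
    std::array<OctreeNode*, 8> child{};  // null pointers for leaves

    bool isLeaf() const {
        for (OctreeNode* c : child) if (c) return false;
        return true;
    }
};

// Descend from the root to the leaf octant containing p (O(log N) steps).
const OctreeNode* findOctant(const OctreeNode* node, const Point& p) {
    if (!node || !node->box.contains(p)) return nullptr;
    while (!node->isLeaf()) {
        // Pick the child determined by p's position relative to the center.
        int i = (p.x > node->center.x ? 1 : 0)
              | (p.y > node->center.y ? 2 : 0)
              | (p.z > node->center.z ? 4 : 0);
        if (!node->child[i]) break;  // partially refined node: stop at this level
        node = node->child[i];
    }
    return node;
}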
2.2 Out-Of-Core Data Preprocessing
For the sizes of data we consider, the data preprocessing step must also be performed in an
out-of-core manner. This is done by allocating eight buffers in memory and opening eight
disk files to store cells read from the input data file. At the top level, these eight buffers
and disk files correspond to the root's eight child octants and their bounding regions. Then
cells are read into memory incrementally and a cell is assigned to a buffer if it intersects
the corresponding bounding region. As mentioned previously, a cell may be assigned to
more than one buffer. Whenever a buffer is full, the cells in the buffers are dumped to the
corresponding disk file. After all cells are processed, each octant size is examined. If an
octant has more cells than the maximum octant size, another round of partitioning proceeds.
Eight more buffers and disk files are created.
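The top-level partitioning pass might be organized as in the following sketch; the cell record layout, the bounding-box intersection test, and the raw read routine are simplified assumptions standing in for the real dataset format.

#include <cstddef>
#include <cstdio>
#include <vector>

// Illustrative cell record: node indices plus a precomputed bounding box
// (the raw input format of a real dataset differs).
struct Cell { int nodes[4]; double lo[3], hi[3]; };
struct Region { double lo[3], hi[3]; };

static bool intersects(const Cell& c, const Region& r) {   // AABB overlap test
    for (int k = 0; k < 3; ++k)
        if (c.hi[k] < r.lo[k] || c.lo[k] > r.hi[k]) return false;
    return true;
}

static bool readNextCell(std::FILE* in, Cell& c) {          // incremental read
    return std::fread(&c, sizeof(Cell), 1, in) == 1;
}

// One top-level pass: route each cell to every child region it intersects,
// dumping a buffer to its disk file whenever it fills up.
void partitionTopLevel(std::FILE* input, const Region region[8],
                       std::FILE* outFile[8], std::size_t bufferCapacity) {
    std::vector<Cell> buffer[8];
    Cell c;
    while (readNextCell(input, c)) {
        for (int i = 0; i < 8; ++i) {
            if (!intersects(c, region[i])) continue;
            buffer[i].push_back(c);                  // a cell may go to several buffers
            if (buffer[i].size() == bufferCapacity) {
                std::fwrite(buffer[i].data(), sizeof(Cell), buffer[i].size(), outFile[i]);
                buffer[i].clear();
            }
        }
    }
    for (int i = 0; i < 8; ++i)                      // flush the remainders
        std::fwrite(buffer[i].data(), sizeof(Cell), buffer[i].size(), outFile[i]);
}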
After the octree is completely built, the next step is to find cell connectivities and calculate
the coefficients of the coordinate transformation as well as interpolation functions.
One octant file is processed at a time. Note that the maximum octant size determines the
number of octants generated. A larger octant size implies less data redundancy and thus
less disk space used. But the problem with keeping large octants in the main memory is
that it is then harder to achieve consistent performance. Remember that for streamline
visualization moving many smaller data chunks is generally less expensive than moving a
few larger pieces since normally only a small portion of each data chunk is accessed by the
streamline calculation. Moreover, a higher hit rate would be achieved with many smaller
octants in core. On the other hand, if the maximum octant size is relatively small, a larger
number of octants are generated. The preprocessing step would become more expensive.
The data redundancy becomes higher, and more disk space is required to store the data.
However, more octants can be resident in the main memory during streamline constructions
to attain more consistent performance. Test results will be provided in Section 5 to show how the selection of the maximum octant size influences the performance of the out-of-core method.
3 Streamline Construction
Two operations are repeatedly performed during streamline integration. The first one is to
compute new positions of streamlines and the second one is to move the required data from
disk into main memory if it is not already there. Compared with CPU speed and memory
access time, disk I/O is relatively slow. In order to narrow the gap, the computation and the
data-fetching have to be carefully scheduled. Furthermore, the memory space is a limited
resource. It is important to fully utilize the memory space to store more information for
calculation such that computation can be carried out with minimum interruption.
In order to achieve these goals, the out-of-core streamline construction algorithm is based
on two fundamental operating system concepts: preemption and time-sharing [12]. Based on
the availability of the data, a streamline under construction may be in any of the following
three states: waiting, ready, or tracing. When the needed octant is in the main memory, the
streamline is in the ready state, and it can enter the tracing state; that is, its next positions
can be calculated. Otherwise, the streamline is in the waiting state, waiting for the needed
octant to be brought in from the disk. When the memory space occupied by an octant is no
longer involved in computing new streamline positions, it can be released and reused.
In short, the out-of-core program following the preprocessing stage consists of the following
steps:
- Initialization:
  - Read the octree created in the data partitioning step from the disk.
  - Allocate memory space for holding octants.
  - Create data structures needed in the streamline construction.
- Construction of the streamlines:
1. Get the initial positions selected by the user.
2. Identify the octants where the streamlines enter.
3. Fetch octants into the main memory.
4. Integrate all streamlines with their octants in the main memory until all of them leave the octants.
5. Go back to step 2, if a termination condition for any of the streamlines is not met.
Figure 4: An octant table. Each entry holds a used/free flag, the octant ID, the memory block size, and a pointer to the octant.
3.1 Initialization
The initialization step first reads the octree from the disk and creates the following data
structures:
- an octant table to keep track of the octants in the main memory,
- three queues for scheduling computations.
The octant table is used to store information about octants which are resident in main
memory. One octant is associated with each entry in the octant table. Each entry contains
four fields. Figure 4 shows a table of N entries. The first field is a flag indicating whether
this entry is allocated to an octant or not. The second field contains the ID of the octant.
The third field stores the size of the memory space allocated to the octant. The last one is
a pointer to the starting position of the octant.
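A minimal sketch of the octant table of Figure 4; the field names are illustrative.

#include <cstddef>
#include <vector>

// One entry of the octant table (cf. Figure 4).
struct OctantTableEntry {
    bool        used      = false;    // flag: entry allocated to an octant or free
    int         octantID  = -1;       // ID of the resident octant
    std::size_t blockSize = 0;        // size of the memory block holding it
    void*       octant    = nullptr;  // pointer to the octant's data in memory
};

// Look up whether an octant is already resident; returns its entry index or -1.
int findResident(const std::vector<OctantTableEntry>& table, int octantID) {
    for (std::size_t i = 0; i < table.size(); ++i)
        if (table[i].used && table[i].octantID == octantID)
            return static_cast<int>(i);
    return -1;
}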
Three queues are created to keep data about the streamlines under construction. These
queues are named the waiting queue, the ready queue, and the finished queue. A streamline
is kept in the waiting queue if it enters an octant which is not resident in the main memory.
Otherwise, it is in the ready queue. Once the streamline is completely integrated, it is stored
in the finished queue. These three queues and the octant table are employed to schedule
streamline construction and octant-fetching such that more streamlines can be processed at
the same time by using less memory space.
Figure 5: Data structures of a streamline. A streamline object stores the octant ID, the number of points, and a list of streamline segments whose point records hold the position (x, y, z), the velocity, and other data; streamline objects are linked from the head to the tail of a queue.
3.2 Construction
Given an initial seed point, a streamline object is created. The streamline object stores a
list of streamline positions, the number of points in the streamline and the ID of the octant
containing the most recently integrated position of the streamline. For each streamline point,
the coordinates, the velocity magnitude, the angular rotation rate of the flow, and the local
flow expansion rate [14] are recorded. The data structure of a streamline object is depicted in Figure 5. Initially, the ID's of the octants which contain the initial positions are identified
and entered into the streamline objects, and all streamline objects are kept in the waiting
queue.
In the next step of the streamline construction, the streamline objects in the waiting
queue are examined one by one. As long as there is still space in the pre-allocated main
memory, the octant identified by a streamline object is read into the octant table. Once the
octant of a streamline object has been read, the streamline object is moved from the waiting
queue to the ready queue.
Subsequently, the streamline objects in the ready queue are processed one by one. The
fourth order Runge-Kutta method [8] is used to calculate new streamline points. At the
same time, the angular rotational rate and the local flow expansion rate are computed if
streamribbons and streamtubes are to be constructed [13]. When a streamline leaves an
octant, the octant containing the new position of the streamline is identified. Then the
octant table is searched to check whether this octant is already in the main memory or not.
If it is, the data cell containing the current streamline position is found, and the streamline
construction continues. If it is not, the streamline object is moved to the end of the waiting
queue, and another streamline object is selected from the ready queue for processing. If the streamline reaches a physical domain boundary or its current time step exceeds a pre-defined limit, the streamline object is deleted from the ready queue and stored in the finished queue.
Figure 6: Object scheduling: streamline objects migrate between the waiting queue, the ready queue (which feeds the streamline construction on the CPU), and the finished queue.
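For reference, one fourth-order Runge-Kutta step for the steady system dx/dt = v(x) looks as follows; the velocity callback stands in for the per-cell linear interpolation and is an assumption of this sketch.

#include <functional>

struct Vec3 { double x, y, z; };

static Vec3 add(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(double s, const Vec3& a)      { return {s * a.x, s * a.y, s * a.z}; }

// One RK4 step of size h for the autonomous system dx/dt = v(x).
Vec3 rk4Step(const Vec3& x, double h,
             const std::function<Vec3(const Vec3&)>& v) {
    const Vec3 k1 = v(x);
    const Vec3 k2 = v(add(x, mul(0.5 * h, k1)));
    const Vec3 k3 = v(add(x, mul(0.5 * h, k2)));
    const Vec3 k4 = v(add(x, mul(h, k3)));
    // x_{n+1} = x_n + (h/6) * (k1 + 2*k2 + 2*k3 + k4)
    return add(x, mul(h / 6.0,
               add(add(k1, mul(2.0, k2)), add(mul(2.0, k3), k4))));
}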
Once the ready queue is empty, the octants in the main memory are no longer involved
in the streamline construction. The memory space occupied by them is released to a free
space pool. Their octant table entries are marked as free. The waiting queue is searched,
and a new set of octants is fetched into the main memory. Then another round of streamline
construction begins. The streamline integration is completed when all streamline objects are
in the finished queue. An example illustrating the migration of streamline objects during
the streamline construction is depicted in Figure 6.
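Putting the pieces together, the scheduling of Figure 6 can be sketched as below. The OctantCache and the integration kernel are toy stand-ins (the real system uses the octant table, the memory block pool, and disk reads); the loop assumes, as the paper does, that the memory pool can always hold at least one octant.

#include <cstddef>
#include <deque>
#include <set>

struct Streamline { int id = 0; int neededOctant = 0; bool done = false; };

// Toy stand-in for the octant table / memory pool: holds up to `capacity` octants.
struct OctantCache {
    std::size_t capacity = 6;
    std::set<int> resident;
    bool isResident(int oct) const { return resident.count(oct) != 0; }
    bool fetch(int oct) {                     // placeholder for the disk read
        if (resident.size() >= capacity) return false;
        resident.insert(oct);
        return true;
    }
    void releaseAll() { resident.clear(); }   // return all blocks to the free pool
};

// Placeholder integration kernel: a real one would run RK4 steps until the
// streamline terminates or needs an octant that is not resident, updating
// s.neededOctant or s.done accordingly.
void traceWhileResident(Streamline& s, OctantCache&) { s.done = true; }

void constructStreamlines(std::deque<Streamline> waiting, OctantCache cache) {
    std::deque<Streamline> ready, finished;
    while (!waiting.empty() || !ready.empty()) {
        // Move waiting streamlines whose octant can be loaded into the ready queue.
        for (std::size_t n = waiting.size(); n-- > 0;) {
            Streamline s = waiting.front(); waiting.pop_front();
            if (cache.isResident(s.neededOctant) || cache.fetch(s.neededOctant))
                ready.push_back(s);
            else
                waiting.push_back(s);         // no memory block available yet
        }
        // Trace every ready streamline until it finishes or leaves resident octants.
        while (!ready.empty()) {
            Streamline s = ready.front(); ready.pop_front();
            traceWhileResident(s, cache);
            if (s.done)                                 finished.push_back(s);
            else if (cache.isResident(s.neededOctant))  ready.push_back(s);
            else                                        waiting.push_back(s);
        }
        cache.releaseAll();                   // octants are no longer needed this round
    }
}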
4 Memory Management
The octants produced in the preprocessing stage may have different sizes. It is therefore
unwise to use a fixed size for memory blocks, each of which holds an octant. For efficient
utilization of the memory space, a memory management policy is designed to support the
out-of-core streamline visualization program. First, the size of the memory space dedicated
to the out-of-core program is selected by the user. Presently, this size is measured in number
of cells, and it should be greater than the maximum octant size. This memory space is
decomposed into memory blocks of different sizes. The size of the memory blocks created is
determined by two other parameters: the maximum octant size and a parameter called the
block size level. These two values can be either controlled by the user or set automatically
based on information obtained from the preprocessing step. The block size level represents
the number of different block sizes. For example, if its value is one, then all blocks are of
the same size, which is equal to the maximum octant size. If it is set to k, the sizes of the blocks are s, s/2, s/4, ..., s/2^(k-1), where s is the maximum octant size and k is the block size level.
The blocks are created in a descending order of sizes; that is, the largest block is generated
first, then a block of the second largest size is created, and so on. During the creation process,
if the remaining memory space is too small for creating a block of a particular size, this size is skipped, and a block of the next smaller size is to be created. However, if the remaining memory space is smaller than the smallest block size, then the process stops. If the smallest block is created, then we re-run the process as long as the remaining memory space is large enough for creating any block. All memory blocks created are then put into a free space pool. In this pool, a table is created for bookkeeping. The number of entries in this table equals the block size level. In each entry, a list of blocks of the same size is maintained. An example of a free space pool is shown in Figure 7, in which the block size level is three.
Figure 7: A free memory space pool: a free space table whose entries point to lists of free memory blocks of different sizes (e.g., 6 Mbytes and 3 Mbytes).
Before an octant is fetched into the main memory, the size of the octant is retrieved from
the octree. The free space pool is searched to find a memory block that is large enough to
hold the octant. This searching starts at the list of the smallest blocks such that a best-fit
block may be found. Once a block is assigned to an octant, it is removed from the free space
pool. When this octant is no longer involved in computations, the memory block is released
to the free space pool.
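A sketch of the block creation and the best-fit lookup just described, under the halving-sizes assumption used earlier; keying the free space pool by block size, smallest first, makes the search return a best fit naturally. Sizes are in the same units used for octants (the paper measures them in cells; bytes work equally well).

#include <algorithm>
#include <cstddef>
#include <cstdlib>
#include <map>
#include <vector>

// Free space pool: for each block size, a list of free blocks of that size.
struct FreePool {
    std::map<std::size_t, std::vector<void*>> blocks;  // ordered by size, smallest first

    // Carve a memory space of `total` units into blocks of sizes
    // s, s/2, ..., s/2^(k-1), creating larger blocks first.
    void create(std::size_t total, std::size_t s, int k) {
        const std::size_t smallest =
            std::max<std::size_t>(std::size_t(1), s >> (k - 1));
        std::size_t remaining = total;
        std::size_t size = s;
        while (remaining >= smallest) {
            if (remaining >= size) {
                blocks[size].push_back(std::malloc(size));
                remaining -= size;
            }
            size = (size > smallest) ? size / 2 : s;   // cycle through the sizes
        }
    }

    // Best fit: start from the smallest size that can hold the octant.
    void* acquire(std::size_t octantSize, std::size_t* blockSize) {
        for (auto it = blocks.lower_bound(octantSize); it != blocks.end(); ++it) {
            if (it->second.empty()) continue;
            void* p = it->second.back();
            it->second.pop_back();
            *blockSize = it->first;
            return p;
        }
        return nullptr;                                // caller must wait for a release
    }

    void release(void* p, std::size_t blockSize) { blocks[blockSize].push_back(p); }
};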
The block size level is an important parameter determining memory utilization efficiency.
It cannot be too small or too large; while the former results in a few large blocks which are
space inefficient to store smaller octants and would cause excessive octant fetching, the latter
results in many smaller blocks which might be too small and therefore never used. Some
tests have been run to study the effects of this parameter upon the out-of-core program. The
results will be presented in the next section.
5 Test Results
We tested the out-of-core visualization algorithm on an IBM RS6000 workstation with 128
megabytes of main memory as well as a Sun SPARC-20 workstation with 64 megabytes of
main memory. Note that our algorithms only need about 5-20 megabytes out of the 64/128
megabytes to achieve interactive visualization. The IBM workstation with larger memory
space allows us to compare the performance of the out-of-core method with programs relying
on virtual memory management. In addition, three sets of tests are conducted on the Sun
workstation. The first set of tests are used to reveal how the maximum octant size, the
memory space size and the block size level affect the overall performance of the out-of-core
program. In the second set of tests, the overhead produced by fetching data and scheduling
computations is recorded and analyzed. The third set studies the effect caused by storing
data in a non-local disk.
In all tests, wall clock time is used to measure the cost. All tests were run in batch mode
and rendering and display time is not included. Currently, rendering is done in software but
the fast streamline construction rate and incremental software rendering make interactive
viewing of streamline formation possible.
5.1 The Out-Of-Core Method versus Virtual Memory
In order to reveal the strength of the out-of-core method, two streamline construction methods
that rely on virtual memory are implemented for testing. All three programs use the
same numerical method to integrate streamlines. The two virtual-memory-based methods
attempt to store as much data as possible in the main memory. In the first program, a
cell record contains four vertex indices and four neighboring cell indices. The size of a cell
record is 32 bytes. The four neighboring cell indices of each cell record are calculated in
a preprocessing stage, though the coefficients of the coordinate transformation function are
computed on the fly during streamline construction. In the second program, a cell record
stores four vertex indices, four neighboring cell indices and a 3 x 4 matrix in which the
coordinate transformation function coefficients are stored. That is, eight integers and 12
floats are kept in a cell record so the size of a cell record is 80 bytes. The neighboring
cell indices and the coordinate transformation function coefficients are pre-computed in the
preprocessing stage.
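The two cell record layouts can be written down directly; the 32-byte and 80-byte figures assume 4-byte int and float and no structure padding, which holds for these homogeneous records on typical workstations.

#include <cstdint>

// Cell record of the first VM-based program: connectivity only; the
// coordinate transformation coefficients are recomputed on the fly.
struct CellRecordSmall {
    std::int32_t vertex[4];     // indices of the four nodes
    std::int32_t neighbor[4];   // indices of the four face-adjacent cells
};                              // 8 * 4 bytes = 32 bytes

// Cell record of the second VM-based program and of the out-of-core program:
// the 3 x 4 coordinate transformation matrix is precomputed and stored.
struct CellRecordFull {
    std::int32_t vertex[4];
    std::int32_t neighbor[4];
    float        transform[3][4];
};                              // 32 + 12 * 4 bytes = 80 bytes

static_assert(sizeof(CellRecordSmall) == 32, "assumes 4-byte int and no padding");
static_assert(sizeof(CellRecordFull) == 80, "assumes 4-byte int/float and no padding");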
In the out-of-core program, each cell record holds the same information as in the second program. The maximum octant size is set to 20,000 cells, and a memory space equal to six times the maximum octant size is dedicated to the program. The block size level is
set to three.
The data sets of the tests are artificially created by dividing a cube into one, two, three,
four, and five million tetrahedral cells. For all these sets, the memory requirement for storing
streamlines, vertices, and cell records is larger than the user space of the main memory.
For example, four million tetrahedral cells would require at least 128 megabytes while the
dedicated user space is under ten megabytes.
To generate the artificial data sets, the three components of the vector field, u(x, y, z), v(x, y, z), and w(x, y, z), are defined by an analytic formula.
Figure 8: Streamline visualization of the artificial data set.
Table 1: VM-Based Method 1 (columns: data size, initiate, construct, total).
Streamline visualization of this data set is shown in Figure 8. Data sets are stored on disk
in binary format. For each data set, one hundred streamlines are constructed by using the
three programs. The maximum number of time steps for each streamline is set to 5,000.
An IBM RS6000 Model 560 workstation was used for the tests. This machine has 128
megabytes of main memory and 512 megabytes of paging space. Two costs are measured by
using wall clock time in seconds. The first one is the initialization cost which is mainly the
time to read in the test data. The second one is the cost of constructing 100 streamlines.
The total cost is then calculated by adding these two. The test results are summarized in Figure 9, in which a logarithmic scale is used for the y-axis so that very large and small numbers can be plotted in the same window. The time breakdown of each case is listed in Tables 1, 2, and 3.
Figure 9: Out-of-core versus VM-based methods: total time as a function of data size (millions of tetrahedral cells), for the in-core, in-core precomputed, and out-of-core precomputed methods.
Table 2: VM-Based Method 2 (columns: data size, initiate, construct, total).
Table 3: Out-Of-Core Method (columns: data size, initiate, construct, total).
Compared with the two virtual-memory-based programs, the performance of the out-of-core program is up to almost two orders of magnitude better. Its initialization cost
grows more slowly with the data size. Its streamline construction cost is small and about
constant. The virtual-memory-based programs try to keep as much data as possible in the main memory during streamline construction. Our test results show a lot of time devoted to
allocating memory space and reading in data sets. When the data size is equal to two
million cells, the initialization cost grows dramatically, since the size of required memory
space already exceeds the size of the physical main memory space. The operating system
has to swap out data to the paging space to create memory space for the input data. This
situation becomes worse when the data size is increased to three million cells. The second
program cannot handle the data set with four million cells. The operating system signals a system error and quits the program before the initialization stage is completed. Therefore, no results for this case are shown in Table 2.
The first virtual-memory-based program requires less memory space so it can handle
larger data sets. However, the coefficients of the coordinate transformation functions are
computed on the fly during streamline construction. Consequently the cost of constructing
streamlines for this program is very high compared with the other two programs. The
initialization costs of this program are tolerable when the data size is under three million
cells. Once the data size reaches four million cells, the initialization cost becomes too high.
The total cost is equal to 49 minutes and 47 seconds in this case. For the data set with five million cells, this program needs a total of 57 minutes and 14 seconds to construct 100
streamlines, while the out-of-core program consumes less than 41 seconds to perform the
same operation. Therefore, the performance of the first and the second programs is not
acceptable for interactive visualization.
From the above test results, two important findings are: First, the virtual memory system
of the operating system is not very helpful for this visualization application. Second, the
speed of constructing streamlines is severely degraded if the coefficients of the coordinate
transformation functions are not pre-computed. To achieve interactive visualization, there is no doubt that we must trade space for time; in this case, we use less expensive disk space and employ a memory management policy tailored to the streamline calculations.
5.2 The Maximum Octant Size, Size of the Memory Space, and
The Block Size Level
Three parameters influence the performance of the out-of-core program. They are the maximum
octant size, the size of the memory space, and the block size level. Tests are conducted
on the Sun workstation to explore how these three parameters affect the performance of the
out-of-core program and to find an optimal combination of the three parameters. In the
tests, the maximum octant size is set to 10,000, 20,000, 30,000, and 40,000 cells respectively,
where each cell is represented by 80 bytes of information as explained in Section 5.1. The
sizes of memory space are set to 4, 6, 8, and 10 times the maximum octant size. The block
size level varies from 1 to 8. The tests are performed as follows:
For each value of the maximum octant size:
• Subdivide the data set based on the maximum octant size.
• For each memory space size:
  - Create the memory space.
  - For each block size level:
    * Create the memory blocks based on the block size level.
    * Construct 100 streamlines.
    * Measure and report the cost.
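To make the memory budgets used in these tests concrete, the sizes follow directly from the 80-byte cell records (a quick check, using the decimal megabyte convention the paper appears to use):
    10,000 cells \times 80 bytes = 0.8 MB, so the memory space settings {4, 6, 8, 10} \times 0.8 MB are 3.2, 4.8, 6.4, and 8.0 MB;
for a maximum octant size of 40,000 cells (3.2 MB per octant), they range from 12.8 MB to 32 MB.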
For convenience, a smaller data set of 1.78 million cells is used in these tests. This data
set comes from a wind tunnel simulation. Visualization results are shown in Figure 10.
Note that the streamtubes are software rendered. The computational cost for the data
partitioning and the preprocessing together is about 20 minutes on the same workstation.
Note that this cost depends on the maximum octant size. The data is stored in a local disk
of the workstation. The initial points of the 100 streamlines are randomly selected. The
maximum number of time steps of a streamline is 5,000.
The test results are shown in Figures 11, 12, 13 and 14. The costs of constructing
streamlines by using the same maximum octant size are shown in each individual figure.
The curves plotted in each figure represent the costs of using different sizes of main memory
space while varying the block size level.
By comparing the test results, we can conclude that (for this dataset) the maximum
octant size is the most essential parameter in the out-of-core program. The performance
of the program is significantly improved when this parameter is reduced from 30,000 cells
to 20,000 cells. The out-of-core program favors smaller octant sizes. The costs clearly decline when the memory space is increased from 4 to 6 times the maximum octant size, no matter what the maximum octant size is. However, further increasing the memory space does not improve the performance. If the memory space is just 4 or 6 times the maximum octant size, the out-of-core program performs better when the block size level
increases. No significant improvement can be obtained by changing the block size level if
the memory space is larger. With our current setting, the best performance is thus obtained
when the maximum octant size is set to 10,000 cells and the memory space used is 6.4
megabytes, which is equivalent to 6 times the maximum octant size, and the block size level
is 3. The cost of constructing 100 streamlines is below 25 seconds.
Figure 10: Streamline visualization of the wind-tunnel data set.
In summary, the out-of-core program performs better when the maximum octant size is
smaller, the allocated memory space is larger, and the block size level is higher. Nevertheless,
the improvement made by changing these three parameters has its limits. The reasons can
be described as follows. In streamline construction, only a small portion of cells are visited
by the streamlines in an octant, or even in the whole data set. The performance cannot be improved by just loading a larger number of cells into the main memory. Instead, it is
improved by loading those cells which are actually used in the integration of a streamline. By
using smaller maximum octant size, higher block size level and larger memory space, more
octants can stay in the main memory and the percentage of cells which are directly involved
in the integration becomes higher. Then more computation can be accomplished between
two consecutive octant fetches. The overhead of data fetching is reduced. However, if
too many octants are read, the overhead of octant-fetching becomes high. The increase in
overhead then cancels out the gain from more local computations, and the performance will
reach its limit.
5.3 Average Cost and Overhead
Another set of tests is conducted on the Sun workstation to measure the overhead caused by data fetching and streamline scheduling and to study how the overhead affects the behavior of the program. A data set of 4.8 million tetrahedral cells is used.
Figure 11: Timing of Program, Maximum Octant Size = 10,000 Cells (0.8 MB); time versus block size level for different memory space sizes (e.g., 3.2 MB = 4 times, 6.4 MB = 8 times).
Figure 12: Timing of Program, Maximum Octant Size = 20,000 Cells (1.6 MB); time versus block size level for different memory space sizes.
Figure 13: Timing of Program, Maximum Octant Size = 30,000 Cells (2.4 MB); time versus block size level for different memory space sizes.
Figure 14: Timing of Program, Maximum Octant Size = 40,000 Cells (3.2 MB); time versus block size level for different memory space sizes.
Figure 15: Streamline visualization of the airplane data set.
This data set is obtained from a computational fluid dynamics simulation of the flow passing around an
airplane body. Visualization results are shown in Figure 15. Note that only a portion of the
airplane is modeled. About 407 megabytes of memory are required to store all the vertex
and the cell records of this data set. The maximum octant size, the memory space size and
the block size level are fixed in the tests. The maximum octant size is set to 40,000 cells.
The memory space size is four times the maximum octant size, and the block size level is
three.
Tests are performed for calculating 10 to 100 streamlines. Again, the maximum number of
time steps of a streamline is limited to 5,000. Both the total cost and the overhead are
measured in each test. The total cost includes the overhead and the cost of integrating the
streamlines. Test results are presented in Figure 16. The overhead includes the costs of
searching and fetching octants, selecting memory blocks, and scheduling streamline objects.
The total cost and overhead are divided by the total number of time steps used in the
streamline construction to obtain the average cost and the average overhead for a single step
computation. The average costs for a single step computation are shown in Figure 17.
Note that the average cost fluctuates in the test cases. This is because the seed points
are randomly selected and therefore the length of each streamline varies. Also note that the
average cost does not decrease much when more streamlines are constructed concurrently.
The increasing overhead due to streamline scheduling and octant searching cancels out most of the benefit from octant sharing.
Figure 16: Total Cost of Constructing Streamlines; execution time (wall clock) versus number of streamlines.
Finally, the average overhead is divided by the average cost to produce the percentage
of cost due to the overhead. The percentages of cost due to overhead for a single step
calculation are depicted in Figure 18. According to the test results, the overhead can be as
high as 40 percent of the overall cost.
We also measure the difference in cost of constructing one streamline at a time and multiple
streamlines. In one test, one hundred streamlines are constructed one by one. Thus,
no streamline scheduling or octant searching is required, and memory allocation is trivial
since only one octant and one streamline are resident in the main memory at any time.
The average cost of tracing a streamline is about 0.76 seconds under these circumstances.
On the other hand, the other test reveals that the average cost of constructing 100 streamlines concurrently is about 0.56 seconds per streamline. We observe a 26.3% improvement in performance
due to octant sharing, even though overhead is introduced in the multi-streamline
execution.
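As a quick check, the reported improvement follows directly from the two measured averages:
    (0.76 - 0.56) / 0.76 \approx 0.263,
i.e., constructing the streamlines concurrently reduces the average cost per streamline by roughly 26.3% compared with tracing them one at a time.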
5.4 Local Disk versus Non-Local Disk
In our previous tests, all data is stored in a local disk of the workstations. In some environ-
ments, the data may be stored in a non-local disk of a file-server, which is connected with
the workstations via a network. In order to explore the effect of storing data in the non-local
Figure 17: Average Cost of a Single Step Computation; execution time (microseconds) versus number of streamlines.
Figure 18: Average Overhead of a Single Step Computation; percentage of overhead at one step versus number of streamlines.
Figure 19: Timing of Constructing Streamlines by Using Non-Local Disk (maximum octant size = 10,000 cells, 0.8 MB); time versus block size level for different memory space sizes.
disk, we set up another set of tests. We repeat the tests described in Section 5.2 by using the same data set; however, the data is stored on a non-local disk. Two sets of test results are presented in Figures 19 and 20, and the penalty of using a non-local disk is apparent.
The latency of the network significantly affects the overall performance. The percentage
of the cost resulting from the network latency is about 59 to 65% when the maximum octant
size is 10,000 cells. It increases to 68 to 76% when the maximum octant size is 40,000 cells.
The total cost is increased by at least 100%. Since the network is shared by several computers,
the program performance is very unstable. In general, the program performs better when the maximum octant size is smaller. This is similar to the results reported in the previous
tests. Again, the performance does not improve when more memory space is allocated. This
is because octant fetching becomes more frequent in order to fill the additional memory
space, which triggers more non-local disk I/O.
6 Conclusions
We have presented an efficient out-of-core algorithm for visualizing very large unstructured vector field data sets on a single workstation with only a moderate amount of main memory. Using
an octree structure, the data sets are partitioned into subsets and stored in disk files. These
subsets are read into the main memory on demand and a memory management policy is
Figure 20: Timing of Constructing Streamlines by Using Non-Local Disk (maximum octant size = 40,000 cells, 3.2 MB); time versus block size level for different memory space sizes.
designed to allocate memory space for storing them. Tests are conducted to explore the
performance of the algorithm and its implementation.
Test results demonstrate that the out-of-core algorithm enables interactive streamline visualization of data sets with several million tetrahedral cells on an average workstation.
For example, for a data set with 1.78 million cells, the computational cost for constructing
100 streamlines concurrently, each of them with as many as 5,000 integration points, is below
25 seconds on a Sun SPARC-20 while only using 6.4 megabytes of its main memory space.
We also show that the same visualization requirements cannot be achieved using virtual
memory.
The use of a high-end workstation like a Sun Ultra SPARC or an SGI Indigo2 would
further increase the interactivity. The test results reveal that the performance of our program
is better when the data division is finer, block size level is higher, and the memory space
used is larger. We also show that the out-of-core program runs much faster when the data
are stored in a local disk.
Future work includes making use of the hardware rendering capability on a graphics
workstation, optimizing the preprocessing step and designing out-of-core algorithms for other
types of visualization operations, such as surface and volume rendering.
Acknowledgment
This research was supported in part by the National Aeronautics and Space Administration
under NASA contract NAS1-19480, and by the National Science Foundation under the
ACERC Center. Thanks to Tom Crockett for very constructive suggestions.
Michael Beynon , Chialin Chang , Umit Catalyurek , Tahsin Kurc , Alan Sussman , Henrique Andrade , Renato Ferreira , Joel Saltz, Processing large-scale multi-dimensional data in parallel and distributed environments, Parallel Computing, v.28 n.5, p.827-859, May 2002 | interactive techniques;streamline visualization;disk management;computational fluid dynamics;memory management;out-of-core algorithms;unstructured meshes |
614396 | A New Line Integral Convolution Algorithm for Visualizing Time-Varying Flow Fields. | New challenges on vector field visualization emerge as time-dependent numerical simulations become ubiquitous in the field of computational fluid dynamics (CFD). To visualize data generated from these simulations, traditional techniques, such as displaying particle traces, can only reveal flow phenomena in preselected local regions and, thus, are unable to track the evolution of global flow features over time. This paper presents a new algorithm, called UFLIC (Unsteady Flow LIC), to visualize vector data in unsteady flow fields. Our algorithm extends a texture synthesis technique, called Line Integral Convolution (LIC), by devising a new convolution algorithm that uses a time-accurate value scattering scheme to model the texture advection. In addition, our algorithm maintains the coherence of the flow animation by successively updating the convolution results over time. Furthermore, we propose a parallel UFLIC algorithm that can achieve high load-balancing for multiprocessor computers with shared memory architecture. We demonstrate the effectiveness of our new algorithm by presenting image snapshots from several CFD case studies. | Introduction
Vector field data arise from computer simulations in a variety of disciplines such as computational
fluid dynamics (CFD), global climate modeling, and electromagnetism. Visualizing these vector
data effectively is a challenging problem due to the difficulties in finding suitable graphical icons
to represent and display vectors on two-dimensional computer displays. At present, new challenges
emerge as time-dependent simulations become ubiquitous. These simulations produce large-scale
solutions of multiple time steps, which carry complex dynamic information about the underlying
simulation model. To visualize these time-varying data, two types of methods are generally used.
One can be referred to as the instantaneous method where visualizations are created based on an
instance of the data field in time. The visualization results, such as streamlines and vector plots,
from a sequence of discrete time steps are then animated together. The instantaneous method often
suffers from the problem of lacking coherence between animation frames. This incoherent animation
will interfere with the understanding of a flow field's unsteady phenomena. In addition, the
streamline shown at any given instance in time does not correspond to the path that a particle will
travel because the flow is changing its direction constantly. To amend these problems, researchers
have developed a different method, called time-dependent method, which can better characterize the
Han-Wei Shen is with MRJ Technology Solutions at NASA Ames Research Center
David Kao is with NASA Ames Research Center
evolution of the flow field by continuously tracking visualization objects such as particles over time.
Examples are numerical streaklines and pathlines [1], [2].
This paper presents a time-dependent method for visualizing vector data in unsteady flow fields.
Using the Line Integral Convolution (LIC) [3] as the underlying approach, we propose a new convolution
algorithm, called UFLIC (Unsteady Flow LIC), to accurately model the unsteady flow
advection. The Line Integral Convolution method, originally proposed by Cabral and Leedom [3],
is a visualization technique that can produce continuous flow textures which resemble the surface oil
patterns produced in wind-tunnel experiments. The synthesized textures can effectively illustrate
global flow directions of a very dense flow field. However, because LIC convolution is computed
following the traces of streamlines in a steady flow field, it cannot readily be used for visualizing
unsteady flow data. A simple extension is to compute the LIC at every time step of the flow field
and then animate the results together. Unfortunately, this approach suffers the same problems as
the instantaneous method that we mentioned above. Forssell and Cohen [4] proposed an extension
by changing the convolution path from streamlines to pathlines for visualizing time-varying
vector fields. While their method produces a better visualization of unsteady flow fields, the resulting
animation can lack coherence between consecutive frames when the underlying flows are fairly
unsteady.
Our algorithm extends the LIC method by devising a new convolution algorithm that simulates
the advection of flow traces globally in unsteady flow fields. As the regular LIC, our algorithm
takes a white noise image as the input texture. This input texture is then advected over time to
create directional patterns of the flow at every time step. The advection is performed by using a
new convolution method, called time-accurate value scattering scheme. In the time-accurate value
scattering scheme, the image value at every pixel is scattered following the flow's pathline trace,
which can be computed using numerical integration methods. At every integration step of the
pathline, the image value from the source pixel is coupled with a timestamp corresponding to a
physical time and then deposited to the pixel on the path. Once every pixel completes its scattering,
the convolution value for every pixel is computed by collecting the deposits that have timestamps
matching the time corresponding to the current animation frame. To track the flow patterns over
time and to maintain the coherence between animation frames, we devise a process, called successive
feed-forward, that drives the convolutions over time. In the process, we repeat the time-accurate
value scattering at every time step. Instead of using the white noise image as the texture input
every time, we take the resulting texture from the previous convolution step, perform high-pass
filtering, and then use it as the texture input to compute the new convolution.
Based on our preliminary work in [5], this paper provides a precise description of the UFLIC
algorithm and its important implementation details. In addition, to improve the algorithm's inter-
activity, we present a parallel implementation of our algorithm for multiprocessor machines with
shared-memory architectures. In our parallel algorithm, we distribute the convolution workload
among available processors by subdividing the texture space into subregions. By carefully choosing
the shapes of subregions, we can achieve high load balancing.
In the following, we first give an overview of the LIC method. Next, we describe and analyze an
existing algorithm for unsteady flows. We then present our UFLIC algorithm in detail, followed by
our parallel algorithm. We conclude this paper by presenting performance results and case studies
by applying our method to several unsteady flow data sets from CFD simulations.
II. Background and Related Work
In this section, we briefly review the LIC method proposed by Cabral and Leedom [3]. We then
describe and analyze the method proposed by Forssell and Cohen [4] for unsteady flow field data.
A. Line Integral Convolution
The Line Integral Convolution method is a texture synthesis technique that can be used to
visualize vector field data. Taking a vector field and a white noise image as the input, the algorithm
uses a low pass filter to perform one-dimensional convolution on the noise image. The convolution
kernel follows the paths of streamlines originating from each pixel in both positive and negative
directions. The resulting intensity values of the LIC pixels along each streamline are strongly
correlated so the directional patterns of the flow field can be easily visualized. To perform the
convolution, different periodic filter kernels can be used. Examples are the Hanning filter [3] and
the box filter [6], [7]. Fig. 1 illustrates the process of LIC convolution.
Recently, several extensions to the original LIC algorithm have been proposed. Forssell and Cohen
[4] adapt the LIC method for curvilinear grid data. Stalling and Hege [6] propose an efficient convolution
method to speed up the LIC computation. Shen, Johnson, and Ma [7] combine dye advection
Fig. 1. The process of LIC convolution
with three-dimensional LIC to visualize global and local flow features at the same time. Okada and
Kao [8] use post-filtering techniques to sharpen the LIC output and highlight flow features such
as flow separations and reattachments. Kiu and Banks [9] propose to use multi-frequency noise
input for LIC to enhance the contrasts among regions with different velocity magnitudes. Recently,
Jobard and Lefer [10] devised a Motion Map data structure that can encode the motion information
of a flow field and produce a visual effect that is very similar to the LIC image.
The texture outputs from the Line Integral Convolution method provide an excellent visual representation
of the flow field. This effectiveness generally comes from two types of coherence. The
first is spatial coherence, which is used to highlight the flow lines of the field in the output image.
The LIC method establishes this coherence by correlating the pixel values along a streamline as
the result of the line integral convolution. The second type of coherence is temporal coherence.
This coherence is required for animating the flow motion. The LIC method achieves this temporal
coherence by shifting the filter phase used in the convolution so that the convolved noise texture
can periodically move along the streamlines in time.
B. Line Integral Convolution for Unsteady Flows
The LIC technique proposed originally is primarily intended for visualizing data in steady flow
fields. To visualize unsteady flow data, an extension was proposed by Forssell and Cohen[4]. In
contrast with convolving streamlines in the steady flow field, the extension convolves forward and
Fig. 2. The convolution values of pixels A and B are uncorrelated because pixels A and B have different convolution paths, represented by P1 and P2, respectively.
backward pathlines originating from each pixel at every time step. A pathline is the path that a
particle travels through the unsteady flow field in time. A mathematical definition of a pathline
is given in the next section. To animate the flow motion, the algorithm shifts the filter's phase at
every time step to generate the animation sequence.
The purpose of convolving along pathlines is to show the traces of particles moving in unsteady
flows thus to reveal the dynamic features of the underlying field. However, there are several problems
associated with the pathline convolution when the flow is rather unsteady. First, the coherence of
the convolution values along a pathline is difficult to establish. We illustrate this problem in
Fig. 2. A pathline P 1 that starts from pixel A at time T 1 and passes through pixel B at time
T 2 is the convolution path for pixel A. Similarly, pathline P 2 starting from B at time T 1 is the
convolution path for pixel B. Since pathlines P 1 and P 2 pass through B at different times (T 2
and T 1 , respectively), they have different traces. As a result, the convolution values of A and B
are uncorrelated because two different sets of pixel values are used. Hence, image value coherence
along neither pathline P 1 or P 2 is established. Forssell and Cohen in [4] reported that flow lines in
the output images become ambiguous when the convolution window is set too wide. The problem
mentioned here can explain this phenomenon.
The other drawback of using pathline convolution comes from the difficulties of establishing
temporal coherence by using the phase-shift method as proposed in the regular LIC method. The
reason lies in the variation, over time, of the pathlines originating from the same seed point when
the flow is unsteady. As a result, in the algorithm the same filter with shifted phases is applied to
different convolution paths. However, the effectiveness of using the phase-shift method to create
Fig. 3. A convolution image generated using the pathline convolution algorithm
artificial motion effects mainly relies on the fact that the convolution path is fixed over time. As a
result, the temporal coherence between consecutive frames to represent the flow motion using this
phase-shift method becomes rather obscure for unsteady flow data.
In addition to the above problems, we have discussed some other issues of the pathline convolution
in [5]. These problems in practice limit the effectiveness of using pathline convolution to visualize
flow fields that are rather unsteady. Fig. 3 shows a sample image produced by the pathline
convolution method. The obscure coherence in the flow texture is clearly noticeable.
In the following section, we propose a new algorithm, UFLIC, for visualizing unsteady flow fields.
Instead of relying on shifting convolution kernel phase and gathering pixel values from a flow path
for every pixel to create animated textures, we devise a new convolution algorithm to simulate the
advection of textures based on the underlying flow fields. The new algorithm consists of a time-accurate
value scattering scheme and a successive feed-forward method. Our algorithm provides a
time-accurate, highly coherent solution to highlight dynamic global features in unsteady flow fields.
III. New Algorithm
Image convolution can be thought of as gathering values from many input samples to an output
sample (value gathering), or as scattering one input sample to many output samples (value scat-
tering). The LIC method proposed originally by Cabral and Leedom [3] uses a value gathering
scheme. In this algorithm, each pixel in the field travels in both positive and negative streamline
directions to gather pixel values to compute the convolution. This convolution can be implemented
in a different way by first letting every pixel scatter its image intensity value along its streamline
path; the convolution result of each pixel in the resulting image is then computed by averaging
the contributions that were previously made by other pixels. We refer to this method as a value
scattering scheme. When the convolution path is a streamline in a steady state vector field, value
gathering and value scattering are equivalent. However, for time-varying vector fields, value gathering
and value scattering can produce different results if a pathline is used as the convolution path.
To illustrate this, again in Fig. 2, the pathline from pixel A to pixel B enables B to receive a
contribution from A if the scattering scheme is used. However, when using the gathering scheme,
B does not gather A's value because the pathline P2 starting from B does not pass through A. To
accurately reflect the physical phenomena in unsteady flow fields, the method of value scattering
convolution is more appropriate than the value gathering method. The reason lies in the nature
of value scattering which corresponds to flow advections where every pixel in the image plane can
be thought of as a particle. The particles move along the flow traces and leave their footprints to
create a convolution image.
In this section, we present a new algorithm, UFLIC, which uses a value scattering scheme to
perform line integral convolution for visualizing unsteady flow fields. Starting from a white noise
image as the input texture, our UFLIC algorithm successively advects the texture to create a
sequence of flow images. To achieve this, we propose a new convolution method called the Time-Accurate
Value Scattering scheme which incorporates time into the convolution. This time-accurate
value scattering convolution scheme, driven by a Successive Feed-Forward process, iteratively takes
the convolution output from the previous step, after high-pass filtering, as the texture input for
the next convolution to produce new flow textures. Our UFLIC algorithm can effectively produce
animations with spatial and temporal coherence for tracking dynamic flow features over time. In
the following, we first present the time-accurate value scattering scheme. We then describe the
successive feed-forward process.
A. Time-Accurate Value Scattering
In a time-varying simulation, two different notions of time are frequently used. One is the Physical
Time and the other is the Computational Time. Physical time is used to describe a continuous
measurable quantity in a physical world, such as seconds or days. Let the physical
between any two consecutive time steps in an unsteady flow be denoted by dt. Then, t
where t i is the physical time at the i th time step. Computational time is a non-physical quantity
in computational space. At the i th time step, let - i be the corresponding computational time, then
i. The computational step size is one. In the following, both types of time are involved and
carefully distinguished.
We propose a time-accurate value scattering scheme to compute the line integral convolution. This
value scattering scheme computes a convolution image for each step of the time-varying flow data
by advecting the input texture over time. The input texture can be either a white noise image or a
convolution result output from the preceding step. We delay the discussion of the choice of an input
until the next section. In the following discussion, we assume that the current computational time
step is \tau and its corresponding physical time is t, and we explain our value-scattering convolution
method.
Given an input texture, every pixel in the field serves as a seed particle. From its pixel position
at the starting physical time t, the seed particle advects forward in the field both in space and time
following a pathline that can be defined as:
p(t + \Delta t) = p(t) + \int_{t}^{t+\Delta t} v(p(t), t) \, dt,
where p(t) is the position of the particle at physical time t, p(t + \Delta t) is the new position after time \Delta t, and v(p(t), t) is the velocity of the particle at p(t) at physical time t. To evaluate the
above expression and generate particle traces, numerical methods such as the Runge-Kutta second-
or fourth-order integration scheme can be used.
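As a concrete illustration, one pathline step with the second-order (midpoint) Runge-Kutta rule can be sketched as follows; the callable sampleVelocity, assumed to interpolate the unsteady field in both space and time, is a placeholder rather than part of the published algorithm.

    #include <functional>

    struct Vec2 { double x, y; };

    // One pathline step p(t) -> p(t + dt) using the midpoint (RK2) rule.
    // sampleVelocity(p, t) is assumed to return the interpolated velocity of the
    // time-varying field at position p and physical time t (e.g., bilinear in
    // space and linear between the two nearest stored time steps).
    Vec2 pathlineStepRK2(const Vec2& p, double t, double dt,
                         const std::function<Vec2(const Vec2&, double)>& sampleVelocity) {
        Vec2 k1 = sampleVelocity(p, t);
        Vec2 mid{ p.x + 0.5 * dt * k1.x, p.y + 0.5 * dt * k1.y };
        Vec2 k2 = sampleVelocity(mid, t + 0.5 * dt);
        return { p.x + dt * k2.x, p.y + dt * k2.y };
    }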
At every integration step, the input image value, I p , of the pixel from which the particle originates
is normalized and scattered to the pixels along the pathline. The normalization is determined by
two factors:
• The length of the current integration step.
• The "age" of the particle.
Assuming that a box kernel function is used, the line integral convolution can be computed by multiplying the image value by the distance between two consecutive integration steps. This explains the first normalization factor. Assume that the particle is at its n-th integration step; then the distance between the particle positions of the current and the preceding integration steps is defined as \omega and can be expressed as:
\omega = \| p_n - p_{n-1} \|,
where p_n denotes the particle position at the n-th integration step.
The second normalization factor simulates the effect of fading, over time, of the seed particle's intensity. To do this, we define a particle's "age" at its n-th integration step as:
A_n = \sum_{i=1}^{n} \Delta t_i,
where \Delta t_i is the i-th time increment in the pathline integration. Based on a particle's age, we define a normalization variable \psi which has a value that decreases as the "age" of the particle increases. Assuming that the expected life span of a particle is T, then:
\psi = (T - A_n) / T.
Combining \omega and \psi, the overall normalization weight W is:
W = \omega \times \psi.
Then, the normalized scattering value at the n-th integration step becomes:
I_normalized = I_p \times W.
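A compact sketch of the two normalization factors and the resulting weight at the n-th integration step is given below; the linear fading of \psi mirrors the formula reconstructed above and should be read as an assumption rather than the paper's exact choice.

    #include <cmath>
    #include <vector>

    struct Vec2 { double x, y; };

    // Scattering weight W = omega * psi at the n-th integration step.
    // positions[k] is the particle position after k integration steps
    // (positions[0] is the seed pixel), dts[k] is the time increment used to
    // advance from step k to step k+1, and T is the particle's life span
    // expressed in physical time.
    double scatterWeight(const std::vector<Vec2>& positions,
                         const std::vector<double>& dts, int n, double T) {
        double dx = positions[n].x - positions[n - 1].x;
        double dy = positions[n].y - positions[n - 1].y;
        double omega = std::sqrt(dx * dx + dy * dy);     // length of the n-th step
        double age = 0.0;
        for (int i = 0; i < n; ++i) age += dts[i];       // A_n, the particle's age
        double psi = (T - age) / T;                      // fading factor (assumed linear)
        if (psi < 0.0) psi = 0.0;                        // particle past its life span
        return omega * psi;
    }
    // The value deposited along the pathline is then I_normalized = I_p * W.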
In our data scattering scheme, the normalized pixel value at every integration step is associated
with a timestamp. Given that the pathline starts from its seed pixel at physical time t, the
corresponding physical time at the n-th integration step is then:
t_n = t + \sum_{i=1}^{n} \Delta t_i.
We compute the convolution frame only at every integer computational time step corresponding to
the original data. Therefore, we use a rounded-up computational time corresponding to its physical
time as the timestamp associated with the normalized image value. This rounded-up computational time, denoted \tau', can be computed by:
\tau' = \lceil \tau + (t_n - t) / dt \rceil.
To receive the scattering image values, each pixel keeps a buffer, called the Convolution Buffer (C-
Buffer). Within the C-Buffer, there are several buckets corresponding to different computational
times. Each bucket has a field of accumulated image values, I accum , and a field of accumulated
weights, W accum . The scattering at the n th integration step is done by adding the normalized image
value I normalized and its weight W to the bucket of the pixel, at the n th integration step, that
corresponds to the computational time \tau':
I_accum = I_accum + I_normalized,
W_accum = W_accum + W.
For each seed particle, the distance that it can travel is defined as the convolution length. We
determine this convolution length indirectly by specifying the particle's life span. We defined this
life span in computational time, which can be converted to a physical time and used for every
particle in the convolution. The advantage of using this global life span to control the convolution
length is that the lengths of different pathlines are automatically scaled to be proportional to the
particle's velocity magnitude, which is a desirable effect as described in [3], [4]. In addition, this
life span gives the number of time steps for which data must be loaded into main memory so that
a particle may complete its advection.
Based on the life span specified for the seed particle advection, the number of buckets in a C-
Buffer structure that is required can actually be pre-determined. Assuming that the life span of a
pathline is N in computational time, i.e., the particle starts at \tau_i and ends at \tau_{i+N}, only N buckets in the buffer are needed because no particle will travel longer than N computational
time steps after it is born. In our implementation, the C-Buffer is a one-dimensional ring buffer
structure. The integer \Phi is an index which points to the bucket that corresponds to the current
computational time, \tau, at which every pixel starts the advection. The value of \Phi can be computed as:
\Phi = \tau mod N.
Hence, assuming that the particle is at a computational time \tau' at its current integration step, the corresponding bucket in the ring buffer of the destination pixel that it should deposit to has the index:
\tau' mod N.
Fig. 4 depicts the structure of the ring buffer.
Fig. 4. In this example, the life span of a seed particle (N) is 5 computational time steps, and the current computational time (\tau) at which a pixel starts the advection is 6; then the value of \Phi pointing to the bucket in the Convolution Buffer (C-Buffer) is 6 mod 5 = 1.
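A minimal sketch of the per-pixel C-Buffer as a ring of N buckets, following the deposit and indexing rules above; the class layout and method names are illustrative, not taken from the paper's implementation.

    #include <vector>

    // Per-pixel Convolution Buffer (C-Buffer) sketch: a ring of N buckets, one
    // per computational time step within a particle's life span.
    struct Bucket { double iAccum = 0.0; double wAccum = 0.0; };

    class CBuffer {
    public:
        explicit CBuffer(int lifeSpanN) : buckets_(lifeSpanN), N_(lifeSpanN) {}

        // Deposit a normalized image value with timestamp tauPrime
        // (a computational time) into the bucket with index tauPrime mod N.
        void deposit(int tauPrime, double iNormalized, double w) {
            Bucket& b = buckets_[tauPrime % N_];
            b.iAccum += iNormalized;
            b.wAccum += w;
        }

        // Convolution value for the current computational time tau, i.e. the
        // bucket with index Phi = tau mod N; the bucket is cleared afterwards
        // so that it can be re-used N computational time steps later.
        double convolveAndRelease(int tau) {
            Bucket& b = buckets_[tau % N_];
            double c = (b.wAccum > 0.0) ? b.iAccum / b.wAccum : 0.0;
            b = Bucket{};
            return c;
        }

    private:
        std::vector<Bucket> buckets_;
        int N_;
    };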
In the time-accurate value scattering scheme, every pixel advects in the field and scatters its image
value based on the scattering method just described. After every pixel completes the scattering
process, we can start computing the convolution to get the resulting texture. We proceed by
including in the convolution only those scattered pixel values that have a timestamp equal to the
current computational time \tau. Therefore, we go to each pixel's C-Buffer and obtain the accumulated pixel value I_accum and the accumulated weight W_accum from the corresponding bucket, which is the bucket that has the index \Phi. The final convolution value C is then computed as:
C = I_accum / W_accum.
The accumulated image values in other buckets with future timestamps will be used when the time
comes. We increment the value \Phi by one so the current bucket in the C-Buffer can be re-used.
In addition, the current computational time \tau is also incremented by one, and the value scattering
convolution will proceed to the next time step of data.
It is worth mentioning that, in our method, each pixel scatters its value along only the forward
pathline direction but not the backward direction. The reason lies in nature: backward scattering
does not correspond to an observable physical phenomenon; flows do not advect backwards. In
addition, the symmetry issue mentioned in [3] does not appear as a problem in our unsteady flow
animations.
B. Successive Feed-Forward
Our value scattering scheme provides a time-accurate model for simulating the flow advection
to create spatial coherence in the output texture. In this section, we describe the process that
successively transports the convolution results over time to maintain the temporal coherence for
visualizing unsteady flows.
As mentioned previously, we define a time-dependent method as one that progressively tracks
the visualization results over time. In this section, we present a time-dependent process, called
Successive Feed-Forward, which drives our time-accurate value scattering scheme to create temporal
coherence. Our algorithm works as follows. Initially, the input to our value scattering convolution
algorithm is a regular white noise texture. Our value scattering scheme advects and convolves the
noise texture to obtain the convolution result at the first time step. For the subsequent convolutions,
instead of using the noise texture again, we use the output from the previous convolution as the
input texture. This input texture, showing patterns that have been formed by previous steps of the
flow field, is then further advected. As a result, the output frames in consecutive time steps are
highly coherent because the flow texture is continuously convolved and advected throughout space
and time.
There is an important issue with the successive feed-forward method that must be addressed. The
line integral convolution method in general, or our value scattering scheme in particular, is really a
low-pass filtering process. Consequently, contrasts among flow lines will gradually be reduced over
time as the low-pass filtering process is repeatedly applied to the input texture. This would cause
problems if one tried to visualize a long sequence of unsteady flow data. To correct this problem,
we first apply a high-pass filter (HPF) to the input texture, which is the result from the previous
convolution, before it is used by the value scattering scheme at the next step. This high-pass filter
helps to enhance the flow lines and maintain the contrast in the input texture. The high-pass filter
used in our method is a Laplacian operator. A two-dimensional Laplacian can be written as a 3 \times 3 mask (shown here in its standard 4-neighbor form):
|  0   1   0 |
|  1  -4   1 |
|  0   1   0 |
The result computed from the mask does not have exclusively positive values. In order to display the
result, a common technique used in digital image processing applications is to subtract the Laplacian
Fig. 5. A convolution image generated from an input texture without noise-jittered high-pass filtering
from the original image. The filter mask of overall operations of Laplacian and subtraction can be
derived as:
|  0  -1   0 |
| -1   5  -1 |
|  0  -1   0 |
To prevent the high-pass filter from introducing unnecessary high frequencies which might cause
aliasing in our final image, we jitter the resulting output from the high-pass filter with the original
input noise texture. This is done by masking the least significant seven bits of the output with the
original input noise.
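A sketch of the sharpening-plus-jitter step on an 8-bit texture might look like the following. The 3 x 3 sharpening mask, the border clamping, and the interpretation of the bit masking (high bit from the filtered result, low seven bits from the original noise) are assumptions made for this sketch.

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Noise-jittered high-pass filter sketch for an 8-bit texture.
    std::vector<uint8_t> noiseJitteredHPF(const std::vector<uint8_t>& img,
                                          const std::vector<uint8_t>& noise,
                                          int W, int H) {
        static const int mask[3][3] = { {  0, -1,  0 },     // original minus Laplacian
                                        { -1,  5, -1 },     // (4-neighbor sharpening mask)
                                        {  0, -1,  0 } };
        std::vector<uint8_t> out(img.size());
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x) {
                int sum = 0;
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dx = -1; dx <= 1; ++dx) {
                        int sx = std::clamp(x + dx, 0, W - 1);   // clamp at the borders
                        int sy = std::clamp(y + dy, 0, H - 1);
                        sum += mask[dy + 1][dx + 1] * img[sy * W + sx];
                    }
                uint8_t sharpened = (uint8_t)std::clamp(sum, 0, 255);
                // Jitter: keep the most significant bit of the filtered value and
                // take the least significant seven bits from the input noise.
                out[y * W + x] = (uint8_t)((sharpened & 0x80) | (noise[y * W + x] & 0x7F));
            }
        return out;
    }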
With the high-pass filtering and the noise jittering, we can produce convolution images with
restored contrast and clearer flow traces. Fig. 5 shows a snapshot from an animation sequence
without using the noise-jittered high-pass filter. Fig. 6 is the result with the noise jittered high-pass
filter. The difference is quite dramatic. Note that Fig. 6 can be used to compare with Fig. 3.
Both images are produced from the same time step of data. Fig. 7 gives an overview of our entire
algorithm. Note that the noise-jittered high-pass filtering process applies only to the input texture
for the next iteration of convolution and not to the animation images.
IV. Parallel Implementation
In this section, we present a simple parallel implementation of the UFLIC algorithm for multi-processor
machines with shared-memory architectures. In the following, we first discuss how we
subdivide the convolution workload among processors. We then describe the synchronization steps
among the processors. It is noteworthy that a parallel algorithm for a regular LIC method on
massively parallel distributed memory computers was proposed by Zöckler, Stalling, and Hege [11].
Both their algorithm for steady LIC and our method for unsteady LIC subdivide the texture image
Fig. 6. A convolution image generated from an input texture with noise-jittered high-pass filtering
Fig. 7. Algorithm flowchart (NHPF: noise-jittered high-pass filter).
space to distribute the workload.
A. Workload Subdivision
The nature of the successive feed-forward process requires that the UFLIC algorithm must be
executed sequentially over time because the convolution in one time step has to be completed
before the next convolution can start. In our implementation, we parallelize the time-accurate
value scattering scheme which is executed at every computational time step. To distribute the
workload, we subdivide the image space into subregions and distribute the convolution task in
those subregions among available processors. In choosing the shapes of the subregions, there are
two considerations:
ffl Subregions that are assigned to each processor should be well distributed in the image space.
ffl For each processor, the locality of the data access should be maintained when computing the
convolution.
During the convolution, the workload incurred from the value scattering of each pixel is generally
determined by the length of the pathline. The farther the pathline extends, the more work is needed
Fig. 8. UFLIC is parallelized by subdividing the texture space into tiles and randomly assigning these tiles to
available processing elements (PEs).
because more pixels are encountered along the way and more operations of value accumulation are
involved. This occurs when the seed particle travels through regions that have higher velocity. In
a flow field, the variation of velocities in different regions of a flow field is quite dramatic, and
the distribution of velocities is usually quite uneven. In order to give each processor a balanced
workload, a processor should be assigned subregions that are evenly distributed over the entire field.
The other issue that has to be considered when subdividing the work space is the maintenance
of the locality of data access to avoid the penalty caused by local cache miss or memory page fault
when accessing the flow data. This can happen when two consecutive seed particles that a processor
schedules to advect are located a long distance from each other. In this case, the vector data that
was brought into the cache or main memory for one particle advection has to be flushed out for a
new page of data when the advection of the second particle starts.
Based on the above two considerations, our parallel algorithm divides the image space into rectangular
tiles as in Fig. 8. Given P processors, we first specify the number of tiles, M , that each
processor will receive. We then divide the entire texture space into M \Theta P tiles. We randomly
assign the tiles to each processor by associating each tile with a random number; then we sort the
tiles by these random numbers. After the sorting, each processor takes its turn to grab M tiles
from the sorted tile list.
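A short sketch of the randomized tile assignment; tile indices stand in for the actual rectangles in texture space, and the shuffle replaces the sort-by-random-key step, which has the same effect.

    #include <algorithm>
    #include <numeric>
    #include <random>
    #include <vector>

    // Deal M random tiles to each of P processors (sketch). A tile is identified
    // here only by its index in the M*P-long tile list; a full implementation
    // would also store the tile's rectangle in texture space.
    std::vector<std::vector<int>> assignTiles(int M, int P, unsigned seed = 1234) {
        std::vector<int> tiles(M * P);
        std::iota(tiles.begin(), tiles.end(), 0);
        std::mt19937 gen(seed);
        std::shuffle(tiles.begin(), tiles.end(), gen);   // random order of all tiles
        std::vector<std::vector<int>> perProc(P);
        for (int p = 0; p < P; ++p)                      // each processor grabs M tiles
            perProc[p].assign(tiles.begin() + p * M, tiles.begin() + (p + 1) * M);
        return perProc;
    }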
B. Process Synchronization
Once the work distribution is completed, each processor starts computing the convolution values
for the pixels in its tiles. The parallel convolution process can be divided into three phases which
are the synchronization points among the processors. These three phases are:
• Scattering pixel values along pathlines.
• Computing the convolution results from the C-Buffers.
• Performing noise-jittered high-pass filtering.
In the first phase, the value scattering involves writing to the C-Buffers that belong to the pixels
along a pathline. Since all the processors are performing the scattering simultaneously, it is possible
that more than one processor needs to write to the same bucket of a C-Buffer at the same time.
In our implementation, instead of locking the C-buffer when a processor is writing to it, which
will inevitably incur performance overhead, we allocate a separate set of buckets for each processor.
Recall that in the sequential algorithm, the number of buckets in a C-buffer is equal to the pathline's
life span N . Given P available processors, we allocate N buckets for each processor, so the total
number of buckets in a C-Buffer is P \times N. In this way, although more memory is needed, there is
no locking mechanism required for the C-Buffer. For most of the currently available multiprocessor,
shared-memory machines, we have found that this overhead is not overwhelming.
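Under this layout, a deposit made by processor p indexes only its own slice of the enlarged buffer, so no lock is required; the index arithmetic can be sketched as follows (names are illustrative).

    // Lock-free bucket addressing (sketch): each pixel's C-Buffer holds P * N
    // buckets, laid out as P contiguous groups of N; processor p writes only
    // into its own group.
    inline int bucketIndex(int procId, int tauPrime, int N) {
        return procId * N + (tauPrime % N);
    }
    // At convolution time, the contributions for computational time tau are
    // gathered across all P groups:
    //   for (int p = 0; p < P; ++p) { iSum += buckets[p * N + tau % N].iAccum;
    //                                 wSum += buckets[p * N + tau % N].wAccum; }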
In the second phase, the parallelization is fairly straightforward where each processor independently
traverses through the C-Buffers of the pixels in its own tiles to compute the convolution
results. Note that now within each C-Buffer, there are P buckets corresponding to the same computational
time step. The convolution is computed by accumulating and normalizing the scattered
image values from those buckets.
In the third phase, each processor simply applies the noise-jittered, high-pass filtering process
independently to the convolution results of pixels belonging to its own tiles.
V. Results and Discussion
In this section, empirical results for evaluating the UFLIC algorithm are provided. We first
present a performance analysis of our sequential and parallel implementations. We then show case
studies of several CFD applications.
A. Performance Analysis
Data from two unsteady CFD simulations were used in our experiments. As shown in Table I, we
applied the UFLIC algorithm to a two-dimensional curvilinear surface of a delta wing model with
a 287 \times 529 texture resolution and to a two-dimensional curvilinear surface of an airfoil model with a 196 \times 389 texture resolution. The machine used to generate the results is an SGI Onyx2 with
Table I: Texture resolutions on surfaces of two unsteady flow data sets
Data Set    Texture Resolution
Wing        287 \times 529
Airfoil     196 \times 389
Table II: UFLIC computation time (in seconds) with one processor
Data Set    Scattering    Convolution    Filtering    Total
Wing        71.13         0.23           0.19         71.55
Airfoil     123.85        0.11           0.09         124.05
195 MHz R10000 processors. Table II shows the breakdown of the average computation time using
one processor for generating a texture image in an animation series. We used three computational
time steps as the seed particle's life span. In the table, we show the time spent at each of the
UFLIC's three phases: scattering pixel values, computing convolutions, and high-pass filtering. We
found that the process of value scattering required more than 99% of the total execution time;
the majority of this time was spent computing the pathlines of the seed particles. It is noteworthy that the
computation time for data scattering is not necessarily proportional to the texture resolution. The
reason is that seed particles in different flow fields, given the same life span, may travel different
distances due to different flow velocities. This will result in a variation of the workload in the data
scattering process, as we mentioned previously in the section on the parallel algorithm. In Table II,
we can see that the computational time of value scattering for the airfoil data set is longer than
that for the delta wing data set, even though the texture resolution of the airfoil is smaller. This
can be explained by Table III which compares the average pathline lengths of seed particles for the
airfoil and the delta wing data sets.
Table III: Average pathline length (in pixels) for the wing and airfoil data sets.
Table IV: Performance of parallel UFLIC: delta wing
CPUs    Time    Speedup    Efficiency
6       13.17   5.43       0.90
Table V: Performance of parallel UFLIC: airfoil
CPUs    Time    Speedup    Efficiency
6       22.79   5.44       0.90
7       20.22   6.13       0.88
In our parallel implementation, given P processors, we assign each processor P \times 16 tiles by dividing the two-dimensional texture space into 4P \times 4P tiles. Table IV and Table V show the
total execution times (in seconds), speedup factors, and parallel efficiencies of our parallel algorithm
with up to seven processors. The parallel efficiency is defined by dividing the speedup factor by the
number of processors. From the results, we observe that we achieve about 90% parallel efficiency,
which indicates a very good load balance. Fig. 9 and Fig. 10 show the parallel speedup graphs.
B. Case Studies
In this section, we show the results of applying our new algorithm to several CFD applications.
Animation is the ideal method to show our results; for the paper, we will show several snapshots
from the animation.
There have been many CFD simulations performed to study flow phenomena about oscillating
wings. Generally, a simulation is done using a two dimensional airfoil. The first example is based
on a simulation of unsteady two-dimensional turbulent flow over an oscillating airfoil, which pitches
Fig. 9. Parallel speedup of the UFLIC algorithm: delta wing (speedup versus number of processors).
Fig. 10. Parallel speedup of the UFLIC algorithm: airfoil (speedup versus number of processors).
down and then up eleven degrees. The oscillatory motion of the airfoil creates vortex shedding and vortex formation. Figs. 11(a) and 11(b) show the formation of the primary vortex rotating clockwise above the airfoil. The flow texture shown is colored by velocity magnitude: blue indicates low velocity and magenta indicates high velocity. Initially, the velocity is high near the leading edge of the airfoil. As the airfoil pitches down and then up, the velocity decreases at the leading edge and increases near the trailing edge, and a counter-clockwise secondary vortex forms beyond the trailing edge of the airfoil; see Figs. 11(c) and 11(d). Furthermore, the primary vortex gains strength and separates from the airfoil.
We have compared the unsteady flows shown in Fig. 11 with the results of steady LIC. For the
comparison, we used steady LIC to compute surface flows at each time step independently; then
we animated the steady LIC over time. The difference between this and the animation of UFLIC is
very apparent. With our new algorithm, the animation reveals the dynamic behavior of the vortices
during vortex formation and vortex shedding much more realistically than steady LIC.
Fig. 12 depicts a snapshot from a time sequence of flow patterns about several spiraling vortices
from an unsteady flow simulation of four vortices. Some of the vortices orbit about other vortices in
the flow. With steady LIC, by comparison, the spiraling motion of the vortices is not shown; instead,
steady LIC reveals only the translation of the vortices.
The next example is an unsteady flow simulation of the F/A-18 fighter aircraft. During air
combat, the twin-tailed F/A-18 jet can fly at a high angle of attack. The simulation involves the
study of tail buffet, which occurs when the vertical tails are immersed in unsteady flow and vortices
burst frequently along the leading edge of the tails. This phenomenon can cause heavy
loading on one of the vertical tails of the jet, which is a major safety concern. Figs. 13(a) and (b)
show the first wave of vortex bursting along the leading edge of one of the vertical tails. The figure
shows the outboard side of the tail, i.e., the side of the tail that faces away from the other vertical
tail. Note the movement of the vortex, which occurs near the blue shaded region, from the leading
edge of the tail to the upper tip of the tail. Another wave of vortex bursting is shown in Fig. 13(c)
and 13(d). In the animation, the vortex bursting phenomenon is revealed dramatically.
The last example is taken from an unsteady flow simulation of a 65-degree sweep delta wing at
degrees angle of attack. In this simulation, several interesting flow separations and
reattachments occur along the leading edge of the delta wing at a zero-degree static roll angle.
Furthermore, vortex breakdown is present in the unsteady flow. Fig. 14 shows a snapshot of the
surface flow at a given time step. The velocity magnitude color contours give some indication of
the change in velocity over time. Along the leading edge of the wing, the flow velocity is relatively
high compared to the velocity near the wing body.
Based on the examples presented in this section, some observations can be made regarding the
surface flows generated by our new algorithm. First, because the flow is unsteady, blurriness is likely
to occur in regions where the flow is changing rapidly. Unlike the surface flow patterns generated
from regular LIC, the flow lines generated using our unsteady LIC may have different line widths.
This could be attributed to changes in the velocity magnitude and the flow direction.
VI. Conclusions and Future Work
We have presented UFLIC, an Unsteady Flow Line Integral Convolution algorithm, for visualizing
vector data in unsteady flow fields. Using the time-accurate value scattering scheme and
the successive feed-forward process, our new convolution algorithm can accurately model the flow
advection and create highly coherent flow animations. The results from several case studies using
our algorithm have shown that the new technique is very effective in capturing dynamic features in
unsteady flow fields.
Future work includes applying our method to three-dimensional unsteady flow data sets. In
addition, we would like to compare our unsteady LIC method with the Spot Noise technique introduced
by van Wijk [12]. Spot Noise is an effective method for creating flow texture patterns. The final
texture image quality is based on the distribution and the shape of spots. de Leeuw and van Wijk
[13] enhanced the spot noise technique by bending spots based on local stream surfaces. The objective
is to produce more accurate flow texture patterns near flow regions with high curvature.
To extend the spot noise technique for unsteady flows, there are a few key issues to be resolved.
For instance, as spots are advected over time the distribution of the spots can change rapidly. A
challenge is to maintain the coherence of the spots over time. Another consideration is that spot
bending assumes the flow is steady over the local stream surfaces; however, for unsteady flows this
assumption may not be true. We plan to look into these issues.
Acknowledgments
This work was supported in part by NASA contract NAS2-14303. We would like to thank Neal
Chaderjian, Ken Gee, Shigeru Obayashi, and Ravi Samtaney for providing their data sets. Special
thanks to Randy Kaemmerer for his meticulous proof-reading of this manuscript, and to Michael
Cox and David Ellsworth for interesting discussions and valuable suggestions on the parallel
implementation. We also thank Tim Sandstrom, Gail Felchle, Chris Henze, and other members of the
Data Analysis Group at NASA Ames Research Center for their helpful comments, suggestions, and
technical support.
--R
Visualization of time-dependent flow fields
Visualizing time-varying phenomena in numerical simulations of unsteady flows
Imaging vector fields using line integral convolution.
Using line integral convolution for flow visualization: Curvilinear grids
UFLIC: A line integral convolution algorithm for visualizing unsteady flows.
Fast and resolution independent line integral convolution.
Visualizing vector fields using line integral convolution and dye advection.
Enhanced line integral convolution with flow feature detection.
The motion map: Efficient computation of steady flow animations.
Parallel line integral convolution.
Spot noise: Texture synthesis for data visualization.
Enhanced spot noise for vector field visualization.
--TR
--CTR
Nan Li , Zhiong Huang, A feature-based pencil drawing method, Proceedings of the 1st international conference on Computer graphics and interactive techniques in Australasia and South East Asia, February 11-14, 2003, Melbourne, Australia
Walter H. Jimenez , Wagner T. Correa , Claudio T. Silva , Baptista Baptista, Visualizing Spatial and Temporal Variability in Coastal Observatories, Proceedings of the 14th IEEE Visualization 2003 (VIS'03), p.75, October 22-24,
Jarke J. van Wijk, Image based flow visualization, ACM Transactions on Graphics (TOG), v.21 n.3, July 2002
Bruno Jobard , Gordon Erlebacher , M. Yousuff Hussaini, Lagrangian-Eulerian advection for unsteady flow visualization, Proceedings of the conference on Visualization '01, October 21-26, 2001, San Diego, California
Pak Chung Wong , Harlan Foote , David L. Kao , Ruby Leung , Jim Thomas, Multivariate visualization with data fusion, Information Visualization, v.1 n.3/4, p.182-193, December 2002
Daniel Weiskopf , Gordon Erlebacher , Thomas Ertl, A Texture-Based Framework for Spacetime-Coherent Visualization of Time-Dependent Vector Fields, Proceedings of the 14th IEEE Visualization 2003 (VIS'03), p.15, October 22-24,
Bruno Jobard , Gordon Erlebacher , M. Yousuff Hussaini, Hardware-accelerated texture advection for unsteady flow visualization, Proceedings of the conference on Visualization '00, p.155-162, October 2000, Salt Lake City, Utah, United States
Bruno Jobard , Gordon Erlebacher , M. Yousuff Hussaini, Lagrangian-Eulerian Advection of Noise and Dye Textures for Unsteady Flow Visualization, IEEE Transactions on Visualization and Computer Graphics, v.8 n.3, p.211-222, July 2002
Anders Helgeland , Oyvind Andreassen, Visualization of Vector Fields Using Seed LIC and Volume Rendering, IEEE Transactions on Visualization and Computer Graphics, v.10 n.6, p.673-682, November 2004
Vivek Verma , David Kao , Alex Pang, PLIC: bridging the gap between streamlines and LIC, Proceedings of the conference on Visualization '99: celebrating ten years, p.341-348, October 1999, San Francisco, California, United States
ZhanPing Liu , Robert James Moorhead, II, AUFLIC: an accelerated algorithm for Unsteady Flow Line Integral Convolution, Proceedings of the symposium on Data Visualisation 2002, May 27-29, 2002, Barcelona, Spain
Guo-Shi Li , Udeepta D. Bordoloi , Han-Wei Shen, Chameleon: An Interactive Texture-based Rendering Framework for Visualizing Three-dimensional Vector Fields, Proceedings of the 14th IEEE Visualization 2003 (VIS'03), p.32, October 22-24,
Tobias Schafhitzel , Eduardo Tejada , Daniel Weiskopf , Thomas Ertl, Point-based stream surfaces and path surfaces, Proceedings of Graphics Interface 2007, May 28-30, 2007, Montreal, Canada
Jarke J. van Wijk, Image Based Flow Visualization for Curved Surfaces, Proceedings of the 14th IEEE Visualization 2003 (VIS'03), p.17, October 22-24,
Robert S. Laramee , Jarke J. van Wijk , Bruno Jobard , Helwig Hauser, ISA and IBFVS: Image Space-Based Visualization of Flow on Surfaces, IEEE Transactions on Visualization and Computer Graphics, v.10 n.6, p.637-648, November 2004
Hongfeng Yu , Kwan-Liu Ma , Joel Welling, A Parallel Visualization Pipeline for Terascale Earthquake Simulations, Proceedings of the 2004 ACM/IEEE conference on Supercomputing, p.49, November 06-12, 2004
Zhanping Liu , Robert J. Moorhead, Accelerated Unsteady Flow Line Integral Convolution, IEEE Transactions on Visualization and Computer Graphics, v.11 n.2, p.113-125, March 2005 | flow visualization;flow animation;unsteady flows;line integral convolution;image convolution;parallel algorithm;texture synthesis;vector field visualization |
614405 | Dynamic Catmull-Clark Subdivision Surfaces. | AbstractRecursive subdivision schemes have been extensively used in computer graphics, computer-aided geometric design, and scientific visualization for modeling smooth surfaces of arbitrary topology. Recursive subdivision generates a visually pleasing smooth surface in the limit from an initial user-specified polygonal mesh through the repeated application of a fixed set of subdivision rules. In this paper, we present a new dynamic surface model based on the Catmull-Clark subdivision scheme, a popular technique for modeling complicated objects of arbitrary genus. Our new dynamic surface model inherits the attractive properties of the Catmull-Clark subdivision scheme, as well as those of the physics-based models. This new model provides a direct and intuitive means of manipulating geometric shapes, and an efficient hierarchical approach for recovering complex shapes from large range and volume data sets using very few degrees of freedom (control vertices). We provide an analytic formulation and introduce the "physical" quantities required to develop the dynamic subdivision surface model which can be interactively deformed by applying synthesized forces. The governing dynamic differential equation is derived using Lagrangian mechanics and the finite element method. Our experiments demonstrate that this new dynamic model has a promising future in computer graphics, geometric shape design, and scientific visualization. | INTRODUCTION
Generating smooth surfaces of arbitrary topology is a grand challenge in geometric modeling, computer
graphics and visualization. The recursive subdivision scheme first introduced by Chaikin [1] is very well
suited for this purpose. During the past two decades, a wide variety of subdivision schemes for modeling
smooth surfaces of arbitrary topology have been derived in geometric modeling after Chaikin's pioneering
work on the curve generation. A recursive subdivision algorithm typically generates a smooth surface
which is the limit of a sequence of recursively refined polyhedral surfaces based on a user-defined initial
control mesh. At each step of the subdivision, a finer polyhedral surface with more vertices and faces will
be constructed from the previous one via a refinement process (also called "chopping corners"). In general,
subdivision schemes can be categorized into two distinct classes, namely (1) approximating subdivision
methods and (2) interpolating subdivision methods.
A. Background
Among the approximating schemes, the techniques of Doo and Sabin [2], [3], [4] and Catmull and Clark
[5] generalize the idea of obtaining biquadratic and bicubic B-spline patches from rectangular control
meshes. In [5], Catmull and Clark developed a method for recursively generating a smooth surface from
a polyhedral mesh of arbitrary topology. The Catmull-Clark subdivision surface, defined by an arbitrary
non-rectangular mesh, can be reduced to a set of standard B-spline patches except at a finite number
of extraordinary points, where the in-degree of the vertex in the mesh is not equal to four. Doo and
Sabin [3] further analyzed the smoothness behavior of the limit surface near extraordinary points using
Fourier transforms and an eigenvalue analysis of the subdivision matrix. Ball and Storry [6], [7] and Reif
[8] further extended the prior work on continuity properties of subdivision surfaces by deriving various
necessary and sufficient conditions on smoothness for different subdivision schemes. In [9], Loop presented
a similar subdivision scheme based on the generalization of quartic triangular B-splines for triangular
meshes. Halstead, Kass and Derose [10] proposed an algorithm to construct a Catmull-Clark subdivision
surface that interpolates the vertices of a mesh of arbitrary topology. In [11], Taubin developed a signal
processing-based approach to fair polyhedral surfaces of arbitrary topology.
The most well-known interpolation-based subdivision scheme is the "butterfly" algorithm proposed by
Dyn, Gregory and Levin [12]. The butterfly subdivision method makes use of a small number of neighboring
vertices for subdivision. It requires simple data structures and is extremely easy to implement. However,
it needs a topologically regular setting for the initial polygonal meshes in order to obtain a smooth limit
surface. A variant of this scheme was proposed by Dyn, Hed and Levin [13]. Recently, Zorin, Schroder
and Sweldens [14] further developed an improved interpolatory subdivision scheme that can retain the
simplicity of the butterfly scheme and result in much smoother surfaces even from initial polygonal meshes
that are irregular.
B. Motivation
Although recursive subdivision surfaces are extremely powerful for representing smooth geometric shapes of
arbitrary topology, they constitute a purely geometric representation, and furthermore, conventional geometric
modeling with subdivision surfaces may be infeasible for representing highly complicated objects.
For example, modelers are faced with the tedium of indirect shape modification and refinement through
time-consuming operations on a large number of (most often irregular) control vertices when using typical
spline-based modeling schemes. In addition, it may not be enough to obtain the most "fair" surface that
interpolates a set of (ordered or unorganized) data points. A certain number of local features such as
bulges or inflections ("roughness") may be strongly desired while making geometric objects satisfy global
smoothness requirements in geometric modeling and graphics applications. In contrast, physics-based
modeling provides a superior approach to shape modeling that can overcome most of the limitations
associated with traditional geometric modeling approaches. Free-form deformable models governed by
physical laws are of particular interest in this context. These models respond dynamically to applied
forces in a very intuitive manner. The equilibrium state of the model is characterized by a minimum of
the potential energy of the model subject to imposed constraints. The potential energy functionals can
be formulated to satisfy local and global modeling criteria and impose geometric constraints relevant to
shape design.
Free-form deformable models were first introduced to computer graphics and visualization in Terzopoulos
et al. [15] and further developed by Terzopoulos and Fleischer [16], Pentland and Williams [17],
Metaxas and Terzopoulos [18] and Vemuri and Radisavljevic [19]. Celniker and Gossard [20] developed
a system for interactive free-form design based on the finite element optimization of energy functionals
proposed in [16]. Bloor and Wilson [21], [22], Celniker and Welch [23] and Welch and Witkin [24]
proposed deformable B-spline curves and surfaces which can be designed by imposing the shape criteria
via the minimization of the energy functionals subject to hard or soft geometric constraints through
Lagrange multipliers or penalty methods. Recently, Qin and Terzopoulos [25], [26], [27] have developed
dynamic NURBS (D-NURBS) which are very sophisticated models suitable for representing a wide variety
of free-form as well as standard analytic shapes. The D-NURBS have the advantage of interactive and
direct manipulation of NURBS curves and surfaces, resulting in physically meaningful hence intuitively
predictable motion and shape variation.
A severe limitation of the existing deformable models, including D-NURBS, is that they are defined
on a parametric domain. Hence, it is almost impossible to model surfaces of arbitrary genus using these
models. In this paper, we develop a dynamic generalization of recursive subdivision schemes based on
Catmull-Clark subdivision surfaces. Our new dynamic model combines the benefits of subdivision surfaces
for modeling arbitrary topology as well as the dynamic splines for direct and interactive manipulation of
shapes by applying simulated forces. Note that, the derivation of our dynamic subdivision surface poses a
significant technical challenge because of the fact that no closed-form parameterization of the limit surface
exists near the extraordinary points. We present the details of our formulation in a later section.
The dynamic Catmull-Clark subdivision surface has been developed primarily for modeling arbitrary
topology. However, another important application of the developed model is in shape recovery. In a
typical shape reconstruction application, we need to recover shapes of arbitrary topology from large data
sets. Physics-based models are often used for this purpose. However, the model used for fitting should
be able to recover the shape accurately. At the same time the number of degrees of freedom for model
representation should be kept low. Another important criterion is that the model initialization should
not be restricted to parameterized input meshes since it is infeasible to parameterize shapes of arbitrary
topology. A physics-based model satisfying the aforementioned criteria is a good candidate for a solution
to the shape recovery problem.
Physics-based deformable models used to solve the shape recovery problem involve either fixed-size [19], [28],
[29], [30], [31] or adaptive-size [32], [33], [34], [35], [36], [37] grids. The models with a fixed grid size generally
use a smaller number of degrees of freedom for representation, but the accuracy of the recovered shape is lacking
in many cases. On the other hand, in models with adaptive grid size, the number of degrees of freedom used for
shape representation is generally very high, and computationally expensive ad hoc schemes are used.
The recovered shape is, however, satisfactory in terms of accuracy. The
hierarchical shape representation using locally adaptive finite elements discussed in [34] can efficiently
represent the shape of an object of genus zero with a small number of nodal points. However, this scheme
cannot be easily extended to cope with arbitrary shapes. The balloon model for describing the shape
of complex objects [32] also adapts the mesh surface to local surface shapes and is purely driven by an
applied inflation force towards the object surface from the interior of the object. This scheme involves
a large number of nodal points for representing complex shapes. Moreover, all the existing models using
either a fixed or an adaptive grid size require a parameterized mesh as their input.
The proposed model solves the shape recovery problem very efficiently as it can recover shapes from
large range and volume data sets using very few degrees of freedom (control vertices) for its representation
and can cope with any arbitrary input mesh, not necessarily parameterized, with an arbitrary number
of extraordinary points. The initialized model deforms under the influence of synthesized forces to fit
the data set by minimizing its energy. Once the approximate shape is recovered, the model is further
subdivided automatically and a better approximation to the input data set is achieved using more degrees
of freedom. This process of subdividing after achieving an approximate fit is continued until a prescribed
error criterion for fitting the data points is satisfied.
In a nutshell, the dynamic Catmull-Clark subdivision surface model has been motivated by its capability
to model arbitrary topology where modelers can directly manipulate the smooth limit surface in an
intuitive fashion and by its applicability to the shape recovery problem.
C. Overview
The rest of the paper is organized as follows: Section II presents the detailed formulation of the dynamic
Catmull-Clark subdivision surfaces. The implementation details are provided in Section III. Experimental
results can be found in Section IV. Finally, we make concluding remarks and point out future directions of research
in Section V.
II. FORMULATION
In this section we present a systematic formulation of our new dynamic model based on Catmull-Clark
subdivisions. First, we briefly review the Catmull-Clark subdivision scheme. Then, we demonstrate how
to assign a bicubic patch in the limit surface to a non-boundary face in a rectangular setting. We further
generalize this idea to assign the infinite number of bicubic patches in the limit surface to faces that are in
the vicinity of an extraordinary point/vertex. Next, we formulate a closed form analytical representation
of the limit smooth surface which can be viewed as a function of its (initial) polyhedral control vertices.
Finally, we introduce physical quantities into our dynamic model in order to derive its motion equation.
A. Catmull-Clark subdivision surfaces
Catmull-Clark subdivision scheme, like any other subdivision scheme, starts with a user-defined mesh
of arbitrary topology. It refines the initial mesh by adding new vertices, edges and faces with each step
of subdivision following a fixed set of subdivision rules. In the limit, a sequence of recursively refined
polyhedral meshes will converge to a smooth surface. The subdivision rules are as follows:
- For each face, introduce a new face point which is the average of all the old vertices defining the face.
- For each (non-boundary) edge, introduce a new edge point which is the average of the following four
points: the two old vertices defining the edge and the two new face points of the faces adjacent to the edge.
- For each (non-boundary) vertex, introduce a new vertex point obtained as (F + 2E + (n - 3)V)/n,
where F is the average of the new face points of all faces adjacent to the old vertex point V, E is the
average of the midpoints of all edges incident on the old vertex and n is the number of edges
incident on the vertex.
- Form new edges by connecting each new face point to the new edge points of the edges defining the
old face and by connecting each new vertex point to the new edge points of all old edges incident on
the old vertex point.
- Define new faces as those enclosed by new edges. (A code sketch of these point rules is given below.)
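The following Python sketch is our own illustration of the three point rules above (it is not the authors' code): it computes the new face, edge, and vertex points for a closed quadrilateral mesh stored as a vertex array plus quad index tuples, and it deliberately omits the bookkeeping that stitches these points into the refined mesh.

import numpy as np

def catmull_clark_points(verts, faces):
    V = np.asarray(verts, dtype=float)

    # New face points: centroid of each face's old vertices.
    face_pts = {f: V[list(f)].mean(axis=0) for f in faces}

    # Map each undirected edge to the faces that share it.
    edge_faces = {}
    for f in faces:
        for a, b in zip(f, f[1:] + f[:1]):
            edge_faces.setdefault(frozenset((a, b)), []).append(f)

    # New edge points: average of the edge's two endpoints and the two
    # adjacent face points (valid for non-boundary edges of a closed mesh).
    edge_pts = {}
    for e, adj in edge_faces.items():
        a, b = tuple(e)
        edge_pts[e] = np.mean([V[a], V[b]] + [face_pts[f] for f in adj], axis=0)

    # New vertex points: (F + 2E + (n - 3) V) / n for a vertex of valence n.
    new_verts = []
    for i in range(len(V)):
        adj_f = [f for f in faces if i in f]
        adj_e = [e for e in edge_faces if i in e]
        n = len(adj_e)
        F = np.mean([face_pts[f] for f in adj_f], axis=0)
        E = np.mean([(V[a] + V[b]) / 2.0 for a, b in (tuple(e) for e in adj_e)], axis=0)
        new_verts.append((F + 2.0 * E + (n - 3.0) * V[i]) / n)

    return face_pts, edge_pts, np.array(new_verts)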
The most important property of Catmull-Clark subdivision surfaces is that the smooth surface can
be generated from control meshes of arbitrary topology. Therefore, this subdivision scheme is extremely
valuable for modeling various complicated geometric objects of arbitrary topology. Catmull-Clark subdivision
surfaces include standard bicubic B-spline surfaces as their special case (i.e., the limit surface is a
tensor-product B-spline surface for a rectangular control point mesh). In addition, the aforementioned
subdivision rules generalize the recursive bicubic B-spline patch subdivision algorithm. For non-rectangular
meshes, the limit surface converges to a bicubic B-spline surface except at a finite number of extraordinary
points. Note that, after the first subdivision, all faces are quadrilaterals, hence all new vertices created
subsequently will have four incident edges. The number of extraordinary points on the surfaces remains a
constant which is determined by the refined meshes after one subdivision. The limit surface is curvature-
continuous everywhere except at extraordinary vertices, where only tangent plane continuity is achieved.
In spite of the popularity of Catmull-Clark subdivision surfaces for representing complex geometric shapes
of arbitrary topology, these subdivision surfaces are not parameterizable and lack closed-form analytic
formulations. These deficiencies preclude their immediate pointwise manipulation and hence may restrict
the applicability of these schemes. We develop a new dynamic model based on Catmull-Clark subdivision
surfaces which offers modelers a closed-form analytic formulation and allows users to manipulate the model
directly and intuitively.
To develop the dynamic model which treats the limit smooth surface as a function of its control mesh in
a hierarchical fashion, we need to update control vertex positions continually at any given level. However,
all the vertices introduced through subdivision are obtained as an affine combination of control vertex
positions of the initial mesh. Therefore, we can control the dynamic behavior of the limit surface by
formulating the dynamic model on the initial mesh itself, the only exception being the case when the
initial mesh has non-rectangular faces. This problem can be circumvented by taking the mesh obtained
through one step of subdivision as the initial mesh. To define the limit surface using the vertices of the
initial mesh, the enumeration of the bicubic patches in the limit surface is necessary. In the next two
subsections, we present a scheme of assigning the bicubic patches to various faces of the initial mesh. It
may be noted that one additional subdivision step may be needed in some cases to isolate the extraordinary
points and treat the obtained mesh as the initial mesh (one typical example is when the initial mesh is a
tetrahedron).
B. Assigning patches to regular faces
Fig. 1. A rectangular mesh and its limit surface consisting of 4 bicubic surface patches.
In Fig.1, a rectangular control mesh is shown along with the bicubic B-spline surface (4 patches) in
the limit after an infinite number of subdivision steps. Note that, each of the bicubic patches in the
limit surface is defined by a rectangular face with each vertex of degree four, thereby accounting for the
4 × 4 = 16 control points (from its 8-connected neighborhood) needed to define a bicubic surface patch in the
limit. Therefore, for each rectangular face in the initial mesh with a valence of 4 at each vertex, the
corresponding bicubic surface patch can be assigned to it in a straightforward way. In Fig.1, the surface
patches S_1, S_2, S_3 and S_4 are assigned to faces F_1, F_2, F_3 and F_4, respectively. The 16 control points for
the patch S_1, corresponding to face F_1, are highlighted in Fig.1.
Fig. 2. A mesh with an extraordinary point of valence 3 and its limit surface.
C. Assigning patches to irregular faces
In Fig.2, a mesh containing an extraordinary point of valence 3 and its limit surface are shown. The
faces whose vertices all have valence 4 are assigned to bicubic patches S_0, S_1, . . . , respectively,
following the aforementioned scheme. However, the central smooth surface enclosed by these
patches consists of an infinite number of bicubic patches converging to a point in the limit. We
need to develop a recursive way of enumerating these bicubic patches and assigning them to various faces
at different levels in order to develop the dynamic subdivision surface model.
The idea of enumerating the bicubic patches corresponding to faces having an extraordinary vertex
is shown in Fig.3, where a local subdivision of the mesh consisting of the faces surrounding the extraordinary vertex (but
not the other boundary faces) of Fig.2 is carried out. Topologically, the resulting local subdivision mesh
(shown as dotted mesh) is exactly the same as the mesh in Fig.2 and hence exactly the same number of
Fig. 3. Local subdivision around the extraordinary point and the limit surface.
bicubic patches can be assigned to its faces with vertices of valence 4 as is evident from Fig.3 (the new faces
and the corresponding patches are marked by "p" and "n" respectively). This process of local subdivision
and assignment of bicubic patches around an extraordinary point can be carried out recursively and in
the limit, the enclosed patch corresponding to faces sharing the extraordinary point will converge to a
point. However, there is no need to carry out an infinite number of subdivision steps. This description is
for formulation purposes only and the exact implementation will be detailed in a later section.
D. Kinematics of the limit surface
In this section we develop the mathematics for the kinematics of the limit surface via illustrative
examples and then present the generalized formulas. We start the illustration with a single bicubic B-spline
patch which is obtained as the limiting process of the Catmull-Clark subdivision algorithm applied
to an initial 4 by 4 rectangular control mesh. Let s_p(u, v), where (u, v) ∈ [0, 1] × [0, 1], be this bicubic
B-spline patch, which can be expressed analytically as

s_p(u, v) = \sum_{i=0}^{3} \sum_{j=0}^{3} d_{i,j} B_{i,4}(u) B_{j,4}(v),   (1)

where d_{i,j} represents a 3-dimensional position vector at the (i, j)th control point location, and B_{i,4}(u) and
B_{j,4}(v) are the cubic B-spline basis functions. The subscript p on s denotes the patch under consideration.
Expressing Eqn.1 in a generalized coordinate system, we have

s_p(u, v) = J_p(u, v) q_p,   (2)

where J_p is the standard Jacobian matrix of a bicubic B-spline patch, and is of size (3, 48). Vector q_p is the
concatenation of all control points defining a B-spline patch in 3D. Note that in the concatenation of the
control points, each control point has an (x, y, z) component. For example, the (x, y, z) components of the
control point (i, j) correspond to positions 3k, 3k + 1, 3k + 2, where k = 4i + j, respectively, in the vector
q_p. We can express the entries of J_p explicitly in the following way: J_p(r, 3k + r) = B_{i,4}(u) B_{j,4}(v) for r = 0, 1, 2 and k = 4i + j, with all other entries equal to zero.
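To make Eqn.1 and Eqn.2 concrete, here is a small Python sketch (ours) that evaluates a uniform bicubic B-spline patch from a 4 by 4 control grid and assembles the 3 × 48 matrix J_p at a parameter value; the control-point ordering k = 4i + j is an assumption chosen to match the indexing described above.

import numpy as np

def bspline_basis(t):
    """The four uniform cubic B-spline basis functions at parameter t in [0, 1]."""
    return np.array([(1 - t) ** 3,
                     3 * t ** 3 - 6 * t ** 2 + 4,
                     -3 * t ** 3 + 3 * t ** 2 + 3 * t + 1,
                     t ** 3]) / 6.0

def patch_point(d, u, v):
    """d: 4 x 4 x 3 array of control points; returns s_p(u, v)."""
    Bu, Bv = bspline_basis(u), bspline_basis(v)
    return np.einsum('i,j,ijk->k', Bu, Bv, d)

def jacobian_Jp(u, v):
    """3 x 48 matrix with J_p(r, 3k + r) = B_i(u) B_j(v), k = 4 i + j."""
    Bu, Bv = bspline_basis(u), bspline_basis(v)
    Jp = np.zeros((3, 48))
    for i in range(4):
        for j in range(4):
            k = 4 * i + j
            for r in range(3):
                Jp[r, 3 * k + r] = Bu[i] * Bv[j]
    return Jp

# The two evaluations agree: s_p(u, v) = J_p(u, v) q_p.
d = np.random.rand(4, 4, 3)
q = d.reshape(-1)            # control points concatenated as in q_p
u, v = 0.3, 0.7
assert np.allclose(patch_point(d, u, v), jacobian_Jp(u, v) @ q)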
D.1 Limit surface with many bicubic patches from a rectangular initial mesh
Now let's consider a limit surface consisting of many bicubic surface patches obtained after applying
an infinite number of subdivision steps to a rectangular initial mesh. For example, let the limit surface of
Fig.1 be s_m, which can be written as the collection of four bicubic patches s_{m1}, s_{m2}, s_{m3} and s_{m4} (Eqn.3),
where s_{m1}(2u, 2v) = s_m(u, v) for 0 ≤ u, v ≤ 1/2, and 0 otherwise. Similarly, s_{m2}, s_{m3} and s_{m4} are also equal
to s_m(u, v) for an appropriate range of values of u, v and 0 outside. It may be noted that s_{m1}, s_{m2}, s_{m3} and s_{m4}
correspond to patches S_1, S_2, S_3 and S_4, respectively, in Fig.1. Rewriting Eqn.3 in generalized coordinates, we
have

s_m = \sum_{i=1}^{4} J_i q_i,   (4)

where the J_i are Jacobian matrices of size (3, 48) and the q_i are the (x, y, z) component concatenations of the
subsets of the control points of s_m defining s_{m_i}, i = 1, . . . , 4. A more general expression for s_m is

s_m = \sum_{i=1}^{4} J_i A_i q_m = J_m q_m,   (5)

where q_m is the 75-component vector of 3D positions of the 25-vertex control mesh defining the limit
surface s_m, the matrices A_i are of size (48, 75), each row consisting of a single nonzero entry
(= 1), and the (3, 75)-sized matrix J_m = \sum_{i=1}^{4} J_i A_i.
The stage is now set to define the limit surface s using the vertices of initial mesh M for any arbitrary
topology, assuming all faces are rectangular and no face contains more than one extraordinary point as
its vertex (i.e., extraordinary points are isolated). As mentioned earlier, if these assumptions are not
satisfied, one or two steps of global subdivision may be required and the resulting mesh can be treated
as the initial mesh. Let the number of vertices in the initial mesh M be a, and let l of these be the
extraordinary vertices. Let us assume that the number of faces in the initial mesh is b, and that k of
these have vertices with valence 4 (henceforth termed "normal faces"), and that each of the remaining (b - k)
faces has one of the l extraordinary vertices (henceforth termed "special faces"). Let p be the 3a-
dimensional vector containing the control vertex positions in 3D. Using the formulations in subsections
II-B and II-C, the smooth limit surface can be expressed as

s = \sum_{i=1}^{k} n_i + \sum_{j=1}^{l} s_j,   (6)

where n_i is a single bicubic patch assigned to each of the normal faces and s_j is a collection of an infinite number
of bicubic patches corresponding to each of the extraordinary points. Employing the same approach
taken before to derive Eqn.5, it can be shown that

n_i = ^nJ_i ^nA_i p,   (7)

where ^nJ_i and ^nA_i are the equivalents of J_i in Eqn.4 and A_i in Eqn.5, respectively. The pre-
superscript n is used to indicate that these mathematical quantities describe a bicubic patch in the limit
surface corresponding to a normal face.
We will use the following notational convention for describing various mathematical quantities used
in the derivation of the expression for a collection of infinite number of bicubic patches around an extraordinary
vertex. The pre-superscript s is used to represent a collection of bicubic patches around an
extraordinary vertex, the subscript j is used to indicate the j-th extraordinary point, the post-superscript
represents the exponent of a mathematical quantity and the level indicator (to represent various levels of
subdivision in the local control mesh around an extraordinary vertex) is depicted via subscripts on the
curly braces.
The expression for s_j is derived using the recursive nature of local subdivision around an extraordinary
vertex as shown in subsection II-C. First, s_j can be expressed as

s_j = {^sJ_j}_1 {^sp_j}_1 + {s_j}_1,   (8)

where the first term of Eqn.8 is the generalized coordinate representation of the bicubic B-spline patches
corresponding to the normal faces of the new local subdivision mesh obtained after one subdivision step
on the local control mesh (similar to those patches marked n in Fig.3), and {s_j}_1 represents the rest of the
infinite bicubic B-spline patches surrounding the extraordinary point (similar to the central patch enclosed
by patches marked n in Fig.3). The vertices {^sp_j}_1 in the newly obtained local subdivision mesh can be
expressed as a linear combination of a subset of the vertices of the initial mesh M (which will contribute to
the local subdivision) following the subdivision rules. We can name this subset of initial control vertices
^sp_j. Furthermore, there exists a matrix {^sA_j}_1 of size (3c, 3d) such that {^sp_j}_1 = {^sA_j}_1 ^sp_j, where {^sp_j}_1 and ^sp_j
are vectors of dimension 3c and 3d, respectively. Applying the idea of recursive
local subdivision again on {s_j}_1, s_j can be further expanded as

s_j = {^sJ_j}_1 {^sA_j}_1 ^sp_j + {^sJ_j}_2 {^sA_j}_2 {^s\tilde{p}_j}_1 + {s_j}_2.   (9)

In the above derivation, {^s\tilde{p}_j}_1 is a vector of dimension 3d, comprising a subset of the vertices defining
the 3c-dimensional vector {^sp_j}_1. Note that {^s\tilde{p}_j}_1 has the same structure as ^sp_j; therefore, there
exists a (3d, 3d) matrix {^sC_j}_1 such that {^s\tilde{p}_j}_1 = {^sC_j}_1 ^sp_j. Each subdivision of a local mesh with
d vertices creates a new local mesh with c vertices, which contributes a fixed number of bicubic B-spline
patches. So, if we proceed one step further, we obtain

s_j = {^sJ_j}_1 {^sA_j}_1 ^sp_j + {^sJ_j}_2 {^sA_j}_2 {^sC_j}_1 ^sp_j + {^sJ_j}_3 {^sA_j}_3 {^s\tilde{p}_j}_2 + {s_j}_3.   (10)
Because of the intrinsic property of the local recursive subdivision around the extraordinary point, we
have {^sJ_j}_1 = {^sJ_j}_2 = {^sJ_j}_3 = ... . In addition, since the subdivision rules remain the same
throughout the refinement process, we also have {^sC_j}_1 = {^sC_j}_2 = ... . Hence
we can further simplify the above equations, collapsing the expansion of s_j into a single infinite series.
Fig. 4. Local subdivision around the extraordinary point and the corresponding patches in the limit
surface from different levels of subdivision.
We can rewrite s_j as this infinite series, which expresses s_j as a linear function of the subset ^sp_j of the
initial control vertices. The idea of local recursive subdivision
around an extraordinary point is illustrated in Fig.4. Note that each vertex position in the subdivided
mesh is obtained by an affine combination of some vertices in the previous level, and hence any row of
^sC_j sums to 1. The largest eigenvalue of such a matrix is 1, and it can be shown that the corresponding
infinite series is convergent, following a similar approach as in [10]. The rest of the derivation leading to
an expression for s is relatively straightforward. Using the same approach used to derive Eqn.7, it
can be shown that the collection of patches s_j around each extraordinary vertex can also be written as a
linear function of the initial control vertex positions p (Eqn.13). From Eqn.6, 7 and 13, the limit surface
can then be expressed compactly as s = Jp, where J is the Jacobian matrix of the limit surface with respect
to the control vertex positions p.
We now treat the control point positions (alternatively, the vertex positions in the initial mesh) defining
the limit surface s as a function of time in order to develop our new dynamic model. The velocity of the
surface model can be expressed as \dot{s} = J\dot{p}, where an overstruck dot denotes a time derivative. The physics of the dynamic subdivision surface model
is based on the work-energy version of Lagrangian dynamics [38] and is formulated in an analogous way
to that in [27].
In an abstract physical system, let p_i(t) be a set of generalized coordinates which are functions of time
and are assembled into the vector p. Let f_i(t) be the generalized forces assembled into the vector f_p and
acting on p_i. The Lagrangian equation of motion can then be expressed as

M \ddot{p} + D \dot{p} + K p = f_p,   (17)

where M, D and K are the mass, damping and stiffness matrices of the model, respectively.
Let \mu(u, v) be the mass density function of the surface. Then

M = \int\int \mu J^T J \, du \, dv   (18)

is an N × N mass matrix. Similarly, the expression for the damping matrix is

D = \int\int \gamma J^T J \, du \, dv,   (19)

where \gamma(u, v) is the damping density.
A thin-plate-under-tension energy model [39] is used to compute the elastic potential energy of the
dynamic subdivision surface. The corresponding expression for the stiffness matrix K is

K = \int\int ( \alpha_{11} J_u^T J_u + \alpha_{22} J_v^T J_v + \beta_{11} J_{uu}^T J_{uu} + 2\beta_{12} J_{uv}^T J_{uv} + \beta_{22} J_{vv}^T J_{vv} ) \, du \, dv,   (20)

where the subscripts on J denote the parametric partial derivatives. The \alpha_{ii}(u, v) and \beta_{ij}(u, v) are
elasticity functions controlling local tension and rigidity in the two parametric coordinate directions. The
generalized force vector f_p can be obtained through the principle of virtual work [38] done by the applied
force distribution f(u, v, t) and can be expressed as

f_p(t) = \int\int J^T f(u, v, t) \, du \, dv.   (21)
E. Multilevel Dynamics
Our dynamic Catmull-Clark surface model can be subdivided globally to increase the number of vertices
(control points) of the model. For example, after one step of global subdivision, the initial degrees of
freedom p (refer to Eqn.15 and Eqn.16) in the dynamic system will be replaced by a larger number of
degrees of freedom q, where q = Ap. A is a global subdivision matrix of size (M, N) whose entries are
uniquely determined by the Catmull-Clark subdivision rules (see Section II-A for the details about the rules).
Thus, p, expressed as a function of q, can be written as

p = Bq, where B = (A^T A)^{-1} A^T.

Therefore, we can rewrite Eqn.15 and Eqn.16 as

s = JBq

and

\dot{s} = JB\dot{q},

respectively. Now we need to derive the equation of motion for this new subdivided model involving a larger
number of control vertices, namely q. We need to recompute the mass, damping and stiffness matrices
for this "finer" level. The structure of the motion equation as given by Eqn.17 remains unchanged, but
the dimensionality and the entries of M, D, K, p and f_p change correspondingly in this newly obtained
subdivided level. In particular, the motion equation, explicitly expressed as a function of q, can be written
as

M_q \ddot{q} + D_q \dot{q} + K_q q = f_q,

where M_q = \int\int \mu B^T J^T J B \, du \, dv, and the derivations of D_q, K_q and f_q follow suit.
It may be noted that further subdivision, if necessary, can be carried out in a similar fashion. Therefore,
multilevel dynamics is achieved through recursive subdivision on the initial set of control vertices. Users
can interactively choose the level of detail representation of the dynamic model as appropriate for their
modeling and design requirements. Alternatively, the system can automatically determine the level of
subdivision most suitable for an application depending on some application-specific criteria.
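As a small illustration of this multilevel update (our own sketch, with the least-squares choice B = (A^T A)^{-1} A^T assumed), the coarse-level matrices can be mapped to the subdivided level by conjugation with B:

import numpy as np

def refine_system(A, M_p, D_p, K_p):
    """A: global subdivision matrix with q = A p (shape M x N, M > N)."""
    # Least-squares left inverse, so that p = B q.
    B = np.linalg.solve(A.T @ A, A.T)
    # M_q = integral of mu B^T J^T J B du dv = B^T M_p B, and likewise for D, K.
    conjugate = lambda X: B.T @ X @ B
    return conjugate(M_p), conjugate(D_p), conjugate(K_p)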
III. IMPLEMENTATION
The evolution of the generalized coordinates for our new dynamic surface model can be determined
by the second-order differential equation as given by Eqn.17. An analytical solution of the governing
differential equation can not be obtained in general. However, an efficient numerical implementation can
be obtained using finite element analysis techniques [40]. For the dynamic subdivision surface model, two
types of finite elements are considered - normal elements (bicubic patches assigned to the normal faces of
the initial mesh) and special elements (collection of infinite number of bicubic patches assigned to each
extraordinary vertex of the initial mesh). In the current implementation, the M, D and K matrices for
each individual normal and special elements are calculated and they can be assembled into the global M,
D and K matrices that appear in the corresponding discrete equation of motion. In practice, we never
assemble the global matrices explicitly in the interest of time performance. The detailed implementation
is explained in the following subsections.
A. Data Structures
A subdivision surface defined by a control mesh at any level is designed as a class which has a pointer
to its parent mesh, a set of pointers to its offspring meshes (arising out of local subdivision around the
extraordinary vertices at that level), and a list of faces, edges, vertices and normal elements. Face, edge,
vertex and normal elements are, in turn, classes which store all the connectivity and other information
needed to either enumerate all the patches or locally subdivide around an extraordinary vertex in that
level. The implementation takes the initial mesh as the base subdivision surface object (with its parent
pointer set to NULL) and locally subdivides the initial mesh up to a user-defined maximum level around
each extraordinary vertex to create offspring objects at different levels. At this point, let's take a closer
look at the normal and special element data structures and the computation of the corresponding local M, D
and K matrices.
A.1 Normal Elements
Each normal element is a bicubic surface patch and is hence defined by 16 vertices (from the 8-connected
neighborhood of the corresponding normal face). Each normal element keeps a set of pointers to those
vertices of the initial mesh which act as control points for the given element. For a normal element, the
mass, damping and stiffness matrices are of size (16, 16) and can be computed exactly by carrying out
the necessary integrations analytically. The matrix J in Eqn.18, 19 and 20 needs to be replaced by J_p
(of Eqn.2) for the computation of the local M, D and K matrices, respectively, of the corresponding normal
element.
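The paper evaluates these 16 × 16 element integrals analytically; as a cross-check, the same matrix can also be obtained numerically. The sketch below (ours) computes the scalar block m[a, b] = \int\int \mu N_a N_b du dv with N_a(u, v) = B_i(u) B_j(v) and a = 4i + j by Gauss-Legendre quadrature, reusing bspline_basis from the earlier patch sketch and assuming a constant mass density \mu.

import numpy as np

def element_mass_block(mu=1.0, order=4):
    nodes, weights = np.polynomial.legendre.leggauss(order)
    nodes, weights = 0.5 * (nodes + 1.0), 0.5 * weights   # map [-1, 1] to [0, 1]
    M = np.zeros((16, 16))
    for wu, u in zip(weights, nodes):
        Bu = bspline_basis(u)
        for wv, v in zip(weights, nodes):
            N = np.outer(Bu, bspline_basis(v)).reshape(16)   # N_a, a = 4 i + j
            M += wu * wv * mu * np.outer(N, N)
    return M

# The full (48, 48) form of Eqn.18 places one copy of this block on each of the
# x, y and z coordinates.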
A.2 Special Elements
Each special element consists of an infinite number of bicubic patches in the limit. We have already
described a recursive enumeration of the bicubic patches of a special element in Section II-C. Let us now
consider an arbitrary bicubic patch of the special element at some level j. The mass matrix M_s of this
patch can be written as

M_s = \Omega_s^T M_p \Omega_s,

where M_p is the normal element mass matrix (scaled by a factor of 1/4^j to take into account the area
shrinkage in bicubic patches at higher levels of subdivision) and \Omega_s is the transformation matrix of the
control points of that arbitrary patch from the corresponding control points in the initial mesh. The
damping and stiffness matrices for the given bicubic patch can be derived in an exactly similar fashion.
All these mass, damping and stiffness matrices can be assembled to form the mass, damping and
stiffness matrices of the special element. As mentioned in Section II-D.2, the infinite series summation is
convergent. However, it has been found that the contribution from bicubic patches of a special element
at a higher level of subdivision to the mass, damping and stiffness matrices becomes negligible and in the
implementation, the local subdivision is carried out until the contribution is small enough to be ignored.
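Because each further level contributes terms scaled down by the parametric area factor, the infinite sum for a special element can be truncated once a level's contribution is negligible. The sketch below is schematic: patch_transforms(level) is a hypothetical helper that would return the Omega_s matrices produced by the local-subdivision machinery, and the 1/4-per-level area factor is our reading of the scaling mentioned above.

import numpy as np

def special_element_mass(M_p, patch_transforms, tol=1e-8, max_level=20):
    """Accumulate Omega^T M_p Omega over levels until the newest term is negligible."""
    M_s = None
    for level in range(1, max_level + 1):
        scale = 0.25 ** level   # assumed area shrinkage per subdivision level
        term = sum(scale * Om.T @ M_p @ Om for Om in patch_transforms(level))
        M_s = term if M_s is None else M_s + term
        if np.linalg.norm(term) < tol * np.linalg.norm(M_s):
            break
    return M_s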
B. Force Application
The force f (u; v; t) in Eqn.21 represents the net effect of all applied forces. The current implementation
supports spring, inflation as well as image-based forces. However, other types of forces like repulsion
forces, gravitational forces etc. can easily be implemented.
To apply spring forces, a spring of stiffness k can be connected from a point d_0 to a point s(u_0, v_0) on
the limit surface, the net applied spring force being

\int\int k (d_0 - s(u, v, t)) \delta(u - u_0, v - v_0) \, du \, dv,

where \delta is the unit impulse function, implying that f vanishes elsewhere on
the surface. However, the \delta function can be replaced with a smooth kernel to spread the force over a
greater portion on the surface. The spring forces can be applied interactively using a mouse button or
the points from which forces need to be applied can be read in from a file.
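For a single spring attached at a fixed parameter location (u0, v0) of one normal element, the delta function collapses the integral of Eqn.21 to a single evaluation of J_p; a sketch of that reduction (ours), reusing jacobian_Jp from the earlier example, is:

import numpy as np

def spring_generalized_force(q, k, d0, u0, v0):
    """q: 48-vector of element control points; d0: anchor point of the spring."""
    Jp = jacobian_Jp(u0, v0)                    # 3 x 48
    surface_point = Jp @ q                      # s_p(u0, v0)
    cartesian_force = k * (np.asarray(d0, dtype=float) - surface_point)
    return Jp.T @ cartesian_force               # element contribution to f_p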
To recover shapes from 3D image data, we synthesize image-based forces. A 3D edge detection is
performed on a Gaussian smoothed volume data set using the 3D Monga-Deriche(MD) operator [41] to
produce a 3D potential field P(x, y, z), which we use as an external potential for the model. The force
distribution f(x, y, z) is then computed from the gradient of this potential, scaled by a constant k that
controls the strength of the force. The applied force on each element is computed using Gaussian
quadrature for evaluating Eqn.21 in Cartesian coordinates. It may be noted that we can apply spring
forces in addition to the image-based forces by placing points near the region of interest in the slices of
the 3D image data.
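A minimal stand-in for this force synthesis, with scipy's Gaussian filter and a plain gradient-magnitude potential replacing the Monga-Deriche operator, and with an unnormalized gradient assumed, might look as follows:

import numpy as np
from scipy.ndimage import gaussian_filter

def image_force_field(volume, sigma=2.0, k=1.0):
    smoothed = gaussian_filter(volume.astype(float), sigma)
    gx, gy, gz = np.gradient(smoothed)
    P = np.sqrt(gx**2 + gy**2 + gz**2)          # edge-strength potential P(x, y, z)
    fx, fy, fz = np.gradient(P)                 # forces pull toward strong edges
    return k * np.stack([fx, fy, fz], axis=-1)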
C. Discrete Dynamic Equation
The differential equation given by Eqn.17 is integrated through time by discretizing the time derivatives
of p over time steps \Delta t. The state of the dynamic subdivision surface at time t + \Delta t is integrated using
prior states at time t and t - \Delta t. An implicit time integration method is used in the current implementation,
where the discrete derivatives of p are calculated using

\ddot{p}(t + \Delta t) = ( p(t + \Delta t) - 2 p(t) + p(t - \Delta t) ) / \Delta t^2   (29)

and

\dot{p}(t + \Delta t) = ( p(t + \Delta t) - p(t - \Delta t) ) / (2 \Delta t).   (30)

Using Eqn.17, 29 and 30, the discrete equation of motion is obtained as

( M/\Delta t^2 + D/(2\Delta t) + K ) p(t + \Delta t) = f_p + (2M/\Delta t^2) p(t) + ( D/(2\Delta t) - M/\Delta t^2 ) p(t - \Delta t).   (31)

This linear system of equations is solved iteratively between each time step using the conjugate gradient
method. For a first-order system with no mass, the above equation reduces to

( D/(2\Delta t) + K ) p(t + \Delta t) = f_p + ( D/(2\Delta t) ) p(t - \Delta t),

which gives a faster convergence.
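One pass of the implicit update of Eqn.31, with the linear system handed to a conjugate gradient solver as the text describes, can be sketched as follows (our own code, not the authors'):

import numpy as np
from scipy.sparse.linalg import cg

def implicit_step(M, D, K, f, p_now, p_prev, dt):
    """Solve (M/dt^2 + D/(2 dt) + K) p_next = rhs for the next state p_next."""
    A = M / dt**2 + D / (2.0 * dt) + K
    rhs = f + (2.0 / dt**2) * (M @ p_now) + (D / (2.0 * dt) - M / dt**2) @ p_prev
    p_next, info = cg(A, rhs, x0=p_now)
    if info != 0:
        raise RuntimeError("conjugate gradient did not converge")
    return p_next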
D. Model Subdivision
The initialized model grows dynamically according to the equation of motion (Eqn.17) and when an
equilibrium is achieved at a given level of subdivision, the model can be subdivided, if necessary, according
to the Catmull-Clark subdivision rules to increase the number of vertices (control points) and a better fit
to the data can be achieved. Currently, the error-of-fit criterion is based on the distance between the data points
and the points on the limit surface where the corresponding springs are attached. However, other types
of error criteria can also be defined and used in this context. For example, in the context of image-based
forces, if the model energy does not change between successive iterations indicating an equilibrium for the
given resolution, the model can be subdivided further until the model energy is sufficiently small and the
change in energy between successive iterations becomes less than a pre-specified tolerance.
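The error-of-fit test described here, quantified later in Section IV as the maximum distance between a data point and the surface point its spring is attached to, expressed as a percentage of the object diameter, can be sketched as follows (the diameter is approximated from the centroid and all names are illustrative):

import numpy as np

def needs_subdivision(data_points, attachment_points, tolerance_percent=1.0):
    data = np.asarray(data_points, dtype=float)
    attached = np.asarray(attachment_points, dtype=float)
    max_error = np.max(np.linalg.norm(data - attached, axis=1))
    # Approximate object diameter as twice the largest distance from the centroid.
    diameter = 2.0 * np.max(np.linalg.norm(data - data.mean(axis=0), axis=1))
    return 100.0 * max_error / diameter > tolerance_percent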
IV. RESULTS
The proposed dynamic subdivision surface can be used to represent a wide variety of shapes with
arbitrary genus. In this section we demonstrate the power of our modeling scheme via model fitting
examples to a variety of data sets of varying degree of complexity. In all the experiments, normal elements
are shaded yellow, while special elements are colored green.
In Fig.5(a) an open limit surface defined by an initial mesh of 61 vertices and 45 faces is shown. The
mesh has one extraordinary point of valence 5. The limit surface is acted upon by spring forces as shown
in Fig.5(b). The evolving model and its control mesh are shown in Fig.5(c) and (d). The final fitted model
is depicted in Fig.5(e) and (f). It may be noted that the model controlled by the initial mesh reached a local
minimum without fitting the points exactly. In order to obtain an exact fit (Fig.5(f)), the control mesh
is subdivided once thereby increasing the degrees of freedom (control vertices) of the underlying model.
Thus the dynamics can be applied in a hierarchical fashion. The developed model can be used to obtain a
very fast approximate fit with a small number of vertices, and an exact fit after more subdivision steps
as needed.
In the next experiment, we show the fitting process using spring forces with a closed surface of genus
two (Fig.6). The smooth surface is controlled by an initial mesh of 544 faces and 542 vertices, 8 of them
being extraordinary points of valence 5. In this experiment, the model has sufficient degrees of freedom
and fitted the data points exactly without needing further subdivision of its control mesh.
In all the experiments to follow, the initialized model had 96 faces and 98 vertices, 8 of them being
extraordinary vertices of valence 3. The final fitted model, obtained through one step of subdivision, has
a control polygon of 384 faces with 386 vertices. The tolerance level of the error in fit, which is defined
as the maximum distance between a data point and the nearest point on surface as a percentage of the
object diameter, was set to be 1%.
In Fig.7, we demonstrate the model fitting algorithm applied to laser range data acquired from multiple
views of a light bulb. Prior to applying our algorithm, the data were transformed into a single reference
coordinate system. The model was initialized inside the 1000 range data points on the surface of the bulb.
In the next experiment, the shape of a human head is recovered from a range data set as shown in
Fig.8. The range data set has 1779 points in 3D. It may be noted that the final shape with a very low
error tolerance is recovered using very few number of control points in comparison to the number of data
points present in the original range data set. Fitting example with an anvil data set is shown in Fig.9.
The anvil data set has 2031 data points.
We show the application of our model to anatomical shape recovery from 3D volumetric MRI data in
the last two experiments. First, we fit the model to a cerebellum (a cortical structure in the brain) given an
input of sagittal slices from an MR brain scan. Fig.10(a) depicts a slice from this MRI scan, and the
model initialization is shown in Fig.10(b). Continuous image based forces are applied to the model and
the model deforms under the influence of these forces until it maximally conforms to the boundaries of
the desired cerebellum shape. Fig.10(c) depicts an intermediate stage of the model evolution during the
fitting process and the final fitted model is shown in Fig.10(d). Arbitrary 3D views of the fitted model
from different viewing angles are depicted in Fig.10(e) and (f).
In the last experiment, we present the shape extraction of a caudate nucleus (another cortical structure
in the human brain) from 64 MRI slices, each of size 256 × 256. Fig.11(a) depicts a slice from this MRI scan
along with the points placed by an expert neuroscientist on the boundary of the shape of interest. Fig.11(b)
depicts the data points (placed in each of the slices depicting the boundary of the shape of interest) in 3D.
Note that points had to be placed on the boundary of the caudate nucleus due to lack of image gradients
delineating the caudate from the surrounding tissue in parts of the image. Fig.11(c) depicts the initialized
model and the data points. Continuous image based forces as well as spring forces are applied to the
model, and the model deforms under the influence of these forces until it maximally conforms to the
boundaries of the desired caudate shape. Fig.11(d) depicts an intermediate stage of the model evolution
during the fitting process, and two arbitrary views of the final fitted model in 3D are shown in Fig.11(e)
and (f).
V. CONCLUSIONS
In this paper, a dynamic generalization of the Catmull-Clark subdivision surfaces is presented which
has numerous applications in geometric modeling, computer graphics and visualization. Apart from
providing a direct and intuitive way of manipulating shapes, it facilitates the modeling and shape analysis
of objects contained in range and volume data sets using very few degrees of freedom. We have presented
an analytic formulation of the subdivision scheme, incorporated the advantages of free-form deformable
models in subdivision scheme, introduced hierarchical dynamic control and shown the advantages of our
model via experiments. However, the current scheme cannot recover very sharp edges in the data. Also,
the initialization is interactive; ideally, initialization should be done automatically on the basis of the
input data set. Our future efforts will be focused on addressing these issues.
VI. ACKNOWLEDGEMENTS
This research was supported in part by the NSF grant ECS-9210648 and the NIH grant RO1-LM05944
to BCV, the NSF CAREER award CCR-9702103 and DMI-9700129 to HQ. We also wish to acknowledge
Dr. H. Hoppe and Dr. K. Pulli for the range data and Dr. C.M. Leonard for the brain MRI data.
--R
"An algorithm for high speed curve generation,"
"A subdivision algorithm for smoothing down irregularly shaped polyhedrons,"
"Analysis of the behavior of recursive division surfaces near extraordinary points,"
The use of piecewise forms for the numerical representation of shape
"Recursively generated B-spline surfaces on arbitrary topological meshes,"
"Conditions for tangent plane continuity over recursively generated B-spline surfaces,"
"An investigation of curvature variations over recursively generated B-spline surfaces,"
"A unified approach to subdivision algorithms near extraordinary points,"
"Smooth subdivision surfaces based on triangles,"
"Efficient, fair interpolation using Catmull-Clark surfaces,"
"A signal processing approach to fair surface design,"
"A butterfly subdivision scheme for surface interpolation with tension control,"
"Subdivision schemes for surface interpolation,"
"Interpolating subdivision for meshes with arbitrary topology,"
"Elastically deformable models,"
"Deformable models,"
"Good vibrations : Modal dynamics for graphics and animation,"
"Dynamic deformation of solid primitives with constraints,"
"Multiresolution stochastic hybrid shape models with fractal priors,"
"Deformable curve and surface finite elements for free-form shape design,"
"Representing PDE surfaces in terms of B-splines,"
"Using partial differential equations to generate free-form surfaces,"
"Linear constraints for deformable B-spline surfaces,"
"Variational surface modeling,"
"Dynamic NURBS swung surfaces for physics-based shape design,"
physics-based framework for geometric design,"
"Dynamic NURBS with geometric constraints for interactive sculpting,"
"Finite-element methods for active contour models and balloons for 2-d and 3-d images,"
"Dynamic 3d models with local and global deformations: Deformable superquadrics,"
"Shape and nonrigid motion estimation through physics-based synthesis,"
"Recovery of non-rigid motion and structure,"
"Surface description of complex objects from multiple range images,"
"Adaptive-size physically-based models for nonrigid motion analysis,"
"Hierarchical shape representation using locally adaptive finite elements,"
"A finite element model for 3d shape reconstruction and nonrigid motion tracking,"
"Adaptive meshes and shells : Irregular triangulation, discontinuities and hierarchical subdivision,"
Hamilton's Principle and Physical Systems
"Regularization of inverse visual problems involving discontinuities,"
The Finite Element Handbook
"3d edge detection using recursive filtering,"
--TR
--CTR
Kaihuai Qin , Zhengyi Chang , Huawei Wang , Denggao Li, Physics-based loop surface modeling, Journal of Computer Science and Technology, v.17 n.6, p.851-858, November 2002
Frank Dachille, IX , Hong Qin , Arie Kaufman , Jihad El-Sana, Haptic sculpting of dynamic surfaces, Proceedings of the 1999 symposium on Interactive 3D graphics, p.103-110, April 26-29, 1999, Atlanta, Georgia, United States
Ye Duan , Hong Qin, Intelligent balloon: a subdivision-based deformable model for surface reconstruction of arbitrary topology, Proceedings of the sixth ACM symposium on Solid modeling and applications, p.47-58, May 2001, Ann Arbor, Michigan, United States
Han , Raffaele De Amicis , Giuseppe Conti, Interactive spline-driven deformation for free-form surface styling, Proceedings of the 2006 ACM symposium on Solid and physical modeling, June 06-08, 2006, Cardiff, Wales, United Kingdom
Seth Green , George Turkiyyah , Duane Storti, Subdivision-based multilevel methods for large scale engineering simulation of thin shells, Proceedings of the seventh ACM symposium on Solid modeling and applications, June 17-21, 2002, Saarbrcken, Germany
Kevin T. McDonnell , Hong Qin , Robert A. Wlodarczyk, Virtual clay: a real-time sculpting system with haptic toolkits, Proceedings of the 2001 symposium on Interactive 3D graphics, p.179-190, March 2001
Guiqing Li , Hua Li, Blending parametric patches with subdivision surfaces, Journal of Computer Science and Technology, v.17 n.4, p.498-506, July 2002
Gerold Wesche , Hans-Peter Seidel, FreeDrawer: a free-form sketching system on the responsive workbench, Proceedings of the ACM symposium on Virtual reality software and technology, November 15-17, 2001, Banff, Alberta, Canada
Chhandomay Mandal , Hong Qin , Baba C. Vemuri, A novel FEM-based dynamic framework for subdivision surfaces, Proceedings of the fifth ACM symposium on Solid modeling and applications, p.191-202, June 08-11, 1999, Ann Arbor, Michigan, United States
Ye Duan , Hong Qin, A subdivision-based deformable model for surface reconstruction of unknown topology, Graphical Models, v.66 n.4, p.181-202, July 2004
Gerold Wesche , Marc Droske, Conceptual free-form styling on the responsive workbench, Proceedings of the ACM symposium on Virtual reality software and technology, October 22-25, 2000, Seoul, Korea
Steve Capell , Seth Green , Brian Curless , Tom Duchamp , Zoran Popović, A multiresolution framework for dynamic deformations, Proceedings of the 2002 ACM SIGGRAPH/Eurographics symposium on Computer animation, July 21-22, 2002, San Antonio, Texas
Zoë J. Wood , Peter Schröder , David Breen , Mathieu Desbrun, Semi-regular mesh extraction from volumes, Proceedings of the conference on Visualization '00, p.275-282, October 2000, Salt Lake City, Utah, United States
Chhandomay Mandal , Hong Qin , Baba C. Vemuri, Dynamic Modeling of Butterfly Subdivision Surfaces, IEEE Transactions on Visualization and Computer Graphics, v.6 n.3, p.265-287, July 2000
Denis Zorin, Modeling with multiresolution subdivision surfaces, ACM SIGGRAPH 2006 Courses, July 30-August 03, 2006, Boston, Massachusetts | finite elements;subdivision surfaces;interactive techniques;computer graphics;visualization;dynamics;CAGD;deformable models |
614463 | Accessibility Analysis Using Computer Graphics Hardware. | Abstract: Analyzing the accessibility of an object's surface to probes or tools is important for many planning and programming tasks that involve spatial reasoning and arise in robotics and automation. This paper presents novel and efficient algorithms for computing accessible directions for tactile probes used in 3D digitization with Coordinate Measuring Machines. The algorithms are executed in standard computer graphics hardware. They are a nonobvious application of rendering hardware to scientific and technological areas beyond computer graphics. | Introduction
Reasoning about space is crucial for planning and programming of tasks
executed by robots and other computer-controlled machinery. Accessibility
analysis is a spatial reasoning activity that seeks to determine the directions
along which a tool or probe can contact a given portion of a solid object's
surface. For concreteness, this paper discusses accessibility in the context
of automatic inspection with Coordinate Measuring Machines (CMMs), but
the concepts and algorithms are applicable to many other problems such
as tool planning for assembly [30, 31], sensor placement for vision [14, 28],
numerically controlled machining [3, 6], and so on.
Figure 1: A typical coordinate measuring machine. The large component directly attached to the touch probe is called the ram.
A CMM (Figure 1) is essentially a very precise Cartesian robot equipped
with a tactile probe, and used as a 3-D digitizer [2]. The probe, under computer
control, touches a sequence of points in the surface of a physical object
to be measured, and the CMM produces a stream of x, y, z coordinates of
the contact points. The coordinate stream is interpreted by algorithms that
support applications such as reverse engineering, quality control, and process
control. In quality and process control, the goal is to decide if a manufactured
object meets its design specifications. This task is called dimensional inspec-
tion, and amounts to comparing the measurements obtained by a CMM with
a solid model of the object. The model defines not only the solid's nominal
or ideal geometry, but also the tolerances or acceptable deviations from
the ideal [1]. The inspection results are used to accept or reject workpieces
(quality control), and also to adjust the parameters of the manufacturing
processes (process control).
This paper focuses on accessibility analysis for automatic planning and
programming of dimensional inspection tasks with CMMs. Given a solid
model of an object, including tolerances, and a specification of the task (typ-
ically as a set of features to be inspected), the goal is to generate a high-level
plan for the task, and then to expand this plan into a complete program for
driving the CMM and inspecting the object. The high-level plan specifies
how to setup the part on the CMM table, which probes to use and how to
orient them, and which surface features to measure with each setup, probe
and probe orientation. The final program contains specific probe paths and
points to be contacted by the probe tip, and is interpretable by the CMM
controller.
Automatic planning algorithms are beyond the scope of this paper, but a
brief outline of the planning methodology we assume is useful to understand
the role of accessibility analysis. We envisage a planner that operates within
the generate-test-repair paradigm. A tentative plan is generated by taking
into consideration the task and various constraints, the most important of
which is accessibility. A proposed plan is then tested by simulation. If
the test fails, for example because there are tool/workpiece collisions, the
plan is either repaired or a new tentative plan is generated. The generate-
test-repair cycle continues until an acceptable solution has been found. This
architecture requires a smart generator that produces good plans most of the
time, to avoid expensive repair and backtracking. The knowledge obtained
through accessibility analysis guides the plan generator to favorable solutions.
However, a plan proposed by the generator does not necessarily have to be
correct, because no plan is accepted without being tested for correctness.
It follows that the accessibility algorithms used by the plan generator can
produce erroneous results without compromising the correctness of the final
plan. Of course, incorrect results must occur infrequently, or testing will fail
too often and planning will proceed very slowly.
Exact and complete algorithms for accessibility analysis in the domain of
curved objects are either unknown or impractically slow. We present in this
paper several accessibility algorithms that make a variety of approximations
and trade speed of execution for accuracy or correctness. Some of the approximations
are pessimistic, i.e., they may miss correct solutions, typically as a
result of discretizations. Other approximations are optimistic and may sometimes
produce incorrect solutions. (These will eventually be rejected when
the plan is tested.) The algorithms described here have been implemented
and tested on real-world mechanical parts, and have been incorporated in a
prototype inspection planner.
The remainder of the paper is organized as follows. First, related work is
briefly reviewed. Next, we discuss accessibility for the tips of probes, and then accessibility for the case in which probes are straight, i.e., aligned with the CMM's ram. Then we consider bent probes, which consist of two non-aligned components. A final section summarizes the paper and draws conclusions.
Related Work
2.1 Accessibility Analysis
Spyridi and Requicha introduced the notion of accessibility analysis as a tool
for high-level inspection planning for CMMs [24, 25, 26]. Their implementation
computed exact global accessibility cones (GACs, defined below) for
planar faces of polyhedral parts using Minkowski operations. Sets of direc-
tions, called direction cones, were represented as 2-D boundaries on the unit
sphere and GACs were computed by projecting elements of the Minkowski
sum onto the sphere. Their algorithm proved to be impractical for complex
parts with curved surfaces.
Other researchers computed GACs at single points, thus eliminating the
need of computing Minkowski sums. This is the approach that we take as
well. Lim and Menq used a ray casting technique with an emphasis on
parts with free-form surfaces [11]. Limaiem and ElMaraghy developed a
method to compute GACs that used standard operations on solids [12, 13].
A similar technique was independently developed by Jackman and Park [7].
Medeiros et al. use visibility maps, which provide a representation for non-homogeneous
direction cones [9, 33, 34]. All of the above methods are too
slow for practical inspection planning, where many accessibility cones must
be computed for complex objects.
Accessibility analysis is related to work in other fields. The visibility
problem is a generalization of the global accessibility problem, because directions
of accessibility correspond to points of visibility at infinity [16, 17, 4].
Sensor placement in visual inspection systems is related to the problem of
straight probe accessibility [14, 27, 28]. Other fields which require accessibility
analysis for high-level task planning include assembly planning [30, 31]
and numerically-controlled machining [3, 6, 8, 29, 32].
We are not aware of previous work involving accessibility analysis of bent
probes as introduced in Section 5. The theoretical foundations for bent
probe accessibility appear in [24], and a rigorous mathematical analysis of
accessibility in [23].
2.2 Cubic Maps
We use a cubic mapping of the unit sphere to represent direction cones, i.e.,
subsets of the unit sphere. This technique has been used in other areas of
computer graphics, such as radiosity, shadow computations and reflections.
This is not surprising, because global accessibility is strongly related to global
visibility, as noted above.
Environment (or reflection) maps ([5], pg. 758-759) are a generalization
of the direction-cone map presented here. The environment map holds color
images, while the direction-cone map holds bitmaps. The light-buffer ([5],
pg. 783) is another cubic map that is used to partition the space visible
to a light source. The hemi-cube structure ([5], pg. 795-799) is used to
determine visibility between surface patches and calculate their contribution
to the radiosity equation. Unlike our direction-cone mapping, the hemi-cube
is not aligned with the world coordinate system. It is aligned with each
patch, and therefore is not suitable for Boolean operations between direction
cones.
Shadow maps ([5], pg. 752) are depth images of a scene as viewed from
a light source. These are used to compute the space that is visible to a light
source in order to apply global shading. This is not a cubic map, but the
technique used in the two-pass z-buffer shading algorithm ([5], pg. 752) is
similar to our method of extracting the first-component directions of a bent
probe (Section 5.3).
3 Tip Accessibility
A CMM has a touch-trigger (or tactile) probe with a spherical tip. We define
the origin of the probe to be the center of the tip. The CMM measures the
spatial coordinates of the tip's center when the tip comes in contact with
an obstacle. We say that a point p is accessible to a tip with respect to
an obstacle X if the tip does not penetrate X when its origin is placed at
p. With CMMs, the obstacle X is normally the workpiece to be inspected.
(Fixturing devices and other obstacles are ignored here, because they are not
relevant to accessibility analysis in the early stages of inspection planning.)
Testing tip accessibility is a simple matter of placing the tip at p and
checking for collisions with the obstacle. Notice that if p is the point to
be measured on the surface of the workpiece, then placing the center of the
tip at p will cause the probe to penetrate the part. Instead, we perform
accessibility analysis for the offset point p + r~n (Figure 2), where r
is the radius of the tip and ~n is the normal to the surface at p. (We assume
that p is not singular, because it is not wise to measure a singular point with
Figure 2: The offset point.
Figure 3: A straight probe and some possible abstractions: (a) the actual probe (ram, stylus, tip); (b) a half-line; (c) a grown half-line.
a tactile probe.) For the remainder of this paper, we ignore these issues
and assume that p is the offset point to be accessed by the center of the tip.
We assume that the CMM has a small number of probes, therefore testing
the accessibility of each tip at each point is reasonable. See [15] for an
alternative approach.
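As a concrete illustration of this test, here is a minimal C++ sketch; the distanceToObstacle query is an assumed helper (for example a point-to-mesh distance routine) and is not part of the system described here.

#include <functional>

struct Vec3 { double x, y, z; };

// Offset the surface point p along its unit normal n by the tip radius r.
Vec3 offsetPoint(const Vec3& p, const Vec3& n, double r) {
    return { p.x + r * n.x, p.y + r * n.y, p.z + r * n.z };
}

// The tip (a ball of radius r) can be placed at the offset point without
// penetrating the obstacle iff the clearance there is at least r.
bool tipAccessible(const Vec3& p, const Vec3& n, double r,
                   const std::function<double(const Vec3&)>& distanceToObstacle) {
    Vec3 q = offsetPoint(p, n, r);
    return distanceToObstacle(q) >= r;
}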
4 Straight Probes
A straight probe (Figure 3a) is attached to the CMM ram (Figure 1), which
is much longer than the probe and aligned with its axis. In the remainder
of this paper we refer to the whole ram/probe assembly as a straight
probe and assume in this case that the CMM has a fixed head, such as the
Renishaw PH6 [18]. In general, the straight probe can be any tool, not
necessarily a CMM probe, that is symmetric about an axis. Examples of
Figure 4: The GAC of point p with respect to obstacle X.
such tools are drills, screw drivers and laser range finders.
On the axis of the tool we define a point that is the origin of the tool.
Accessibility analysis for a point p with respect to an obstacle X seeks to
determine the directions of the tool axis, such that the tool does not penetrate
X when the tool's origin is placed at p.
In this section we investigate the accessibility of a point by several straight
probe abstractions. Then we generalize to the accessibility of surfaces and
briefly outline how to apply the results to setup planning for dimensional
inspection with CMMs.
4.1 Half-line Probes
Consider a straight probe abstracted by a half-line that is the main axis of
the probe (see Figure 3b). This is an optimistic abstraction of the probe,
because it ignores the fact that the probe has volume, but it captures the
fact that the CMM ram is typically very long. Furthermore, it is simplistic
enough to give rise to efficient algorithms.
We say that a point p in the presence of obstacle X is accessible if the
endpoint of a half-line can be placed at p while not penetrating X. The
direction of such a half-line is called an accessible direction. The set of all
accessible directions is called the global accessibility cone (GAC) of point p
with respect to obstacle X, and is denoted by GAC(X; fpg).
Figure 4 illustrates the global accessibility cone of a point p with respect
to an obstacle X. GAC(X; fpg) is the highlighted portion of the unit sphere
centered at p. It is easy to verify that the GAC complement is the projection
of X onto the sphere. This forms the basis for our algorithm to compute the
GAC of a point: project the obstacle onto a sphere centered at p and take
the complement.
The global accessibility cone is computed in the same fashion as environment
maps [5]. The obstacle (i.e., environment) is projected onto the faces
Figure 5: Experimental results - GAC.
of a cube centered at p. The cube is aligned with the world coordinate sys-
tem, and each face is a bitmap. The algorithm first sets all the bits of the
cube to 1, then renders the obstacle as 0s in order to delete the projected
obstacle from the direction cone. The remaining directions are the desired
complement set that form the GAC.
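To make the bookkeeping concrete, the following is a minimal C++ sketch of the cubic-map data structure and of the render-and-complement loop; the renderObstacleIntoFace callback is an assumption that stands in for the hardware pass which draws the obstacle through a 90-degree frustum oriented along the face and reads the coverage back as a bitmap.

#include <array>
#include <bitset>
#include <functional>

constexpr int N = 32;                    // bitmap resolution per cube face
using FaceBitmap = std::bitset<N * N>;   // one bit per direction sample

struct DirectionCone {
    std::array<FaceBitmap, 6> face;      // +X, -X, +Y, -Y, +Z, -Z
};

// Assumed rendering pass: bits are set where the obstacle, seen from the
// point of interest, projects onto the given cube face.
using FaceRenderer = std::function<FaceBitmap(int faceIndex)>;

DirectionCone computeGAC(const FaceRenderer& renderObstacleIntoFace) {
    DirectionCone gac;
    for (int f = 0; f < 6; ++f) {
        gac.face[f].set();                          // all directions start accessible
        gac.face[f] &= ~renderObstacleIntoFace(f);  // delete the projected obstacle
    }
    return gac;
}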
The cubic map is used as a non-uniform partition of the unit sphere.
There is a one-to-one mapping between directions and unit vectors in 3-D
space, therefore the cubic map is a discrete representation for a set of directions
or a direction cone. The mapping between a direction and a bit on the
cube involves selecting the appropriate face of the cube, projecting the unit
vector (i.e., direction) onto the face, and normalizing the result to bitmap
coordinates. The direction falls on the face of the cube that lies on the axis
of the largest coordinate of the (x; y; z) vector, with the sign of this
coordinate used to distinguish between the two opposing faces.
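The mapping just described can be written down directly; the sketch below uses face indexing and [-1, 1] normalization conventions of my own choosing, not taken from the paper.

#include <algorithm>
#include <cmath>

// Map a unit direction (x, y, z) to (face, row, col) on an N x N cubic map.
// Faces are indexed 0..5 as +X, -X, +Y, -Y, +Z, -Z.
void directionToBit(double x, double y, double z, int N,
                    int& face, int& row, int& col) {
    double ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    double major, u, v;
    if (ax >= ay && ax >= az)      { face = (x > 0) ? 0 : 1; major = ax; u = y; v = z; }
    else if (ay >= ax && ay >= az) { face = (y > 0) ? 2 : 3; major = ay; u = x; v = z; }
    else                           { face = (z > 0) ? 4 : 5; major = az; u = x; v = y; }
    // Project onto the face (divide by the dominant coordinate), giving u, v
    // in [-1, 1], then normalize to bitmap coordinates.
    u /= major;
    v /= major;
    col = std::min(N - 1, static_cast<int>((u + 1.0) * 0.5 * N));
    row = std::min(N - 1, static_cast<int>((v + 1.0) * 0.5 * N));
}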
Figure 5a shows the result of running our algorithm on a real-world mechanical
part, which was modeled in ACIS [22]. The GAC is projected onto
a sphere that is centered about the point of interest (in the center of the
figure). As a preprocessing step, the ACIS faceter produced the mesh that
was used to render the mechanical part. This mesh is a collection of convex
polygons that were not optimized for rendering other than being placed in
an OpenGL display list. The mechanical part contains 103 faces (including
curved surfaces) and the mesh contains 1980 polygons. The code executed
on a Sun ULTRA 1 with Creator 3D graphics hardware, Solaris 2.1 and 128
MB of memory. Direction cones were represented by six 32 × 32 bitmaps, for
a total of 6144 directions at a cost of 768 bytes. The running time for the
algorithm was 0.08 seconds. Not surprisingly, most of the load was on the
graphics hardware.
The complexity of the algorithm described above depends solely on the
time to render the obstacles. The obstacle X may be rendered many times
to compute GACs at different points, so it is wise to optimize the mesh used
to display X. For example, one can use triangle strips [21].
4.2 Grown Half-lines
In the previous section we abstracted a straight probe by a half-line. Here
we generalize this to a half-line that is grown by a radius r (see Figure 3c).
An object grown by a radius r includes all the points that are at a distance
no greater than r from another point in the object [20]. It also equals the
Minkowski sum of the object and a ball of radius r. Thus, a grown half-line
is a semi-infinite cylinder with a hemi-sphere over the base. This leads to a
straight probe abstraction that can serve as an envelope for the volume of a
probe, and therefore is a pessimistic approximation.
It is easy to verify that a half-line grown by a radius r penetrates an obstacle X iff the non-grown half-line penetrates X↑r, i.e., X grown by r. (This is a well known result in robot motion planning [10].) In other words, GAC(X↑r; fpg) describes the directions from which a point p is accessible by a half-line grown by r.
A straightforward algorithm to compute GAC(X↑r; fpg) is to compute the grown obstacle X↑r and to apply the algorithm presented previously in
Section 4.1. Unfortunately, computing the solid model of a grown object is
an expensive and non-trivial task that is prone to precision errors [20] and
produces curved objects even when the input is polyhedral. If the accessibility
algorithm is to be applied many times and for a small set of given radii,
then it may be wise to compute the grown solids as a preprocessing step.
We choose an alternative approach in which we implicitly compute the
grown obstacle, as it is rendered. The main observation is that only
the silhouette of the obstacle is needed as it is projected onto the cubic map.
Therefore, we render a superset of the boundary of the grown object that is
also a subset of the grown object itself. The naive approach is to render each
vertex of the mesh as a ball of radius r, each edge as a cylinder of radius r,
Figure 6: Growing a solid: (a) mesh, (b) grown nodes, (c) grown edges, (d) nodes and edges, (e) convex edges, (f) offset faces.
and to offset each polygon by a distance r along the normal. This algorithm
is correct and can be optimized by not rendering concave edges, which will
not be part of the grown obstacle's boundary. A more drastic optimization
can be applied, if the mesh is partitioned into face sets, each corresponding
to a face of the original solid model. Then a facial mesh is represented by
an array of nodes and an array of polygons. Each node corresponds to a
point on the face and the normal to the face at this point. Each polygon
is represented as a list of nodes. To offset a facial mesh, first we offset the
nodes by translating each point along its normal, and then we render the
polygons at these offset nodes. The gain is that the edges and vertices that
are internal to a facial mesh do not need to be rendered as grown entities.
The only vertices and edges of the mesh that are rendered are those which
fall on actual edges of the solid model (see Figure 6).
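A sketch of how this rendering pass might be organized follows; drawSphere, drawCylinder and drawPolygon are assumed low-level helpers (e.g., OpenGL quad strips), and only the dispatch over offset faces, convex solid edges and solid vertices reflects the description above.

#include <utility>
#include <vector>

struct Vec3 { double x, y, z; };

struct FacialMesh {
    std::vector<Vec3> points;                 // node positions on one face of the solid
    std::vector<Vec3> normals;                // per-node face normals
    std::vector<std::vector<int>> polygons;   // node indices of each mesh polygon
};

// Assumed low-level drawing helpers; not part of the paper's code.
void drawSphere(const Vec3& center, double r);              // e.g., 3 stacks x 6 slices
void drawCylinder(const Vec3& a, const Vec3& b, double r);  // e.g., 6 faces, no caps
void drawPolygon(const std::vector<Vec3>& verts);

// Render a superset of the boundary of the obstacle grown by r: offset each
// facial mesh along its node normals, and grow only the vertices and convex
// edges of the original solid model.
void renderGrownObstacle(const std::vector<FacialMesh>& faces,
                         const std::vector<std::pair<Vec3, Vec3>>& convexEdges,
                         const std::vector<Vec3>& solidVertices,
                         double r) {
    for (const FacialMesh& f : faces)
        for (const auto& poly : f.polygons) {
            std::vector<Vec3> offset;
            for (int i : poly) {
                const Vec3& p = f.points[i];
                const Vec3& n = f.normals[i];
                offset.push_back({ p.x + r * n.x, p.y + r * n.y, p.z + r * n.z });
            }
            drawPolygon(offset);              // face offset by r along its normals
        }
    for (const auto& e : convexEdges) drawCylinder(e.first, e.second, r);
    for (const Vec3& v : solidVertices) drawSphere(v, r);
}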
The spheres and cylinders are rendered as quad strips [21] of very low
resolution to maximize rendering speed. The cylinders contain 6 faces with
no tops or bottoms, because they are "capped" by spheres on either end.
Figure 7: The GAC for the ram component of a straight probe.
The spheres are composed of 3 stacks of 6 slices each (latitude and longitude,
respectively). Our results show that these approximations are adequate for
practical problems. Finer approximations can be used for more accurate
results. The running time to compute the GAC of Figure 5b is 1.3 seconds.
4.3 Ram Accessibility
The straight probe abstractions introduced so far have constant thickness.
In practice, the CMM ram is considerably fatter than the probe stylus. An
improved probe abstraction will take this fact into account as shown in Figure
3a. In this case, the probe is modeled by two components, two dilated
half-lines that are aligned with each other, one to model the ram and the
other to model the stylus. Notice that in order for such a probe to access
a point both the ram and the stylus must be able to access it. In other
words, the GAC of such an abstraction is the intersection of the GACs of
each component of the probe. In this section we focus on the ram component.
We model the ram by a truncated half-line (d; ∞) that is grown by a radius r (Figure 7a). A truncated half-line (d; ∞) includes all the points on the half-line at a distance no less than d from the origin. Using similar arguments as in the previous sections, the GAC for a ram with respect to an obstacle X is identical to the GAC of the ram shrunken by r with respect to X↑r. The shrunken ram is precisely the truncated half-line (d; ∞) (Figure 7b).
We already know how to render the grown obstacle X↑r; therefore we have reduced the problem to computing the GAC for a truncated half-line.
It is clear from Figure 7b that the GAC of a truncated half-line (d; ∞) is the GAC of a half-line but with a different obstacle. The idea is to remove the irrelevant region from the obstacle. A truncated half-line (d; ∞) positioned
at p cannot collide with any portion of the obstacle that is at a distance
closer than d from p. In other words, the ball of radius d that is centered
at p can be removed from the obstacle. The GAC with respect to the new
obstacle corresponds to the GAC of the truncated half-line (Figure 7c).
Calculating the GAC of a truncated half-line (d; ∞) then entails subtracting
a ball centered at p from the obstacle and using our algorithm for regular
GACs. However, computing the solid difference between an obstacle and a
ball is an expensive computation that we wish to avoid. In addition, we do
not have a solid model of the grown obstacle itself (see previous section).
Consequently, we choose an alternative approach in which we use clipping
operations to approximate the solid difference. The clipping is performed
during the projection of the obstacle within the GAC algorithm, by introducing
a read-only depth-buffer that is initialized with a spherical surface of
radius d. This is the portion of the sphere that is visible through each face
of the cubic mapping (the depth values are symmetrical for each face). If the
depth-buffer is enabled with a "greater-than" comparison, then the clipping
operation will approximate the subtraction of a ball of radius d, as needed.
Notice that, in general, the clipping operation is an operation between
surfaces and not a Boolean operation between solids [19]. In our case, we
position the far clipping plane beyond the obstacle. This ensures that the projection
of the solid difference is correct, because a truncated half-line (d; 1)
positioned at p intersects X iff it intersects the boundary of X. Therefore,
the point of intersection is rendered along with the boundary of X and it is
not clipped, because it is within the viewing frustum and not closer than d
to the viewer.
The quality of the approximation depends on the depth-buffer precision.
To maximize the precision of the depth-buffer, the distance between the far
clipping plane and the near clipping plane should be minimized. Therefore,
the far clipping plane should be a tight bound on the obstacle - we use the
diameter of the bounding box as a reasonable bound. The near clipping plane
is set to a distance of d/√3, so that the near face of the viewing frustum is
contained in the ball of radius d.
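For illustration, the spherical depth image used to initialize the read-only depth buffer for one cube face could be computed as sketched below; the pixel-to-ray convention is an assumption, and the conversion of these eye-space depths to window-space values for the hardware comparison is omitted.

#include <cmath>
#include <vector>

// Eye-space depth of the sphere of radius d for each pixel of one cube face.
// With a 90-degree frustum, a pixel center maps to (u, v) in [-1, 1], i.e.,
// the unnormalized ray direction (u, v, 1); along that ray the sphere of
// radius d is reached at eye-space depth z = d / sqrt(u^2 + v^2 + 1).
std::vector<double> sphereDepthImage(int N, double d) {
    std::vector<double> depth(N * N);
    for (int row = 0; row < N; ++row)
        for (int col = 0; col < N; ++col) {
            double u = ((col + 0.5) / N) * 2.0 - 1.0;
            double v = ((row + 0.5) / N) * 2.0 - 1.0;
            depth[row * N + col] = d / std::sqrt(u * u + v * v + 1.0);
        }
    return depth;
}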
Figure 8 shows the viewing volume through a single face of a cube centered
at p. The size of the cube is irrelevant, since the result is projected onto the face. The near and far clipping planes are labeled in the figure.
Figure 8: The viewing volume for truncated half-lines (d; ∞) and (0; d).
The viewing volume is bound by the clipping planes and is
shaded in the figure. The lighter shade corresponds to the volume clipped by
the depth-buffer, which is initialized to be the sphere of radius d centered at p.
The left hand side of the figure is the desired viewing volume for a truncated half-line (d; ∞). The far clipping plane is assumed to be beyond the obstacle.
It is easy to see that the near clipping plane must be at a distance of at most d/√3.
The complexity of the algorithm is identical to the regular GAC algorithm
with the addition of depth-buffer comparisons and the overhead of initializing
the depth buffer with a spherical surface. The depth-buffer comparisons are
performed in hardware and should have negligible run-time overhead. Notice
that the depth-buffer is read-only, thus it needs to be initialized only once.
In addition, the same depth buffer may be used for all probes that have a
stylus of length d. Therefore, the cost of initializing the depth-buffer may be
amortized over many direction cones.
Our results show that the cost of computing the GAC for a truncated
half-line is identical to the cost of computing a regular GAC. The cost of
initializing the depth-buffer is negligible, because the buffer is relatively small
(32 × 32 bits).
To review, Figure 9 illustrates the GACs computed with our system for a
simple "L" shaped obstacle. The left column shows the GACs for the original
obstacle, and the right column shows the GACs with respect to the obstacle
Figure 9: The variety of GACs for straight probe abstractions. Left column (a-d): with respect to the regular obstacle; right column (a'-d'): with respect to the grown obstacle. Rows (b)-(d): half-line, truncated half-line (d; ∞), and truncated half-line (0; d).
grown by a constant radius. Figure 9c' shows the GAC for a truncated half-line
(d; ∞) with respect to a grown obstacle, which is precisely the GAC for a
ram. The last row of this figure illustrates the GACs for truncated half-lines
(0; d), which will be introduced in Section 5.1.
To aid visualization, the 3-D cones have been rendered with transparent
material. For example, the GAC in Figure 9b has 3 shades of gray. The
lightest shade of gray is on the bottom of the cone. This portion includes
the directions that go out of the page and downward. The top potion of
the cone is darker, because it includes both outward directions and inward
directions. In other words, two surfaces overlap. They are both rendered,
because the cone is transparent. The intermediate shade of gray (in the
nearly rectangular region) only includes directions that go into the page and
away from the protrusion on the obstacle.
4.4 Surface Accessibility
Up to this point we have discussed the accessibility of a single point. Now,
we extend the notion of accessibility to arbitrary regions of the workspace,
which we call features. For dimensional inspection, these are normally surface
features on the boundary of a workpiece. The goal is to find the set of
directions from which a straight probe can access all the points of a feature.
The global accessibility cone of a feature F with respect to an obstacle
X is denoted by GAC(X;F ), and corresponds to the directions from which
all the points in F can be accessed by a half-line. Clearly, this cone is the
intersection of the GACs for all the points in F . Notice that GAC(X; fpg)
is a special case - the GAC of a feature containing a single point p.
Exact GACs for planar surfaces and polyhedral obstacles can be computed
using Minkowski operations [24]. Algorithms for Minkowski operations
are expensive, do not scale well, and are not available for curved surfaces.
We choose an alternative approach in which we sample a few points from
F and compute the intersection of the GACs at these points. This approximation
is especially suitable for CMMs, which are normally restricted to
inspect discrete points. In addition, computing the intersection of direction
cones represented by cubic maps is an efficient and trivial operation on
bitmaps. The approximation is optimistic because it is a lower bound on the
intersection of the infinite number of GACs for all the points of the feature.
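With the cubic-map representation the intersection reduces to bitwise ANDs over the face bitmaps; a minimal sketch:

#include <array>
#include <bitset>
#include <vector>

constexpr int N = 32;
using FaceBitmap = std::bitset<N * N>;
using DirectionCone = std::array<FaceBitmap, 6>;

// GAC of a feature: intersect the GACs computed at the sampled points.
DirectionCone featureGAC(const std::vector<DirectionCone>& pointGACs) {
    DirectionCone result;
    for (auto& f : result) f.set();              // start with the full sphere
    for (const DirectionCone& g : pointGACs)
        for (int f = 0; f < 6; ++f)
            result[f] &= g[f];                   // bitwise AND per face
    return result;
}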
Notice that the direction of the straight probe corresponds to the direction
of the CMM ram with respect to the workpiece. Therefore, we can use this
Figure 10: Setup planning with a straight probe.
direction to represent the orientation of the workpiece setup on the CMM
table. Computing the GAC for a feature translates to finding the set of
setup orientations from which the entire feature can be inspected. If the
GAC of a feature is empty, then this feature cannot be inspected in a single
setup orientation. (In this case the feature is segmented into sub-features or
a different probe is used.)
For dimensional inspection planning, computing the GACs for all surface
features that need to be inspected may be the first step of a high-level planner.
Clustering these GACs can produce a minimum number of workpiece setup
orientations. This is a very important characteristic, since each setup change
is usually a time-consuming manual operation.
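One simple way to cluster the cones is a greedy pass that keeps adding features while the running intersection stays non-empty; this is only a sketch of one possible heuristic, not necessarily the clustering used by the planner, and it does not guarantee a minimum number of setups.

#include <array>
#include <bitset>
#include <cstddef>
#include <vector>

constexpr int N = 32;
using FaceBitmap = std::bitset<N * N>;
using DirectionCone = std::array<FaceBitmap, 6>;

static bool emptyCone(const DirectionCone& c) {
    for (const auto& f : c) if (f.any()) return false;
    return true;
}

static DirectionCone intersectCones(const DirectionCone& a, const DirectionCone& b) {
    DirectionCone r;
    for (int f = 0; f < 6; ++f) r[f] = a[f] & b[f];
    return r;
}

// Each returned cluster is a set of feature indices whose GACs share at least
// one common direction, i.e., features inspectable in a single setup.
std::vector<std::vector<int>> clusterFeatures(const std::vector<DirectionCone>& gacs) {
    std::vector<std::vector<int>> clusters;
    std::vector<bool> used(gacs.size(), false);
    for (std::size_t i = 0; i < gacs.size(); ++i) {
        if (used[i]) continue;
        DirectionCone common = gacs[i];
        std::vector<int> cluster{ static_cast<int>(i) };
        used[i] = true;
        for (std::size_t j = i + 1; j < gacs.size(); ++j) {
            if (used[j]) continue;
            DirectionCone c = intersectCones(common, gacs[j]);
            if (!emptyCone(c)) {
                common = c;
                cluster.push_back(static_cast<int>(j));
                used[j] = true;
            }
        }
        clusters.push_back(cluster);
    }
    return clusters;
}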
Figure 10 shows the usefulness of accessibility analysis for spatial reason-
ing. In this example, we computed the GACs of all the faces of the work-
piece. The cones where partitioned into three clusters, each cluster composed
of cones whose intersection is not empty. A direction was chosen from each
cluster. The result is that each face can be accessed from at least one of
these directions. The directions of the probes and the faces on the part are
color coordinated (or in different shades of gray) to illustrate which faces a
probe can access.
4.5 Path Accessibility
When the CMM inspects a point, the probe normally traverses a short path,
which we call an approach/retract path. This path is along a line segment
that is normal to the point of contact and is proportional in length to the
Figure 11: A bent probe and a possible abstraction. (a) The actual probe: ram, stylus, tip. (b) The two-component abstraction: a first component of length d carrying the tip, joined to the second component at a rotary joint.
size of the tip. The probe will approach the point in a slow motion along this
path and then retract. The idea is to minimize crashes at high speed and to
maximize the accuracy of the measurement.
The goal is to find the set of directions from which a probe can access all
the points along the approach/retract path, so that the entire path (minus
the endpoint) is collision free. Notice that the path can be viewed as a feature
in the system, thus all the arguments from the previous section hold true.
Since the approach/retract path is typically short relative to the size of the
workpiece, it is reasonable to approximate this path by its two end points.
Again, this is an optimistic approximation.
5 Bent Probes
Orientable probes, such as the Renishaw PH9, are more expensive than
straight probes, but much more versatile, and are used often in CMM in-
spection. The probe can be oriented in digital steps under computer control.
We consider the whole ram/probe assembly as forming a bent probe, which is
not necessarily aligned with the ram. A bent probe is a linked chain of two
components that are connected at a 2 degrees-of-freedom rotary joint.
We model the probe by a 2-component abstraction as in Figure 11b.
The first component is a truncated half-line (0; d) straight probe abstraction,
which includes all the points of a half-line at a distance no greater than d
from the origin. The center of the tip of the bent probe coincides with the
tip of the first component. The second component is a half-line or straight
probe abstraction with endpoint at a distance d along the axis of the first
component. This endpoint corresponds to the probe's rotary joint.
Figure 12: Computing D_1 and a portion of D_2.
For the rest of this section we assume that a bent probe has no volume,
i.e., it is modeled by (truncated) half-lines. Similar generalizations, such as
grown half-lines, can be introduced very easily in the manner of Section 4.2.
Additionally, one can generalize the bent probe concept to more than 2 com-
ponents, but we will not do so here.
The length of the first component, d, is a constant. Therefore, we can
describe the configuration of a bent probe by using a pair of directions -
one for each component of the probe. The result is that a GAC for a bent
probe is a 4-D cone. Fortunately, applications normally need the second
component directions rather than the entire 4-D cone. For example, as with
straight probes, the directions of the second component are used to find a
minimal number of orientations for setting up the workpiece on the machine
table.
The remainder of this section is outlined as follows: Section 5.1 introduces
the GAC for the first component of a bent probe, Section 5.2 shows how to
compute the GAC for the second component of a bent probe, and Section 5.3
shows how to compute the first component accessibility given a direction of
the second component.
5.1 First Component Accessibility
The first component of a bent probe is a truncated half-line (0; d). This is
the complement of the ram abstraction that was introduced in Section 4.3.
Assume that the point of interest is p and the obstacle is X. Then, using
arguments similar to those of Section 4.3, the GAC for the first component is
a regular GAC after the irrelevant parts of the obstacle have been removed.
In this case the obstacle is intersected with the ball of radius d centered at p (see Figure 12a-b). We denote the GAC of the first component of a bent probe by D_1(X; fpg). It is assumed that d is known; therefore, for clarity, it is omitted from the notation D_1.
Notice that D 1 is exactly the set of accessible directions for the first component
of a bent probe when the second component is ignored. This means
that given a direction in D 1
(X; fpg), then a truncated half-line (0; d) that is
oriented along this direction and with an endpoint at p will not penetrate X.
However, D 1
is only an upper bound on the accessible directions of the first
component when the whole probe is taken into account, because it is not
guaranteed that an accessible second component direction exists for every
direction selected from D 1 .
The algorithm to compute D 1
uses a spherical surface in the depth buffer
to approximate the intersection of the obstacle with a ball of radius d. This
is similar to the algorithm used to compute the ram GAC for a truncated
half-line (d; ∞) but now the depth-buffer acts as a far clipping surface (see
right hand side of Figure 8). If the depth-buffer is enabled with a "less-than"
comparison, then the clipping operation will approximate the intersection of
a ball, as needed.
Again, the quality of the approximation depends on the depth-buffer pre-
cision, therefore the distance between the far clipping plane and the near
clipping plane should be minimized. We assume that the studied point is accessible
to the probe's tip (see Section 3). Therefore it must be at a distance
of at least r from the obstacle, where r is the radius of the tip. The near
clipping plane is then set to a distance of r=
3, which is the furthest it can
be and still have the viewing volume include all the points that are outside
of the tip (see right hand side of Figure 8). The far clipping plane is placed
at a distance d, which is a tight bound on the ball of radius d.
The clipping operation is only an approximation of the Boolean intersection
between the obstacle and the ball. However, since the near clipping
plane does not intersect the obstacle (based on the assumption that the tip
does not penetrate the obstacle), we argue that the projection of the
clipped obstacle is correct. Section 4.3 gives a similar argument using the far
clipping plane.
The complexity of computing D 1
, i.e., the GAC of a truncated half-line
(0; d), is identical to the complexity of computing the GAC of a truncated
probe (d; ∞), which is the abstraction of a shrunken ram (see Section 4.3).
Our experiments confirm this fact and show that the cost of computing D 1
is
nearly identical to the cost of computing a regular GAC. Figure 9d illustrates
the D 1
cone with respect to a simple obstacle. Notice that the GAC in
Figure 9b is the intersection of the cones in Figure 9c and Figure 9d. This
illustrates the fact that an accessible direction of the probe as a whole must
be a common accessible direction of its components.
5.2 Second Component Accessibility
When the first component takes every possible orientation in D 1
, the articulation
point between the first and second components traverses a locus which
is the projection of D 1
on a sphere of radius d that is centered at p. Without
loss of generality we assume that p is the origin, and we denote this locus by dD_1, since it is also the result of scaling D_1 by a factor of d.
If we succeed in placing the second component such that the articulation
point lies in dD 1
, it is clear that the first component can be placed with its
tip at the origin without collisions. In other words, if the second component
can access some point of dD 1 , then the entire probe can access the origin.
The converse is also true and therefore the origin is accessible iff the second
component accesses some point of dD_1 (see Figure 12).
The set of directions for which the second component can access at least
one point of dD_1 is called the weak GAC of the feature dD_1. In general, the weak global accessibility cone of a feature F with respect to an obstacle X is denoted by WGAC(X; F), and corresponds to the directions from which
at least one point in F can be accessed by a half-line (notice the analogy
to weak visibility [16]). While the GAC of a feature is the intersection of
the GACs of all the points in the feature, the WGAC of a feature is the
corresponding union.
We denote the GAC of the second component as D_2. Then D_2 = WGAC(X; F), where the feature F is D_1(X; fpg) scaled by d about the point p. To compute D_2 we sample points on F and take the union of the GACs at these points. This is a lower bound on the real D_2, so it is a pessimistic
approximation. Figure 12c shows F when p is the origin, and illustrates the
GAC of a point q sampled on F . This GAC will be a part of the union that
forms D 2 .
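A sketch of this union follows; bitToDirection and gacAtPoint are assumed helpers (the latter being the point GAC computation of Section 4.1), and in practice only a sparse sample of the directions in D_1 would be used, which the stride parameter mimics.

#include <array>
#include <bitset>

constexpr int N = 32;
using FaceBitmap = std::bitset<N * N>;
using DirectionCone = std::array<FaceBitmap, 6>;

struct Vec3 { double x, y, z; };

// Assumed helpers: center direction of a cubic-map bit, and the regular GAC
// of a single point computed by rendering the obstacle.
Vec3 bitToDirection(int face, int bit);
DirectionCone gacAtPoint(const Vec3& q);

// Approximate D2 = WGAC(X, d*D1): union (bitwise OR) of the GACs of points
// sampled on the scaled cone d*D1, where p is the inspected point and d the
// length of the first component.
DirectionCone secondComponentCone(const DirectionCone& D1, const Vec3& p,
                                  double d, int stride = 16) {
    DirectionCone D2{};                                // all bits start cleared
    for (int face = 0; face < 6; ++face)
        for (int bit = 0; bit < N * N; bit += stride)
            if (D1[face].test(bit)) {
                Vec3 v = bitToDirection(face, bit);    // sampled direction of D1
                Vec3 q{ p.x + d * v.x, p.y + d * v.y, p.z + d * v.z };
                DirectionCone g = gacAtPoint(q);       // GAC of the articulation point
                for (int f = 0; f < 6; ++f) D2[f] |= g[f];
            }
    return D2;
}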
Notice that the accessibility of a point by a bent probe is weaker than the
concept of approachability. The fact that a bent probe can access a point does
not guarantee that there exists a collision-free path for the probe to reach the
point. (This happens to be true with straight probe abstractions.) Figure 13
shows an example of a point that is accessible, but not approachable by a
Figure 13: A point that is accessible but not approachable by a bent probe.
Figure 14: Computing the subset D'_1 of D_1 that corresponds to ~v_2 (the portion of dD_1 that is not obstructed by X in the orthographic view).
bent probe with the given obstacle. Computing the approachable directions
for the second component of a bent probe is a problem that can be as hard as
the FindPath problem [10]. We use accessibility instead of approachability,
because of the efficient algorithms that are available. A generate-and-test
planner, as described in the introduction, will have to verify the approach-
ability condition with a path planner or a simulator. Our experiments on
real-world mechanical parts show that failures of this kind occur infrequently.
5.3 First Component Accessibility Revisited
We have shown how to compute the GAC of the first component, D 1 , and
from this the GAC of the second component, D 2
. The D 2
cones are used
to compute setup orientations from which points are accessible to the bent
probe. Once a setup is selected, we wish to compute the directions from
which the first component of the probe can access a point.
Given a direction of the second component, ~v_2, what are the corresponding directions of the first component for the bent probe to access a point p? Without loss of generality we assume that p is the origin; then the articulation point must lie in dD_1. Hence, for each ~v_2 we are looking for the directions ~v_1, such that the second component oriented along ~v_2 and with endpoint at d~v_1 does not collide with X. Spyridi [24] observed that these directions correspond to the points on dD_1 that are not obstructed by X in the orthographic projection of dD_1 onto a plane perpendicular to ~v_2. Figure 14 illustrates this fact. The projection lines in the figure correspond to possible placements for the second component.
We use this observation in our algorithm to compute the subset D'_1 of D_1 that corresponds to ~v_2. The viewing parameters for the orthographic projection are depicted in Figure 14. We use a parallel projection with direction ~v_2 and a view port large enough to enclose the projection of the ball of radius d, which is a superset of dD_1. To check if a point on dD_1 is obstructed by
the obstacle X we use the depth-buffer in a process that is similar to the
two-pass z-buffer shading algorithm [5]. First, we render X into the depth-
buffer. Next, we check if a point is obstructed by X by transforming it to
the viewing coordinates and comparing its depth value with the value in the
depth-buffer. It is not obstructed by X iff its depth value in the appropriate
depth-buffer location is closer to the viewer. Note that D 1
is represented by
bitmaps on the faces of a cube and therefore dD 1 is also discretized. This
is another approximation used by the algorithm. To maximize depth-buffer
precision, the distance between the near and far clipping planes should be
minimized, while still enclosing the obstacle.
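The two-pass test can be sketched as follows; renderObstacleDepth, projectToWindow and bitToDirection are assumed helpers that stand in for the hardware projection and depth read-back, and p is taken to be the origin as in the text.

#include <array>
#include <bitset>
#include <vector>

constexpr int N = 32;
using FaceBitmap = std::bitset<N * N>;
using DirectionCone = std::array<FaceBitmap, 6>;

struct Vec3 { double x, y, z; };
struct DepthImage { std::vector<float> z; int w = 0, h = 0; };

// Assumed helpers (not the paper's code).
Vec3 bitToDirection(int face, int bit);
DepthImage renderObstacleDepth(const Vec3& v2);   // orthographic depth image of X along v2
void projectToWindow(const Vec3& q, const Vec3& v2,
                     int& px, int& py, float& depth);

// D1': the directions v of D1 whose articulation point d*v is not hidden by X
// in the orthographic view along v2 (its depth is closer to the viewer).
DirectionCone firstComponentConeFor(const DirectionCone& D1, const Vec3& v2, double d) {
    DepthImage zbuf = renderObstacleDepth(v2);
    DirectionCone D1prime{};
    for (int face = 0; face < 6; ++face)
        for (int bit = 0; bit < N * N; ++bit)
            if (D1[face].test(bit)) {
                Vec3 v = bitToDirection(face, bit);
                Vec3 q{ d * v.x, d * v.y, d * v.z };
                int px = 0, py = 0; float depth = 0.0f;
                projectToWindow(q, v2, px, py, depth);   // assumed to land inside the view port
                if (depth < zbuf.z[py * zbuf.w + px])    // not obstructed by X
                    D1prime[face].set(bit);
            }
    return D1prime;
}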
The top of Figure 15 illustrates the result of computing D_1. The length of the probe, d, is equal to the radius of the sphere used to represent the cone. It took 0.07 seconds to compute D_1. The bottom of the figure shows the accessible directions for the first component of a bent probe, D'_1, given that the second component is normal to the figure. Notice that these directions are exactly those that are not obstructed by the obstacle in the given view (some of the obstructed directions are inside a slot). It took 0.07 seconds to compute D'_1. The inaccuracies in the cone are due to aliasing effects from the use of the same low resolution frame buffer (32 × 32 bits) as with the GAC algorithm and the limited precision available with the depth-buffer. In addition, Figure 15 is illustrated with a perspective projection rather than an orthographic projection, which is used to compute the obstructed directions.
Figure 15: Experimental results - D_1 and D'_1.
6 Summary and Conclusions
This paper describes simple and efficient algorithms that exploit computer
graphics hardware to compute accessibility information for applications in
spatial reasoning. Our approach is an unconventional application of graphics
hardware. We approximate spherical projections using perspective projec-
tions, and we use clipping and the depth-buffer to approximate the intersection
and the difference of a solid with a sphere. The depth-buffer is also
used to compute the articulation points (between the first and second components
of a bent probe) that are not obstructed by an obstacle under an
orthographic projection.
The algorithms have been implemented and tested. The empirical results
are satisfactory for practical applications with parts of realistic complexity.
A dimensional inspection planner that uses the accessibility tools presented
here is currently operational, and will be described elsewhere.
Acknowledgments
The research reported in this paper was supported by the National Science
Foundation under grants DMI-96-34727 and DDM-87-15404.
--R
Coordinate measuring machines and systems.
Computing machinability on three-
Computer graphics.
Computer Graphics: Principles and Practice.
Efficient geometric algorithms for workpiece orientation in 4- and 5-axis NC machining
Probe orientation for coordinate measuring machine systems using design models.
Manufacturing processes.
Part orientations for CMM inspection using dimensioned visibility maps.
Robot Motion Planning.
CMM feature accessibility and path generation
A general method for accessibility analysis.
A general method for analysing the accessibility of features using concentric spherical shells.
Automatic inspection of three-dimensional geometric features
Efficient algorithms for local and global accessibility shading.
Art Gallery Theorems and Algorithms.
Visibility. In Jacob E.
Renishaw Inc.
Interactive inspection of solids: Cross-sections and interferences
Offsetting operations in solid modelling.
The OpenGL graphics system: A specification (version 1.1).
Accessibility analysis for planning of dimensional inspection with coordinate measuring machines
Automatic Generation of High Level Inspection Plans for Coordinate Measuring Machines.
Accessibility analysis for the automatic inspection of mechanical parts by coordinate measuring machines
Accessibility analysis for polyhedral objects.
Computing occlusion-free viewpoints
Accessibility analysis in 5-axis machining of sculptured surfaces
On Geometric Assembly Planning.
Geometric reasoning about assembly tools.
Visibility maps and spherical algorithms.
Automated feature accessibility algorithm for inspection on a coordinate measuring machine.
Automating probe selection and part setup planning for inspection on a coordinate measuring machine.
--TR
--CTR
Aristides A. G. Requicha , Steven N. Spitz, Spatial modeling and reasoning for automatic dimensional inspection, From geometric modeling to shape modeling, Kluwer Academic Publishers, Norwell, MA, 2002
A. James Stewart, Vicinity Shading for Enhanced Perception of Volumetric Data, Proceedings of the 14th IEEE Visualization 2003 (VIS'03), p.47, October 22-24,
Min Liu , Karthik Ramani, Computing an exact spherical visibility map for meshed polyhedra, Proceedings of the 2007 ACM symposium on Solid and physical modeling, June 04-06, 2007, Beijing, China | rasterizing computer graphics hardware;visibility;spatial reasoning;direction cones;visual inspection;dimensional inspection planning;configuration space;accessibility analysis;CAD/CAM;coordinate measuring machines |
614468 | Interactive Virtual Relighting of Real Scenes. | AbstractComputer augmented reality (CAR) is a rapidly emerging field which enables users to mix real and virtual worlds. Our goal is to provide interactive tools to perform common illumination, i.e., light interactions between real and virtual objects, including shadows and relighting (real and virtual light source modification). In particular, we concentrate on virtually modifying real light source intensities and inserting virtual lights and objects into a real scene; such changes can be very useful for virtual lighting design and prototyping. To achieve this, we present a three-step method. We first reconstruct a simplified representation of real scene geometry using semiautomatic vision-based techniques. With the simplified geometry, and by adapting recent hierarchical radiosity algorithms, we construct an approximation of real scene light exchanges. We next perform a preprocessing step, based on the radiosity system, to create unoccluded illumination textures. These replace the original scene textures which contained real light effects such as shadows from real lights. This texture is then modulated by a ratio of the radiosity (which can be changed) over a display factor which corresponds to the radiosity for which occlusion has been ignored. Since our goal is to achieve a convincing relighting effect, rather than an accurate solution, we present a heuristic correction process which results in visually plausible renderings. Finally, we perform an interactive process to compute new illumination with modified real and virtual light intensities. Our results show that we are able to virtually relight real scenes interactively, including modifications and additions of virtual light sources and objects. | For a lighting designer, this system provides realistic
tools to experiment with the illumination of an enhanced
real environment. All that is required is a few photographs
of the real scene, the reconstruction of a small number of
objects, and the system preprocess as will be described in
this paper; the designer can then interactively manipulate
real light intensities, or insert and manipulate virtual lights
and objects. In Figure 1, an example of a modeled real
scene is shown in (a). In (b), its real illumination was mod-
ified by switching off two lights. Moreover, a virtual light
source was inserted into the real scene, modifying real object
shadows. A virtual object was also inserted into the
real scene, casting shadows onto real objects. This virtual
object can be moved interactively at 3 frames per second1.
Figure 1: (a) Original real scene. (b) Virtual modification
of the illumination of the real scene enhanced by a virtual
object (the orange box on the floor) that moves at 3 frames
per second.
In the following sections, we first present previous work
in the several domains related to this work: augmented
reality, 3D reconstruction of real scenes and hierarchical
radiosity. Previous common illumination approaches are
(1 See quicktime video sequences on the web: http://www-imagis.imag.fr/Publications/loscos/TVCG00/index.html)
then discussed in more detail. We proceed to explain how
we build a 3D geometric model representing the real scene,
and present an overview of the algorithm for interactive
re-lighting. The preprocess phase is presented in detail,
followed by a description of the interactive relighting pro-
cess. We then describe results of relighting, that is interactive
modification of real light intensities and the insertion
of virtual lights, and conclude with a discussion and future
work.
Previous Work
Our work draws on multiple fields; in particular augmented
reality, vision based reconstruction and global il-
lumination. In the following, we will first give a rapid
overview of augmented reality which concentrates, in gen-
eral, on registration and calibration aspects. We next
briefly discuss the 3D reconstruction method we use to
build a simplified model of the real scene. We will use hierarchical
radiosity to create a representation of real-world
illumination, and also to permit interactive updates when
moving virtual objects or modifying illumination; we thus
introduce the basic concepts of this approach which are
central to the understanding of our algorithm. We finally
detail previous work on global common illumination, insisting
in particular on the most closely related approaches
which use radiosity methods.
2.1 Introduction to augmented reality
There are two main approaches to augmented reality: virtual
and real environments can be combined by superimposing
virtual objects on the real world viewed from semi-transparent
glasses; alternatively, virtual and real environments
can be merged with video images, and the result
reprojected onto a screen. These two approaches are presented
in detail in the survey of Azuma [2] which also provides
extensive references to related literature.
The first approach involves the calibration, registration
and display of virtual objects in real time to avoid delays
between projected images and the perceived real world.
The second approach allows more interaction between real
and virtual objects, because a geometric representation of
the real scene is created from the images. We can therefore
handle occlusion between real and virtual objects, as well
as visual effects such as common illumination, which is the
interaction of light between virtual and synthetic objects.
Nevertheless, achieving real time or interactive display of
these effects remains a challenging problem.
2.2 Reconstruction of real models
The simulation of common illumination effects requires a
geometric representation of the real world. Much research
on the subject exists in the field of Computer Vision; we
have chosen to use an advanced vision-based technique,
which allows semi-automatic reconstruction based on multiple
views.
The approach we use is presented in [11]. In order to
build a representation of a real scene, several vision techniques
are combined: automatic calibration of the camera,
mosaicing, computation of the epipolar geometry which
results in a polygonal reconstruction of the scene, and the
projection of textures. The first step is the calibration of
the camera which consists in retrieving the intrinsic parameters
from a non-planar calibration pattern image using
an automatic algorithm [21]. The user provides approximate
positions of 6 reference points. From this, the
system retrieves intrinsic and extrinsic parameters of the
camera. Then, four sets of three photographs each are
taken, and a mosaic is built automatically for each set as
presented in [33]. From the four mosaics, a 3D model is
defined using the TotalCalib system [1] developed at the
ROBOTVIS group. This system, shown in Figure 2, combines
several techniques. Point correspondences are provided
by a user, who clicks on one image to create a reference
point. The matched points on the 3 other mosaics are
given automatically by the system. From about
correspondences, fundamental matrices are computed using
a non-linear method [32]. Polygonal regions are next
manually selected by a user from point correspondences,
and the system provides 3D coordinates of these polygons
from the projection equations. Finally, textures are projected
to allow correct perspective effects for a fixed view-point
[11]. For each reconstructed polygon, a texture image
is computed by de-warping the original image (from a
given viewpoint), and mapping it to the plane of the polygon
The main advantage of such a system is that user intervention
is restricted to the choice of reference matches
and polygon vertex selection. This system is however not
without limitations: the resulting model of the real scene
is approximate and may contain artifacts, since there is no
guarantee that geometric properties such as parallel edges
or orthogonal angles will be preserved. This drawback can
be removed by taking into account additional user input,
as presented in the work of Debevec et al. [7] or Poulin et
al. [20].
In the work by Debevec et al. [7], reconstruction is
based on a hierarchy of blocks. The main idea is to build
polyhedra which include geometric constraints, such as
Figure 2: The TotalCalib system to build 3D models of
real scenes, using automatic calibration, and epipolar geometry
parallelism, orthogonality, and size aspects. Polyhedra
provide good approximations of many objects of the real
world, especially for outdoor architectural scenes. This
also allows the reconstruction of vertices which are invisible
in the original images, but correspond to hidden
vertices of the polyhedra. Another approach is described
in [20] in which the primitives are points, lines and poly-
gons, and constraints such as parallelism, orthogonality, or
co-planarity are determined by the user.
2.3 Hierarchical radiosity
To achieve interactive relighting, we need an efficient description
of light exchanges in the scene, including shadow
information. We have chosen to use the hierarchical radiosity
approach with clustering [15, 23] with the extensions
to dynamic environments [9]. We next introduce certain
basic concepts of radiosity methods.
The radiosity method is based on energy exchanges, and
has been used in computer graphics to simulate light interactions
in synthetic environments [25], including indirect
illumination. Since the radiosity method is a finite-element
approach, a mesh representation of the scene is required,
which is usually constructed with quadtrees.
Hierarchical radiosity [15] uses a multi-resolution representation
of light, by creating a hierarchy of patches on
each surface. Light exchanges are established at the appropriate
levels at the patch hierarchy via a link data struc-
ture, resulting in an efficient solution. A generalization
of this approach can be achieved using clusters [26, 24],
which represent groups of objects. The entire scene is
contained in a single, root cluster. Clusters and patches
can be linked at the appropriate level, depending on the
refinement criterion which decides whether the link represents
the light transfer at a suitable, user defined, level
of accuracy. If the light exchange is not sufficiently well-
represented, the link is refined and the patches or clusters
are then subdivided.
When links are established, the incoming irradiance is
gathered at each patch, followed by a push-pull step performed
to maintain a coherent multi-resolution representation
of radiant exchanges [15]. The cluster-based hierarchical
radiosity starts with the root cluster linked to itself.
The algorithm described by Sillion [23] performs a refine-
ment step, establishing links at appropriate levels followed
by the gather and push-pull steps. Irradiance is pushed
down to the leaves of the patch hierarchy, and radiosity is
pulled up by averaging [23]. This is repeated until convergence.
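The gather and push-pull computations can be made concrete with a small sketch. The following Python fragment is an illustration only, not the code of the systems cited here: the Element and Link classes and all field names are hypothetical, and each link is assumed to carry a form factor and an attenuation (visibility) factor, as discussed in the next paragraph.

class Element:
    def __init__(self, reflectance, area, emittance=0.0):
        self.children = []           # quadtree children (empty for a leaf)
        self.links = []              # links gathered at this level
        self.reflectance = reflectance
        self.area = area
        self.emittance = emittance
        self.irradiance = 0.0        # gathered irradiance
        self.radiosity = 0.0

class Link:
    def __init__(self, source, form_factor, visibility):
        self.source = source         # emitting patch or cluster
        self.form_factor = form_factor
        self.visibility = visibility # attenuation factor in [0, 1]

def gather(elem):
    # Gather incoming irradiance across the links established at this level.
    elem.irradiance = sum(l.source.radiosity * l.form_factor * l.visibility
                          for l in elem.links)
    for child in elem.children:
        gather(child)

def push_pull(elem, down=0.0):
    # Push irradiance down to the leaves, pull radiosity up by area averaging.
    total = elem.irradiance + down
    if not elem.children:
        elem.radiosity = elem.emittance + elem.reflectance * total
        return elem.radiosity
    up = 0.0
    for child in elem.children:
        up += child.area * push_pull(child, total)
    elem.radiosity = up / elem.area
    return elem.radiosity

Calling gather(root) followed by push_pull(root) corresponds to one iteration of the solver; repeating the pair until the radiosity values stabilize mirrors the convergence loop described above.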
Visibility information and form factors are stored with
links. The visibility information can be of three types:
VISIBLE, INVISIBLE or PARTIAL. When computing radiosity
exchanges between two patches, the incoming irradiance
is multiplied by the form factor and an attenuation
which varies from zero when the patches are mutually
completely occluded, to one when the patches are entirely
mutually visible. The attenuation factor represents
the degree of occlusion between two patches. It is typically
estimated by shooting rays between the two patches,
and counting the percentage of rays blocked by occluders.
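As a rough sketch of this estimation (the random_point and intersects_any helpers are assumptions introduced only for illustration, not part of the cited systems):

def attenuation_factor(patch_a, patch_b, occluders, n_rays=16):
    # Shoot rays between random points of the two patches and return the
    # fraction that is unblocked: 0 = mutually occluded, 1 = mutually visible.
    unblocked = 0
    for _ in range(n_rays):
        p = patch_a.random_point()                # assumed sampling helper
        q = patch_b.random_point()
        if not intersects_any(p, q, occluders):   # assumed ray/occluder test
            unblocked += 1
    return unblocked / n_rays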
The hierarchical representation with links can be
adapted to allow fast radiosity modification [9], by augmenting
the links with a shaft data structure [14]. In addi-
tion, previously subdivided links, called passive links are
maintained. The passive links contain all the necessary
information allowing them to be reactivated at no cost, if
it is required by a geometry change. See Figure 3 for an
example.
(a) (b) (c)
Figure
3: (a) Original subdivision and links in purple.
(b) Adding a dynamic object, and updating the hierarchy
of elements and links. Eight links shown in blue were cre-
ated. (c) The passive links with their shafts are maintained
in the hierarchy, allowing fast identification of the dynamic
object movement. In this case, two passive links shown in
green were maintained. The corresponding shaft is outlined
in grey.
2.4 Common illumination in augmented reality
The retrieval and simulation of common illumination between
virtual and real objects has been treated by several
researchers in previous work [28, 19, 31, 16, 4, 12, 8, 30,
22]. All use some form of a 3D representation of the real
scene.
State et al. [28] use a composition of vision-based and
magnetic tracking methods for accurate registration of the
real environment. Virtual objects are inserted into a real
scene and common illumination is performed, with a moving
(real) point light source. Shadow maps are used allowing
updates in real time, but only for direct illumination
and sharp shadows from point sources.
Nakamae et al. [19] developed a solution for merging
virtual objects into background photographs, and estimated
the sun location to simulate common illumination
effects in outdoors environments. More recently Yu [31]
proposed a solution to virtually modify the illumination
with different virtual positions of the sun in outdoors
scenes. A pseudo-BRDF is first estimated, which relates
the incident radiance to the reflected differential radiance.
Diffuse and specular reflectances are retrieved using
multiple images from multiple viewpoints. From various
virtual positions of the sun and from modified sky and
environment illumination, modified outdoors illumination
is performed pixel by pixel for each reconstructed trian-
gle. However, for certain applications, an approximation
of only the diffuse reflectance is sufficient.
For indoors environments, Jance'ne et al. [16] used
vision-based techniques to retrieve the geometry of the real
scene from a video sequence. Common illumination between
virtual and real objects is simulated. This allows
the creation of video sequences, with animated virtual objects
such as a cloth, and the modification of the reflective
properties of real objects. The final rendering is performed
using a ray-tracing system, and images are merged using a
masking algorithm.
Debevec [4] also simulates common illumination effects
using RADIANCE [29], a ray tracing based global
illumination system. In this work, the real environment
is decomposed into the distant scene and the local scene.
The distant scene is used to evaluate the global radiance,
and the source emittance [6]. An approximate geometric
model of the local scene is built using the methods previously
developed by the same author [7]. Since radiance
is accurately retrieved from images, rendering with mixed
images is done by using the difference of the desired effects
and the original image value. This method can be
adapted for indoors or outdoors environments.
Finally Sato et al. [22] propose a solution to insert virtual
objects into a real scene. They used radiance images
to estimate the luminance of each surface [6]. The rendering
is done by ray-casting and the color of each pixel is
modified by a factor corresponding to the change in illumination.
The common illumination methods presented above are
geared towards high-quality image generation, requiring
in the order of minutes per frame. Those which allow relighting
need several images under different lighting con-
ditions, or several viewpoints.
Our approach is complementary: we want to use simple
data, that is a single image of a single viewpoint under
original lighting conditions, and from this we want to provide
interactive common illumination effects, which will
allow a designer to modify and experiment with different
lighting conditions. Digital prototyping or mock-ups require
this type of interactive capability; for a final high-quality
animation, one of the previous methods can always
be used.
Radiosity-based systems for common illumination
The most closely related previous work is that of Fournier
et al. [12] and its interactive extension [8]. The system presented
permits the retrieval of radiosity parameters from
the textures extracted from the real scene images. In our
approach, we use Fournier et al.'s basic derivations for the
extraction of the quantities required to initialize the radiosity
solution. We thus describe this work in more detail.
First the real scene is modeled manually, using a simpli-
fied representation. Given this model which is subdivided
into patches, the reflectance can be extracted from the image
textures. The reflectance of each patch i is chosen to
be:
ρ_i = (B^i / B^A) ρ̂ (1)
where B^i is the average intensity of the pixels in an image
corresponding to the projected patch i, B^A the average intensity
of the real image, and ρ̂ is the average reflectance
of the scene (given by the user). This estimation of the re-
flectance depends on the color of the texture (i.e., the photograph
of the real scene), which will be darker for patches
in shadow. The emittance Ei of each source is estimated
from the following equation:
with Ai being the area of patch i. This approximation is
based on the estimated ambient term in the progressive
radiosity algorithm [3]. To simplify, and as it is approximately
the case for our scenes, we consider that all the sources have the same intensity. However, a system of
equations could be solved for non-homogeneous intensities.
Once the reflectance(s) ρ_i and the emittance(s) Ei have
been estimated, a progressive radiosity solution is applied.
The result of this simulation is the radiosity Bi of each
patch. The display is done using a display correction factor
Di of a patch i, which is first initialized to the radiosity
Bi. When the scene is modified, the current radiosity Bi
is updated to reflect the change. For example, if a virtual
object is inserted, the patches on which a shadow is cast
will have Bi < Di. Modifications to the scene (notably
the addition of virtual lights and objects) are performed
by modulating the texture Ti of a pixel as follows:
Ti × (Bi / Di) (3)
It is important to note here that the accuracy of the radiosity
estimation Bi is irrelevant. Since a ratio is being used
(which is 1 if there is no change), the only requirement is
that the modifications to Bi have to be consistent. Note that
ray-casting is used for rendering in [12].
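The parameter extraction and the display modulation of Eqs. (1) and (3) can be summarized by the following sketch; the image and patch accessors are hypothetical names, and the emittance estimation of Eq. (2) is omitted since it depends on the ambient-term details of [3].

def estimate_reflectances(patches, image, rho_avg):
    # Eq. (1): rho_i = (average intensity of the projected patch /
    # average intensity of the whole image) * rho_avg, where rho_avg is the
    # user-supplied average reflectance of the scene.
    b_avg = image.mean_intensity()
    for p in patches:
        b_i = image.mean_intensity(p.projected_region())
        p.reflectance = (b_i / b_avg) * rho_avg

def display_pixel(texture_value, patch):
    # Eq. (3): modulate the texture by current radiosity over the originally
    # computed radiosity D_i; the ratio is 1 when nothing has changed.
    return texture_value * patch.radiosity / patch.display_correction

Because only the ratio Bi/Di is displayed, the absolute accuracy of Bi is indeed irrelevant, as noted above; what matters is that modifications to Bi are consistent.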
This approach was adapted in [8], in the context of a hierarchical
radiosity system, which allows common illumination
between a dynamic virtual object and a real scene.
The interactive update of the illumination when the virtual
object moves uses the dynamic hierarchical radiosity solution
described in [9]. An example of the results obtained
by the method of [8] is shown in Figure 4, where a red dynamic
virtual object was inserted into a real scene, on the
top of the desk. The shadows are virtually projected onto
the table, using the display ratio described above (Eq. (3)).
(a) (b) (c)
Figure
4: (a) A virtual object, floating above the table, was
inserted into a real scene using [8], in 5.65 seconds. Shadows
are projected onto the table using the display ratio of
Eq. (3). (b) and (c) The dynamic object moves above the
table. Links and radiosity are updated interactively at 3
frames per second.
2.5 Shortcomings of previous approaches
If we use the method of [12, 8] to change the intensity of
a real light source, the result is unsatisfactory. This can
clearly be seen in Figure 5(b).
(a) (b)
Figure
5: (a) The original illumination of the real scene.
Two sources (left and right) illuminate the wall, causing
the shadow of the table to be cast on the wall. (b) Using the
method of [8], we turn off the left-hand light; pre-existing
shadows are not removed and the quality of the relighting
is unsatisfactory: the shadow due to the left-hand light is
still clearly visible.
To see why, recall that the display is performed using
the initial real world photograph [12] or textures [8]. The
image or textures are then modulated by the ratio of the
current radiosity value Bi (changed for example by turning
off a light) over the originally computed radiosity value Di.
Since the texture being modulated is a snapshot of the real
global illumination in the scene, real shadows are already
represented.
Consider the Figure 5(b) for which the left-hand light
has been turned off. Observe the region in the blue square:
it contains a region of the wall which was in shadow with
respect to the left-hand light, and a region which was not.
Since the left-hand light is turned off, the current radiosity
value Bi will be reduced for both regions, by amounts
which are very close in value since they vary only by the
corresponding form factors. The textures of both regions
are modulated by the ratio of this current radiosity Bi over
the original radiosity value before the light was switched
off. Since the texture (photo) corresponding to the region
originally in shadow is much darker to begin with, the
shadow will still be visible after the change. For the correct
image to be displayed, we need a way to make the texture
in both shadowed and unshadowed have similar values.
This reveals the limitation of previous texture modulation
approaches, which can only treat modifications to virtual
objects or sources, since modification of real lighting
conditions requires the modification of the original images
or textures of the scene.
3 The Common Illumination System
The goal of our approach is to allow interactive modifica-
tion of real source intensities, the insertion (and modifica-
tion) of virtual sources, and the insertion and interactive
manipulation of other virtual objects. All interactive updates
will be performed with consistent update of shadows
of real and virtual objects. Our system consists of 3 steps:
3D reconstruction of the real scene, a preprocessing initialization
stage, and an interactive modification stage, during
which the user can modify and enhance the real scene. The
entire algorithm is summarized in Figure 6.
3D reconstruction
    Build a simplified 3D model of the real scene
Preprocess
    Hierarchical radiosity system set up
    Refinement for shadow boundaries
    Creation of the unoccluded illumination textures
    System re-initialization and shadow reprojection
    Additional preprocess for the insertion of virtual objects
Interactive modification
    Modification of real and virtual lights
    Update when a virtual object moves
Figure 6: Complete algorithm.
Representation of the real scene
The real scene is represented in our system with an approximation
of its geometry and with projected textures.
The model of the scene is built semi-automatically, using
advanced vision techniques [11, 8] as described in Section
2.2. This process allows the reconstruction of the basic
3D model visible in the captured images (for example
the mosaics shown in Figure 7).
The rest of the scene, which is not visible in the images,
is built with points measured manually. Approximate textures
are used to map the polygons of this part. The positions
of the light sources are also measured manually and
inserted into the 3D model.
The model is then an approximation of the captured
room, with a more precise model for the visible part of
the scene, and a coarse model for the rest. An example of
the resulting reconstruction is shown in Figure 8.
A limitation of this approach is that the projection of
the textures is done only for a single point of view. We
are therefore restricted to viewing the scene from a static
viewpoint. In Figure 8(a), the model is viewed from our
(a) (b)
(c) (d)
Figure
7: The four mosaics from four different points of
view.
system, and in (b) the complete model is shown, including
the non-visible part.
(a) (b)
Figure
8: (a) The real scene viewed from our system.
(b) The complete model including four lights (A, B, C,
D).
Preprocess to enable interactive re-lighting
The main contribution of our approach is the preprocessing
algorithm which results in the generation of modified
original textures, approximating unoccluded radiosity in
the scene. We call these the unoccluded illumination tex-
tures. The original values of the textures, taken from the
initial scene photograph are thus modified to represent illumination
as if shadows of real objects where not taken into
account. These shadows can be due to real light sources,
or other secondary reflector objects.
Once we have created this unoccluded illumination tex-
ture, we can perform rapid relighting by modulating the
texture with a ratio, corresponding to the increase or decrease
in illumination due to lighting changes. This is
achieved using a mesh of elements created by the radiosity
algorithm. These elements are finer in the regions of
shadow, and sufficient to capture other changes in illumination
(for example due to indirect light).
The preprocess begins by setting up all necessary parameters
for the estimation of the real scene illumination as
in [8]. A suitably subdivided mesh is essential for the appropriate
modulation of texture; to achieve this a texture-based
refinement is applied. To create the unoccluded light
textures, the information contained in the radiosity solution
is used. Due to inaccuracies of the capture and reconstruction
process, we use a heuristic correction to this pro-
cess, based on shadow boundaries which are appropriately
inferred from the radiosity solution. The result of this process
are the unoccluded illumination textures, which can
then be modulated by the ratio of the final radiosity to unoccluded
radiosity to reproject shadows and other illumination
effects. This pre-process is explained in detail in
the next section.
Virtual objects and virtual light sources can then be inserted
if desired. The insertion of dynamic objects is performed
using the method of [8]. The algorithm used to
insert virtual light sources is described in Section 4.5.
Interactive Relighting
When the entire preprocessing step is completed, we can
interactively modify the illumination of the light sources.
The algorithm used is presented in Section 5. Our inter-
face, shown in Figure 9, allows the user to choose a new
emittance in each RGB channel for real and virtual light
sources.
A similar interface also exists for the insertion of real
and virtual lights or objects 2.
Figure
9: A screen snapshot of the interactive system. The
user can manually select new light intensities for real or
virtual light sources using the sliders shown in the inset.
2See quicktime video sequences on the web http://www-
imagis.imag.fr/Publications/loscos/TVCG00/index.html
4 Preprocessing for Virtual Interactive
Relighting
As in [8], we start by initializing the hierarchical radiosity
system based on textures extracted from the original pho-
tographs, as presented in detail in Section 2.4. To improve
the estimation of average reflectance, we first use the manually
set value as in [12], followed by an additional step,
which uses the average of the computed reflectances.
To achieve the modification of real lighting we need to
construct the unoccluded illumination textures, which are
textures capturing an approximation of the illumination in
the environment as if there were no occlusion from the
light sources and secondary sources.
The creation of these textures has two steps: first we add
in blocked light, using the information contained in the radiosity
solution. Since this gives imperfect results due to
the numerous approximations performed, a heuristic correction
is applied by finding an appropriate reference patch
which will give us a strong indication of the desired final
color.
For both steps, it is important to have an appropriate
mesh subdivision for radiosity, notably for the shadows on
objects which are visible for our given viewpoint. We begin
by describing our texture-based refinement, and then
proceed to describe the two steps of the unoccluded illumination
texture generation.
4.1 Texture-based refinement for
shadow boundaries
If we use standard refinement criteria, such as BF re-
finement [15] or error-driven refinement [13] we do not
obtain suitable radiosity mesh subdivision. The main problem
is that these approaches do not always guarantee good
shadow boundaries (even when using the visibility factor
of [15]). In addition, the problem is more apparent in our
case, since the geometry reconstruction and the visibility
computation via ray-casting are not completely accurate.
Discontinuity meshing [17] is unsuitable for the same rea-
sons, since discontinuity lines would be geometrically in-
accurate. As a consequence, we use quadtree subdivision,
with new, texture-based refinement.
The main idea is to use color information contained in
original textures (i.e., the photos of the real scene reprojected
as texture onto the reconstructed polygons), combined
with the visibility information provided by the radiosity
system as initialized above. Real shadows already
exist in the textures, and correspond to regions which are
darker. By using the visibility type (VISIBLE, PARTIAL,
OCCLUDED; see Section 2.3) contained in the links to patches in penumbra, and the color differences between
neighboring patches, we can force refinement in regions
corresponding to real shadow.
This refinement occurs after the first approximation of
the radiosity solution of the real scene using the approach
of [12, 8]. The first radiosity solution is used to initialize
several parameters such as reflectances and light source in-
tensities. As shown in Figure 10(a), the initial subdivision,
obtained using BF refinement, is coarse. Links have been
attached to the leaves of the hierarchy of patches as in Figure
10(b), to provide accurate visibility information with
respect to the source patches.
(a) (b)
Figure
10: (a) Coarse mesh before refinement. (b) All
links are at leaves.
The texture-based refinement algorithm compares the
visibility and the color of two neighboring leaves of the
patch hierarchy. The visibility must be consistent with the
color differences. We consider two cases, for a patch and
each of its neighbors (the meaning of similar for color
and visibility is defined below):
1. If the two patches have similar colors, they should
also have the same visibility type with respect to all
the real light sources. If it is not the case, then we
subdivide the patch.
2. If the two patches have different colors, they should
also have different visibility types. If not, we subdivide
the patch.
If the patch has been subdivided, we examine the children
created; if there is no change in visibility, the patch
subdivision is cancelled and the patch is again a leaf of the
patch hierarchy.
Case 1 occurs at the limits of shadow boundaries, and
helps in producing finer elements in these regions. The
process will stop when we reach the maximum subdivision
level or when the patches are separated into visible and in
shadow.
Case 2 occurs when ray-casting has failed to identify
the correct visibility type. The patch may be unsubdivided
however when the color difference is not due to a visibility
change, but to a different texture. This is the case for the
orange poster on the back wall in Figure 11(a).
Figure 12 shows how the refinement algorithm recursively
traverses the hierarchy of elements and compares
each pair of neighboring hierarchy leaves. We consider
that visibility is similar if the difference of the attenuation
factor is less than a visibility threshold fixed by the user.
Similarly, we consider two patches to have the same color
if the distance in color is less than a color threshold also
fixed by the user. To compute this distance, we first convert
RGB values into CIELAB values [10]. The distance
between the two colors is simply computed as the Euclidean distance in CIELAB space:
distance = sqrt((L1 - L2)^2 + (a1 - a2)^2 + (b1 - b2)^2) (4)
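Both similarity tests can then be sketched as follows, assuming a helper rgb_to_lab that implements the RGB-to-CIELAB conversion of [10]; the helper, the field accessors and the thresholds are illustrative assumptions.

def color_distance(rgb1, rgb2):
    # Euclidean distance in CIELAB space, as in Eq. (4).
    L1, a1, b1 = rgb_to_lab(rgb1)
    L2, a2, b2 = rgb_to_lab(rgb2)
    return ((L1 - L2) ** 2 + (a1 - a2) ** 2 + (b1 - b2) ** 2) ** 0.5

def similar_color(p, q, color_threshold):
    return color_distance(p.mean_color(), q.mean_color()) < color_threshold

def similar_visibility(p, q, source, visibility_threshold):
    # Attenuation factors are read from the links to the real light source.
    return abs(p.attenuation(source) - q.attenuation(source)) < visibility_threshold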
(a) (b)
Figure
11: (a) A patch in pink with a different color than
neighbors but the same visibility. The patch was not sub-
divided. (b) Mesh after texture-based refinement with improved
shadow boundaries. Compare to Figure 10.
At the end of the refinement process, we set up all the
necessary parameters: the reflectance ρ_i, the display correction
factor Di, which is equal to the original radiosity
Bi, and the texture Ti, which is the original texture
before any correction: i.e., extracted directly from the original
photographs.
Links from real light sources are fixed at the leaves of
the patch hierarchy. A radiosity step (gather/push-pull) is
then computed, corresponding to this new subdivision.
The texture-based refinement results in well-defined
shadow boundaries, which is very important for the subsequent
texture modification step. The resulting refinement
is shown in Figure 11(b).
4.2 Creating the unoccluded illumination textures, step 1: adding in blocked light
Once an appropriate refinement has been performed for a
radiosity solution, we can proceed with the creation of the unoccluded illumination textures.
Refinement for shadow boundaries
for each leaf i, compare with its neighbor leaves n
    if i has a similar color to n
        and a different light source visibility
        then subdivide i
    else if i has a different color to n
        and similar light source visibility
        then subdivide i
    else if the visibility type is PARTIAL
        then subdivide i
    if i has been subdivided
    then
        if i has no light source visibility
            differences with its children
        then remove the subdivision of i (i is a leaf again)
        else redo the process for each new child of i.
Figure 12: Texture-based refinement for shadow boundaries.
As mentioned above,
the first step consists in adding in blocked light. The
result of this step will be the generation of a modified tex-
ture, into which the blocked light has been incorporated.
We define Eis to be the irradiance from a source s
blocked from patch i due to occlusion. A source is either
a primary light source or a secondary source (i.e., a
reflecting patch). This additional irradiance is the sum of
the radiosity of each source times the form factor Fis and
the complement of the attenuation factor, equal to 1 - Vis,
for each primary or secondary source s.
Considering each real source, we have the additional irradiance
Ei for patch i:
Ei = Σ_s Bs Fis (1 - Vis) (5)
The fact that all links are at the patch hierarchy leaves
allows satisfactory estimation of Ei, since the form-factor
and visibility information are relatively accurate. For more
accuracy, we take into account the occluded indirect illu-
mination. However, since we have not reconstructed every
object of the scene, and since the geometric model is ap-
proximate, the occluded irradiance due to indirect illumination
is less accurate. In our tests the effect of adding in
indirect light at this step has not been decisive.
To generate a new texture with the blocked light added,
the original texture is modulated by a correction factor
computed at the vertices of the leaf radiosity patches.
Modulating the texture at patch vertices results in smooth
modified textures.
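The blocked-light pass can be sketched as follows, using the reconstruction of Eqs. (5) and (6) given here; the link and patch fields are hypothetical names.

def blocked_irradiance(patch, sources):
    # Eq. (5): E_i = sum over sources s of B_s * F_is * (1 - V_is), where the
    # sources may be primary lights or secondary (reflecting) patches.
    return sum(link.source.radiosity * link.form_factor * (1.0 - link.visibility)
               for link in patch.links if link.source in sources)

def intermediate_modulation(patch, sources):
    # Eq. (6): factor applied to the original texture at the patch vertices;
    # values above 1 require the multi-pass display of Appendix A.
    E_i = blocked_irradiance(patch, sources)
    D_i = patch.display_correction
    return (D_i + patch.reflectance * E_i) / D_i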
(a) (b) (c)
Figure 13: (a) Original texture. (b) The resulting texture T^inter. (c) The resulting texture T^final.
(a) (b) (c)
Figure 14: (a) Original texture. (b) The resulting texture T^inter, with real occluded illumination removed, mapped onto the geometry of the real scene. (c) The final texture T^final after the texture-based correction.
The correction factor is based on the additional irradiance
described above in Eq. (5). To include the blocked
radiosity, we modulate the original texture Ti as follows:
Ti^inter = ((Di + ρ_i Ei) / Di) Ti (6)
In this equation, Ei is the potentially blocked irradiance
(direct plus indirect), and Bi = Di + ρ_i Ei. However,
Ei is computed with the approximate values Fis, Vis and
Es, and thus the modulation of Eq. (6) is not sufficiently
accurate.
The intermediate texture T^inter is generated by rendering
the leaves of the radiosity hierarchy with appropriate
modulation values (Eq. (6)). If the modulation factor is
greater than one, a multi-pass approach is used, as described
in Appendix A.
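The fragmentary description of Appendix A suggests that a factor larger than one is split into several blended passes because the hardware clamps colors to [0, 1]. One possible way of splitting the factor, shown here as an assumption rather than the exact algorithm of Figure 25, is:

def modulation_passes(factor):
    # Split a modulation factor into per-pass weights in (0, 1]; drawing the
    # textured patch once per weight with additive blending sums to `factor`.
    weights = [1.0] * int(factor)
    remainder = factor - int(factor)
    if remainder > 0.0:
        weights.append(remainder)
    return weights

For example, a factor of 2.3 yields the weights [1.0, 1.0, 0.3].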
In Figure 13(b), we show an example of the texture generated
after the addition of the blocked light, on the floor's
original texture shown in (a). As can be clearly seen,
the values computed are far too bright in the regions of
shadow, due to the inaccuracies of the different processes
used.
The texture obtained after this first step is used to update
new reflectance values ρ^inter, extracted in the same
manner as for the original photographs (Section 2.4, Eq.
(1)). Radiosity values B^inter are updated using these new
reflectance values, as well as the display correction factor
D^inter, which is set equal to the newly computed radiosity
plus the blocked light.
As was demonstrated in the example (Figure 14(b)), the
resulting textures cannot be used as is. Typically, the resulting
texture is too bright, implying an overestimation of
radiosity. We believe that this is due to insufficient accuracy
of the emittance and reflectance estimation, the actual
radiosity calculation which includes form-factor and visibility
computation. The approximate geometric representation
may also add to these problems. To compensate for
the inaccuracies of the initial step, a subsequent heuristic
correction factor is applied, based on texture color.
4.3 Creating the unoccluded illumination textures, step 2: texture color based correction
The intuition behind the heuristic correction step is to estimate
the desired color of the unoccluded texture for a
given element in shadow, based on a truly unoccluded element
elsewhere on the same surface. We would like the
color of each pixel of the occluded part of the texture to
have a color similar to that of an unoccluded pixel. The
similarity is modulated by the form factors since we want
to keep unoccluded illumination effects in the final texture.
Consider a patch i in shadow, and a patch r chosen appropriately
which is unoccluded. If i and r were in the
same position, we would want the corresponding texture
values Ti and Tr to be equal. Since their position is different,
instead of equality, we want Ti to be equal to Tr modulated
by the ratio of form-factors of each patch to the source. For
the light sources S, the resulting desired value for the texture
for i is as follows:
Ti = Tr (Σ_s Fis) / (Σ_s Frs) (7)
Since we operate in the context of polygon-based hardware
rendering, we perform this correction on a per-patch
basis. We modulate the texture of each patch using a correction
factor. Instead of using the color of the texture,
we use reflectance values which are stored with the radiosity
system, and which are computed directly from textures
(Eq. (1)). We associate to each occluded mesh radiosity
element, a reference patch which will serve to correct
the texture. For each radiosity mesh element in shadow,
we thus attempt to find a corresponding unoccluded mesh
element. We attempt to find a patch which is close, and
which has similar reflectance values.
To do this, we first determine the frontier between
occluded and unoccluded patches according to all light
sources. Having all links at leaves ensures the classifica-
tion VISIBLE, INVISIBLE or PARTIAL for a given patch
with respect to a given source. We are therefore able
to define a frontier composed of completely unoccluded
patches that have occluded neighbors with respect to real
light sources. This frontier usually encloses the regions
where we need to modify the texture. However the algorithm
does not depend on the creation of a closed re-
gion. The frontier elements will be used as references as
explained below. From these selected elements, we keep
only those which are visible from the viewpoint. This restriction
is due to the view-dependent property of the textures
we use. An example of such frontier patches is shown in Figure 15(a).
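A sketch of this frontier extraction is given below; the visibility and neighbourhood accessors, and the VISIBLE constant, are assumptions used only for illustration.

def frontier_patches(leaves, real_sources, viewpoint):
    frontier = []
    for p in leaves:
        # Keep only patches completely unoccluded with respect to every real
        # light source and visible from the single captured viewpoint.
        if not all(p.visibility(s) == VISIBLE for s in real_sources):
            continue
        if not p.visible_from(viewpoint):
            continue
        # A frontier patch must have at least one occluded neighbour.
        if any(any(n.visibility(s) != VISIBLE for s in real_sources)
               for n in p.neighbours()):
            frontier.append(p)
    return frontier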
For each occluded patch i, we define a reference patch
chosen in the frontier of unoccluded patches. The reference
patch r is chosen to have a similar color as the
occluded patch and to be at a minimum distance from i.
For the occluded red patch, shown in Figure 15(b), the
algorithm chooses the black patch as a reference patch
from the frontier list of unoccluded elements shown in Figure
15(a). The black frontier patch is the closest patch that
has a similar color to the red occluded patch (see the algorithm
in Figure 16). We define colors to be similar if the
distance between them is less than a threshold defined by
the user.
(a) (b)
Figure
15: (a) Frontier in green composed of unoccluded
patches, which enclosed shadow regions. (b) Black patch
chosen in the frontier as a reference for the red selected
patch in shadow.
for each leaf i
    mindistance := infinity
    mincolor := color threshold (set by the user)
    for each patch n in the frontier list
        if Distance(i, n) < mindistance
        and DistanceColor(n, i) < mincolor
        then reference(i) := n; mindistance := Distance(i, n)
Figure 16: Algorithm to choose reference patches.
As for the refinement, reflectances are converted into
LAB values, and the distance DistanceColor is computed
using Eq. (4). If no patch in the frontier of unoccluded
elements is found for a certain patch i, then the reference
patch is a default reference patch previously selected by
the user on the polygon before the texture correction process.
Once the reference patch has been chosen, we use Eq.
(7), to determine the correction factor to be applied to the
texture of the patch. Since the reference patch is at a cer-
(a) (b) (c) (d)
Figure 17: (a) Display correction D^final corresponding to the new texture T^final. (b) Radiosity B^final corresponding to the new texture T^final. (c) The resulting final texture with shadows removed. (d) The resulting reprojection using these final values.
tain distance from the occluded patch, we modulate the
reflectance of the reference patch by the ratio of the form
factors Fis of patch i and Frs of patch r with respect to the
source s.
First, a corrected reflectance ρ^corr is computed:
ρ^corr = ρ_r (Σ_s Fis) / (Σ_s Frs) (8)
Using the corrected reflectance, we generate the final
unoccluded illumination texture. To generate this texture,
we render the textured leaf patches of the patch hierarchy
with an appropriate modulation factor, as when adding in
blocked light.
For occluded patches only, the texture T^inter is modulated
by the ratio of the corrected reflectance ρ^corr of patch i
over the intermediate reflectance ρ^inter computed directly
from the intermediate textures:
Ti^final = (ρ^corr / ρ^inter) Ti^inter (9)
If ρ^corr is greater than ρ^inter, we use the multi-pass display
method described in Appendix A, as for Step 1.
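The correction of this second step, following Eqs. (7)-(9) as reconstructed above, can be sketched as follows; the patch accessors are hypothetical.

def corrected_reflectance(i, r, sources):
    # Eq. (8): rho_corr = rho_r * (sum_s F_is) / (sum_s F_rs), where r is the
    # unoccluded reference patch chosen for the occluded patch i.
    f_i = sum(i.form_factor(s) for s in sources)
    f_r = sum(r.form_factor(s) for s in sources)
    return r.reflectance * f_i / f_r

def final_texture_modulation(i, r, sources):
    # Eq. (9): T_final = (rho_corr / rho_inter) * T_inter; factors above one
    # again fall back to the multi-pass display of Appendix A.
    return corrected_reflectance(i, r, sources) / i.reflectance_inter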
From this final unoccluded illumination texture T^final,
we recompute new reflectance values ρ^final for occluded
patches and perform a radiosity step, resulting in new
radiosity values B^final based on the new reflectance. We
then compute a new display correction factor D^final, equal
to the new reflectance times the sum of the occluded irradiance
Ei^final and the additional irradiance Ei (see Eq. (5)).
Note that this display factor does not take into account
shadow calculations.
An illustration of D^final is given in Figure 17(a), and
B^final is shown in Figure 17(b). The result of the final textures
is shown in Figure 17(c). Note that shadows have
been mostly removed, and the texture does effectively represent
illumination as if shadows had not been computed.
4.4 Shadow reprojection
After the steps previously described, we have a texture representing
unoccluded illumination; we now need a way to
(i) reproject original shadows and (ii) modify the intensity
and add virtual objects and light sources.
This is achieved by modulating the unoccluded illumination
texture T^final by the ratio Bi^final / Di^final, which intuitively
is the ratio of radiosity including shadow calculations over
radiosity without shadows. Since B^final has a smaller
value than D^final in regions of shadow, these areas are
darker. As a result, shadows are appropriately reprojected,
resulting in an image which is close to the original photo-
graph. The result of this process is shown in Figure 17(d).
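A sketch of this display step is given below; the renderer and patch fields are assumptions, and ratios above one (for example after adding virtual lights) would use the multi-pass display instead of the clamp shown here.

def display_ratio(patch):
    # Radiosity including shadow calculations over radiosity without shadows.
    return patch.radiosity_final / patch.display_correction_final

def reproject(patches, renderer):
    for p in patches:
        # Modulate the unoccluded illumination texture of each patch; in
        # shadow regions the ratio is below 1, which darkens the texture.
        renderer.draw_textured(p, modulation=min(display_ratio(p), 1.0))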
It is usually unnecessary to maintain the same subdivision
used to modify the textures during preprocess, since it
needs to be very fine. For the interactive updates, this can
be wasteful. Nonetheless, in some cases the mesh used for
the preprocess is satisfactory.
To generate a coarser mesh, we clear everything previously
computed, that is reflectances, radiosity, and the display
correction factor. We also clear the subdivision and
the link hierarchy. We then re-compute a solution based on
a simple BFV refinement [15], which results in a coarser
mesh. To compute both B^final and Di, two radiosity solutions
are actually performed. At the end of the first solu-
tion, the display correction factor Di is computed with all
links at the leaves of the mesh hierarchy to allow accurate
blocked light computation. A second radiosity solution is
then computed, but keeping the same mesh; this permits
the initialization of B^final, using fewer links. The resulting
mesh is shown in Figure 18.
Figure 18: After the texture modification, a radiosity solution may be computed using a BFV refinement. The resulting mesh is coarser in shadow regions than the one used to correct the texture.
4.5 Modified refinement to insert virtual sources
To treat the insertion of a virtual light source, we adapt the
method of [8], in which a virtual object can be inserted
into the real scene and interactively manipulated [9]. This
results in the projection of the shadows due to the virtual
source on the real objects. The influence of a virtual light is
often significant, and thus we force additional refinement
by establishing all links to virtual sources on the polygons
as opposed to allowing links from the virtual sources to the
clusters. This is done on the polygons visible in the captured
images; the polygons corresponding to the hidden
parts of the scene are not affected by this forced refine-
ment.
The additional light sources brighten the scene; again
the multi-pass method of Appendix A is used to achieve
this effect. Virtual light source insertion is illustrated in
Figures 26, 21, 22, and 24.
5 Final Relighting
At this stage, we have completed the preprocess, and mod-
ifications are based on changes to the radiosity system.
Links between patches and clusters in the radiosity hierarchy
have already been established, including the form factor
computation and the visibility determination. In order
to achieve fast updates, the subdivision and the links are
maintained during relighting. Since we only modify the
intensity of the light sources, the subdivision and links still
fit to the illumination even after modification. Keeping the
same hierarchy may result in overly fine mesh subdivision
if lights are switched off; since the user may switch them
on again later however, we prefer to maintain the mesh
subdivision. The modification process consists in recursively
removing radiosity stored at each level of the hierarchy.
We then perform a complete radiosity step: without
performing additional refinement, we gather radiosity
across the links, and perform the push-pull step to maintain
a coherent representation of radiosity in the hierarchy. The
iterative process is stopped when the global illumination is
stable.
This process is interactive since the costly refinement
step (which includes visibility and form-factor computation)
is avoided. The update time depends on the initial
level of subdivision. Note however that the insertion of a
virtual object may result in additional subdivision. The
update rate is the same if we modify one or several lights.
Example update rates are shown in Figure 19, and discussed
in the following section in detail.
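One such update can be sketched as follows; gather and push_pull are the routines sketched in Section 2.3, while clear_radiosity, the convergence test and the field names are assumptions made for illustration.

def clear_radiosity(elem):
    elem.radiosity = 0.0
    for child in elem.children:
        clear_radiosity(child)

def relight(root, new_emittances, tolerance=1e-3):
    # Change the source intensities, then iterate gather and push-pull over
    # the existing links (no refinement) until the solution is stable.
    for source, value in new_emittances.items():
        source.emittance = value
    clear_radiosity(root)
    previous = None
    while True:
        gather(root)
        total = push_pull(root)
        if previous is not None and abs(total - previous) < tolerance:
            break
        previous = total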
6 Results
We have tested the algorithm for two different real scenes.
For one of them, we also use radiance images obtained
using an adapted version of the algorithm of Debevec et
al. [6] (see Appendix B). For each scene, we present results
of relighting and adding virtual light and virtual objects,
all performed interactively. All timings are reported on an
SGI Onyx2 Infinite Reality workstation, R10000 running
at 195Mhz.
The first scene is shown in Figure 20(a), under the original
illumination. We first switch off the two back lights (C,
D) shown in Figure 8. In the resulting image Figure 20(b),
the scene is darker than the original illumination shown in
Figure 20(a) but with no change in shadows.
We then switch off the front left light (A) and double
the intensity of the right light (B) (see Figure 20). The
resulting shadow of the table is homogeneous in the umbra
regions. As expected, the shadow due to the left light
has disappeared, and the part of the scene which was illuminated
by this light source is darker. Compare the new
result with that of the method of [8] previously shown in
Figure 5(c), which was inexact, since real shadows were
not removed from textures.
We now switch on the left light with double the original
intensity and switch off the right light (see Figure 20(d)).
Again, shadows are as expected (i.e., the shadow boundary
of the right light is no longer visible). For each light
modification, the whole process (radiosity step and dis-
play) takes 0.8 seconds. The accompanying video3 shows
these light modifications, recorded in real time on an SGI
Onyx2 Infinite Reality workstation.
3See quicktime video sequences on the web http://www-
imagis.imag.fr/Publications/loscos/TVCG00/index.html
Refinement
Decrease light intensity
Time for modification | 0.2 sec. | 0.3 sec. | 0.7 sec.
Time for display | 0.2 sec. | 0.2 sec. | 0.6 sec.
Number of leaves/links | 3486/11246 | 5781/16807 | 8247/50787
Figure 19: Interactive modification of a virtual light source intensity. The time rate depends on the level of the subdivision and the number of established active links. The leaves are the bottom elements of the subdivision hierarchy.
(a) (b)
(c) (d)
Figure
20: (a) Original scene lit with shadow reprojection. (b) Back lights are virtually switched off. (c) Left light is
virtually switched off, and right light has double intensity. (d) Right light is virtually switched off, and left light has
double intensity. Note that in this case, the mesh used for the texture correction was sufficient.
We can also insert a virtual source, and modify its intensity
as described above. The insertion of the light source
takes 7.8 seconds. An interesting test is to switch off all the
real lights, and to illuminate the real scene only by a virtual
source (see Figure 21(a) and (b)). Notice that real shadows
from real light sources can no longer be seen. However,
real objects such as the table cast new shadows on the floor
and the walls, due only to the virtual light.
With this new illumination, we are still able to interactively
move a dynamic virtual object, such as the orange
box on the floor, previously inserted in 1.42 seconds,
in Figure 22. Updates take approximately 0.3 sec. per
frame when moving the virtual object, with the subdivision
shown in Figure 22(a). With both real and virtual il-
lumination, this virtual object casts shadows onto the real
scene.
(a) (b)
Figure
21: (a) Insert a virtual light. Switch off all real
lights. The real scene is lit only by the virtual light. (b) Decrease
the intensity of the virtual light.
We have also tested our method on another real scene4,
shown in Figure 23(a). In (b), we have removed the real
shadows from textures of this scene.
Another interesting test is to compare the results of our
algorithm with real photographs in which we have turned
off some of the real lights in the scene. In the right column
((a), (b), (c), (d)) of the Figure 24, we show the original
photographs taken under different lighting conditions. On
the left, we show the simulation resulting from our method
for the same lighting conditions5. We have first performed
real relighting in (e) by switching off the two back lights.
In (f), we have switched off the left front light, and in (g),
we have switched off the right front light. The reprojected
shadows are softer than for the original picture. However,
the overall lighting effect is similar. In (h), we inserted a
virtual light, with all the original lights turned off. To test
this scene, we took a photograph of the real scene using
4This real scene was modeled using the Rekon system developed at
Montreal [20]
5We have applied an appropriate scale-factor correction to account
for the differences of overall lighting levels
(a) (b)
(c) (d)
Figure
22: (a) Insertion of virtual object and the consequent
subdivision. (b), (c), (d) The orange virtual object is
moving at interactive rates.
a real light which was used as a basis when modeling the
virtual source. We show these images side by side.
For this scene, we performed the same modifications as
above using radiance images as textures. The radiance
images were obtained using the algorithm of Debevec et
al. [6], adapted to our automatic camera 6, as described in
Appendix B. The results of texture modification are
shown in Figure 23(c) and (d). Lighting modification has
also been performed and the results, shown in Figure 24(i),
(j), (k), (l), are very similar to those obtained using RGB
images.
We believe that for the cases presented, the high-dynamic
range images do not seem to provide a significant
advantage. This is probably due to the relatively low dynamic
range of the specific test images; for other cases the
difference could be more significant.
7 Discussion and Future Work
Concurrently with this work, and since the original submission
of this article, two new closely related methods
have been developed. We briefly discuss how our approach
relates to them, and then present our ideas for future work.
(a) (b) (c) (d)
Figure
23: (a) Original real scene viewed from our system with reprojection. (b) Real shadows were removed from real
world textures. (c) Original real scene viewed from our system, using radiance images as textures. (d) Real shadows
were removed from real world textures.
7.1 Discussion of more recent work
A recent method was developed by Yu et al. [30] which
permits relighting for indoors environments, as well as the
addition of virtual objects. In this method, reflectances
are estimated accurately both for the diffuse and specular
components, using a relatively large number of photographs
(around 30-40) and user-controlled constrained
lighting. However the rendering is computed using RADIANCE
[29], and is therefore far from interactive.
We [18] have also developed a completely different ap-
proach, which is based on taking several photographs of
the real scene, using different user-controlled lighting con-
ditions. This is used to estimate diffuse reflectance, for
relighting and adding virtual objects. This method allows
interactivity during scene modifications, but is based on
ray-tracing and thus is subject to limitations in the number
of sources and the resolution of the image.
The approach we present here is in many ways complementary
to the above. First, our approach has the simplest
capture process since both other methods require user-controlled
specific lighting, and more input photographs.
In our approach we simply photograph the scene from a
single viewpoint for the illumination processing 7. Sec-
ond, we do not attempt to perform a reflectance estima-
tion, since we use a simple texture modulation approach
for display. Finally, thanks to the use of the graphics hardware
for display, we can achieve faster update rates, with
fewer speed limitations.
It should be noted however that the approach of Yu et
al. handles any viewpoint in the environment (albeit non-
interactively), which is a significant advantage over our ap-
proach. The method in [18] notably allows the removal of
real objects using texture generation on the estimated re-
flectance.
7As with other methods of course, several photos are required for
geometric reconstruction.
7.2 Future work
Since the majority of the work is done during a preprocessing
step, the relighting process is interactive and allows
fast manipulation of the real scene. Two main issues need
to be addressed: the speed of the updates and the quality
of shadow removal.
The multi-pass display takes time, and could be optimized.
The radiosity steps could also be optimized
by avoiding the complete traversal of the hierarchy in the
spirit of [9], when doing some relighting.
The quality of the shadow removal is directly related to
the subdivision effected by the hierarchical radiosity algo-
rithm. Our texture-based refinement has greatly improved
the resulting quadtrees compared with traditional refine-
ment approaches, but it is still prone to problems mainly
due to inaccurate geometric reconstruction.
Another limitation of our system is the constraint to a
fixed view point. Users modifying the illumination of a
real scene would like to change the point of view to better
appreciate light effects. Building on recent work [20, 5]
we believe that we could develop a solution at least for a
limited set of viewpoints.
Another interesting research direction is to allow the removal
or replacement of real objects. The approach developed
in [18] shows how to do this on a pixel-by-pixel
basis. Even though the extension to our polygon-hardware
based rendering is non-trivial, we believe that it is feasible.
8 Conclusion
In this paper we have addressed the difficult problem of
common illumination for real and virtual scenes, and of
virtual relighting. The problem is hard, since real-world
photographs already contain real lighting effects such as
shadows, which need to be correctly identified if they are
to be modified.
Real Simulated (RGB Images) Simulated (Radiance)
(a) (e) (i)
(b) (f) (j)
(c) (g) (k)
(d) (h) (l)
Figure
24: (a), (b), (c), (d) are real photographs taken under different lighting conditions. (e), (f), (g) and (h) are
simulated images under respectively the same lighting conditions as the real photograph. (i), (j), (k) and (l) are the same
simulation as above but using radiance images.
We have presented a solution enabling interactive mod-
ification of both real and virtual illumination for reconstructed
real scenes. The algorithm we presented has three
main steps. The first step is the real scene reconstruction of
a 3D geometric model using advanced vision-based tech-
niques. The second step is a preprocessing algorithm. We
first initialize a radiosity system, and use its structure to
detect shadow regions, and appropriately refine the mesh.
Once these regions are identified, we modify real world
textures to include the real (primary and secondary) source
illumination blocked by real occluders, thus brightening
the dark shadow regions. After an additional heuristic cor-
rection, we can modulate these unoccluded illumination
textures to reproject real shadows appropriately. The resulting
simulated images are very similar to the original
photographs.
We then have the ability to select and change the intensity
of each real and virtual light source, and also add
virtual objects into the scene. The results are convincing:
shadow boundaries due to switched-off real lights disap-
pear, and we can modify light source intensities, and move
virtual objects. All modifications are interactive, and all
diffuse common illumination effects such as shadows be-
tween real and virtual objects, are correctly represented.
Acknowledgments
This work was funded in part by the ESPRIT Reactive LTR
Project ARCADE (#24944) of the European Union. We would
like to thank Sylvain Bougnoux for providing the TotalCalib
system, and Pierre Poulin and his colleagues for the use of
Rekon. We also would like to thank Cyril Soler and Francois
Sillion for the many discussions and help. Many thanks to Bruce
Walter for his invaluable input on this project, and to the anonymous
reviewers whose comments greatly improved the revised
document and results.
--R
http://www.
A survey of augmented reality.
Greenberg A progressive refinement approach to fast radiosity image generation.
Rendering synthetic objects into real scenes: Bridging traditional and image-based graphics with global illumination and high dynamic range photography
Recovering high dynamic
range radiance maps from photographs.
and rendering architecture from photographs: A hybrid
common illumination for computer augmented reality.
Rendering Techniques
EG workshop on Rendering)
Held in St.
global illumination using A line-space hierarchy
Color Appearance Models.
reconstruction of urban scenes from image sequences.
illumination between real and computer generated scenes.
finement and clustering for radiosity in complex environments
Computer Graphics Forum
Rendering Techniques in Computer Graphics (2nd EG
Workshop on Rendering)
A rapid hierarchical
Graphics (SIGGRAPH
Anne Ve
real and virtual objects in video sequences.
IEEE Workshop on Networked Realities
fr/syntim/textes/nr95-eng
Computer Graphics
A montage method: The overlaying of the computer generated images onto a background photograph.
Interactively modeling with photogrammetry.
Camera calibration without feature extraction.
Acquiring a radiance distribution to superimpose virtual objects onto a real scene.
A clustering algorithm for radiosity in complex environments.
Cyril Soler and Franc
Superior augmented reality registration by integrating landmark tracking and magnetic tracking.
The RADIANCE lighting simulation and rendering system.
Inverse global illumination: Recovering reflectance models of real scenes from photographs.
Recovering photometric properties of architectural scenes from photographs.
Using geometric corners to build a 2D mosaic from a set of images.
In our algorithm, the ratio Bi/Di is applied at display time as a color that modulates the texture. This display is insufficient when Bi is greater than Di, due to a limitation of the glColor function of OpenGL, which requires a color value between zero and one: any larger value is automatically clipped to one. A special treatment is therefore done when Bi/Di is greater than one: the patch is drawn in several passes blended with the glBlend function of OpenGL, one pass per unit of the integer part of Bi/Di plus a final pass for the remaining fraction of Bi/Di, each pass modulating the texture by the corresponding color. The multi-pass algorithm is described in Figure 25. The drawback of this approach is that the display time is increased during the light modification pass.
Figure 25: Multi-pass display.
Figure 26: (a) Insertion of a virtual light source, where a virtual light source has been added to the scene (passes in 0.3 seconds).
The multi-pass display blends several weighted textures to create a single one, i.e., textures with direct illumination. The use of RGB textures extracted from a camera has several limitations that can be avoided by high-dynamic range photographs [6]. We use a moderately priced digital camera such as the Kodak DC260, which costs ten times less than professional level digital cameras; a difficulty with cameras in this price range is the lack of precise control of the parameters (aperture and shutter speed). To create these radiance images we adapt the algorithm of Debevec et al. [6]: the camera provides nine different picture settings using the EV value, and we convert EV values into shutter speeds according to a reference time t set for EV 0, since each EV step changes the exposure by a corresponding power of two (2^EV).
When EV is negative: exposure time t/4, t/(2 sqrt(2)), t/2, t/sqrt(2), t
When EV is positive: exposure time t, sqrt(2) t, 2t, 2 sqrt(2) t, 4t
The results of this adapted algorithm are quite satisfactory. The camera response function extracted seems reasonable, and we avoid the problems of saturation and low dynamic range of RGB textures.
--TR
--CTR
Vincent Masselus , Philip Dutré , Frederik Anrys, The free-form light stage, Proceedings of the 13th Eurographics workshop on Rendering, June 26-28, 2002, Pisa, Italy
Enhua Wu , Qimin Sun , Xuehui Liu, Recovery of material under complex illumination conditions, Proceedings of the 2nd international conference on Computer graphics and interactive techniques in Australasia and South East Asia, June 15-18, 2004, Singapore
Byong Mok Oh , Max Chen , Julie Dorsey , Frédo Durand, Image-based modeling and photo editing, Proceedings of the 28th annual conference on Computer graphics and interactive techniques, p.433-442, August 2001
Samuel Boivin , Andre Gagalowicz, Image-based rendering of diffuse, specular and glossy surfaces from a single image, Proceedings of the 28th annual conference on Computer graphics and interactive techniques, p.107-116, August 2001
Zhouchen Lin , Tien-Tsin Wong , Heung-Yeung Shum, Relighting with the Reflected Irradiance Field: Representation, Sampling and Reconstruction, International Journal of Computer Vision, v.49 n.2-3, p.229-246, September-October 2002
Raphael Grasset , Laurence Boissieux , Jean D. Gascuel , Dieter Schmalstieg, Interactive mediated reality, Proceedings of the Sixth Australasian conference on User interface, p.21-29, January 30-February 03, 2005, Newcastle, Australia
Oliver Bimber , Ramesh Raskar, Modern approaches to augmented reality, ACM SIGGRAPH 2005 Courses, July 31-August | computer augmented reality;virtual relighting;interactivity;common illumination;hierarchical radiosity;global illumination |
615161 | Concurrency control issues in nested transactions. | The concept of nested transactions offers more decomposable execution units and finer-grained control over concurrency and recovery than "flat" transactions. Furthermore, it supports the decomposition of a "unit of work" into subtasks and their appropriate distribution in a computer system as a prerequisite of intratransaction parallelism. However, to exploit its full potential, suitable granules of concurrency control as well as access modes for shared data are necessary. In this article, we investigate various issues of concurrency control for nested transactions. First, the mechanisms for cooperation and communication within nested transactions should not impede parallel execution of transactions among parent and children or among siblings. Therefore, a model for nested transactions is proposed allowing for effective exploitation of intra-transaction parallelism. Starting with a set of basic locking rules, we introduce the concept of "downward inheritance of locks" to make data manipulated by a parent available to its children. To support supervised and restricted access, this concept is refined to "controlled downward inheritance." The initial concurrency control scheme was based on S-X locks for "flat," non-overlapping data objects. In order to adjust this scheme for practical applications, a set of concurrency control rules is derived for generalized lock modes described by a compatibility matrix. Also, these rules are combined with a hierarchical locking scheme to improve selective access to data granules of varying sizes. After having tied together both types of hierarchies (transaction and object), it can be shown how "controlled downward inheritance" for hierarchical objects is achieved in nested transactions. Finally, problems of deadlock detection and resolution in nested transactions are considered. | Introduction
When multiple users access a database simultaneously, their data operations have to be coordinated in
order to prevent incorrect results and to preserve the consistency of the shared data. This activity is
called concurrency control and should provide each concurrent user with the illusion that he is referencing
a dedicated database. The classical transaction concept [Eswaran76] defines a transaction as the
unit of concurrency control, that is, the database management system (DBMS) has to guarantee isolated
execution for an entire transaction. This implies that its results derived in a multi-programming environment
should be the same as if obtained in some serial execution schedule. Other important transaction
properties are atomicity, consistency, and durability as defined in [Härder83]. In a DBMS, the component
responsible for achieving these properties is transaction management which includes
concurrency control as a major function.
In current DBMSs, transaction management is typically designed with a single level control structure; its
implementation is optimized to execute short transactions with only a few data references [Anon85].
Two-phase locking is, by far, the most common method for controlling concurrency among transactions
and has been accepted as a standard solution [Bernstein81, Gray78]. When running on a centralized
DBMS, transaction granularity as well as locking protocols usually obtain satisfactory performance; for
high performance transaction systems, special concurrency control methods are considered to be mandatory
to increase the level of parallelism [Gawlick85, Reuter82], however, without requiring changes to
the transaction concept.
When executing more complex transactions involving, for example, sequences of joins and sort operations
in a relational DBMS, it turns out that single level transactions do not achieve optimal flexibility and
performance. Especially in distributed systems, it is highly desirable to have more general control structures
supporting reliable and distributed computing more effectively. Major concerns are more decomposable
and finer grained control of concurrency and recovery. As a solution to these problems, the concept
of nested transactions was proposed by Moss [Moss85] where single level transactions are enriched
by an inner control structure. Such a mechanism allows for the dynamic decomposition of a
transaction into a hierarchy of subtransactions thereby preserving all properties of a transaction as a unit
and assuring atomicity and isolated execution for every individual subtransaction. As a consequence,
subtransactions may be distributed in a system among various (processor) nodes performing subtasks
of the entire transaction. These prime aspects of nested transactions - decomposition of a 'unit of work'
into subtasks and their distribution - lead to the following advantages in a computing system and, in par-
ticular, in a distributed DBMS:
Intra-transaction parallelism
The larger a transaction is, the more inherent parallelism may be anticipated during its execution. To take
advantage of this inherent concurrency in the application, suitable granules of concurrency control as
well as access modes (e.g. locking modes) are necessary. In environments enabling parallel execution,
the nested transaction concept embodies an appropriate control structure to support supervised and,
therefore safe intra-transaction concurrency, thereby increasing efficiency and decreasing response
time.
Intra-transaction recovery control
An uncommitted subtransaction can be aborted and rolled back without any side-effects to other transactions
outside its hierarchy. Hence, the concept of nested transactions contributes to a considerable
refinement of the scope of in-transaction UNDO as compared to single level transactions where UNDO-
recovery necessarily yielded the state of 'begin of transaction' (BOT). It may be further refined by adding
an appropriate savepoint concept to nested transactions [Härder87, Rothermel89].
Explicit control structure
When parallel and asynchronous activities are to be coordinated for a single 'unit of work' (from an external
point of view), the introduction of a powerful explicit control structure allowing for the delegation of
pieces of work and their atomic execution appears to be mandatory. Such a structure will greatly reduce
the complexity of programming and enhance the reliability of transaction processing.
System modularity
Subtransactions facilitate a simple and safe composition of a transaction program whose modules may
be designed and implemented independently. This system modularity serves other design goals as well:
encapsulation (information hiding), failure limitation, and security.
Distribution of implementation
The concept of the nested transaction supports the implementation of distributed algorithms by a flexible
control structure for concurrent execution. Distribution of data and processing, in turn, have a major impact
on overall efficiency, in terms of both cost-effective use of hardware (special processors, I/O devic-
es) and responsiveness. Distribution also affects availability (replication of data). Hence, the robustness
of the system may be improved in various ways.
In a centralized DBMS, nested transactions have some uses, however, they do not exploit their full potential
due to the lack of resources. An obvious advantage is the clearer control structure for the execution
of complex transactions supporting the design of more reliable programs. It also allows for the isolated
rollback of an uncommitted subtransaction in the case of forced abort or transaction failure. When
serializability of transactions controlled by strict 2-phase locking protocols (or equivalent methods) is re-
quired, neither lock granules nor lock duration are affected by such an approach. Subtransactions do not
release their locks; they are inherited by their parent transaction.
For multi-layered centralized DBMS, some kind of multi-level transaction management was provided
where the subtransactions serve as control structures in the various layers. To gain a higher degree of
concurrency and more flexible control of lock granules, a so-called multi-level concurrency control was
introduced. Furthermore, isolated rollback of subtransactions can be guaranteed. In System R
this concept is used for two layers: locking is applied twice, on tuples until EOT (long tuple
locks) and on pages for the duration of each tuple operation (short page locks for actions). Since tuple
operations can be regarded as subtransactions and page locks are released before EOT of the parent
transaction, this technique has been called "open nested transaction". The problems involved were discussed
in [Gray81].
The generalization of open nested transactions for centralized systems - called multi-level transactions -
was proposed in [Weikum86, Weikum84, Moss86, Been89] to allow for early release of locks at lower
levels of control; however, they rely on compensation operations for subtransactions to be applied in the
case of rollback recovery. A detailed description of all aspects of multi-level transaction management
including a discussion of performance issues is given in [Weikum91]. Here, we don't want to consider
this kind of multi-level structure and concurrency control for centralized DBMS operations.
Due to the salient properties supported by nested transactions, many researchers have focussed attention
on the design and implementation of them in distributed systems. Our approach is based on the pro-
posals, results and experiences of distributed systems' design, especially as reported in [Allchin83,
Jessop82, Liskov85, Müller83, Spector83, Walter84]; it tries to adjust the concept of nested transactions
and improves its use for distributed DBMS. Our prime goal is its investigation and its conceivable extension
for flexible intra-transaction parallelism. Due to space limitations we restrict our discussion to concurrency
control and deadlock detection issues. Recovery problems are dealt with in [Moss87,
Härder87, Rothermel89]. To facilitate our discussion, we introduce a model for nested transactions; it is
designed so as to prohibit neither parent/child- nor sibling-parallelism. In Sec. 3, the basic concurrency control
model invented by Moss [Moss85] is discussed. In some systems, it has been extended and refined
by the concept of downward inheritance enabling transactions to pass on locks to their child transac-
tions. In Sec. 4, we propose a number of generalizations and extensions for concurrency control in nested
transactions. The concept of controlled downward inheritance enables a parent to give its child access
to shared data and at the same time to restrict its mode of usage. Another refinement allows the
use of more general lock modes as compared to the simple S-X lock model. Hence, applications may
better adjust their synchronization needs. So far, all efforts are directed towards enhancement of concurrency
control in transaction hierarchies operating on 'flat' objects. Since every practical DBMS is
forced to use an object hierarchy to provide fine as well as coarse lock granules at reasonable cost, we
design a concurrency control protocol which combines object and transaction hierarchies as well as supports
controlled downward inheritance. We conclude and summarize our results in the final section.
2. A Model of Nested Transactions
The concurrency control techniques we are going to present in this paper are based on the nested trans-action
model introduced by Moss [Moss85]. A transaction may contain any number of subtransactions,
which again may be composed of any number of subtransactions - conceivably resulting in an arbitrarily
deep hierarchy of nested transactions. The root transaction which is not enclosed in any transaction is
called the top-level transaction (TL-transaction). Transactions having subtransactions are called par-
ents, and their subtransactions are their children. We will also speak of ancestors and descendants.
The ancestor (descendant) relation is the reflexive transitive closure of the parent (child) relation. We will
use the term superior (inferior) for the non-reflexive version of the ancestor (descendant). The set of
descendants of a transaction together with their parent/child relationships is called the transaction's
hierarchy. In the following, we will use the term 'transaction' to denote both TL-transactions and subtransactions.
The hierarchy of a TL-transaction can be represented by a so-called transaction tree. The nodes of the
tree represent transactions, and the edges illustrate the parent/child relationships between the related
transactions. In the transaction tree shown in Fig. 1, the root is represented by TL-transaction A. The
children of subtransaction C are D, F, and G, and the parent of C is B. The inferiors of C are D, E, F, and
G, and the superiors are B and A. Of course, the descendants and ancestors sets of C additionally contain
C itself. The hierarchy of C is depicted as the subtree spanned by C's descendants.
Figure 1: Example of a Transaction Tree
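To make the tree terminology concrete, the following small Python sketch (not part of the original paper; the class name and the placement of E under D are illustrative assumptions) models transactions with parent/child links and the reflexive ancestor relation defined above.
class Transaction:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent              # None for a TL-transaction
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def ancestors(self):
        # reflexive transitive closure of the parent relation
        node, result = self, []
        while node is not None:
            result.append(node)
            node = node.parent
        return result

    def is_ancestor_of(self, other):
        return self in other.ancestors()

# the transactions named in the text for Fig. 1 (E is attached to D for illustration)
A = Transaction("A"); B = Transaction("B", A); C = Transaction("C", B)
D = Transaction("D", C); F = Transaction("F", C); G = Transaction("G", C)
E = Transaction("E", D)
assert B.is_ancestor_of(C) and A.is_ancestor_of(C)    # the superiors of C are B and A
assert C.is_ancestor_of(C) and C.is_ancestor_of(E)    # reflexivity; E is an inferior of C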
The properties defined for flat transactions are atomicity, consistency, isolated execution, and durability
(ACID-properties) [Härder83]. In the nested transaction model, the ACID-properties are fulfilled
for TL-transactions, while only a subset of them are defined for subtransactions. A subtransaction appears
atomic to the other transactions and may commit and abort independently. Aborting a subtransaction
does not affect the outcome of the transactions not belonging to the subtransaction's hierarchy,
and hence subtransactions act as firewalls, shielding the outside world from internal failures. If the concurrency
control scheme introduced by Moss is applied, isolated execution is guaranteed for subtrans-
actions. However, to increase intra-transaction parallelism the enhanced schemes proposed in this pa-
per allow transactions belonging to the same TL-transaction hierarchy to share data in a controlled man-
ner. The durability of the effects of a committed subtransaction depends on the outcome of its superiors:
even if it commits, aborting one of its superiors will undo its effects. A subtransaction's effects become
permanent only when its TL-transaction commits. The consistency property for subtransactions seems
to be too restrictive, as sometimes a parent transaction needs the results of several child transactions to
perform some consistency preserving actions.
To exploit the inherent potential of nested transactions and their advantages as stated in Sec. 1, the degree
of intra-transaction parallelism should be as high as possible. Two kinds of intra-transaction parallelism
can be defined, parent/child-parallelism and sibling-parallelism. If the first kind of parallelism
is supported, then a transaction may run in parallel to its children, while in the second kind siblings are
allowed to run concurrently. Using both these definitions, we are able to characterize four levels of intra-
transaction parallelism:
. Neither parent/child- nor sibling-parallelism: At any point in time there is at most one transaction
active in a TL-transaction hierarchy, i.e. there is no intra-transaction parallelism at all. Since all transactions
in a hierarchy are executed serially, no concurrency control among them is needed. For ex-
ample, if each transaction is executed by a single process and processes only communicate by
means of a (synchronous) remote procedure call mechanism, only this level of "parallelism" can be
provided.
. Only sibling-parallelism: If only siblings may be performed concurrently, then a transaction never
runs in parallel with its superiors. This kind of restricted parallelism enables a transaction to share
objects with its ancestors without further concurrency control. For example, in the ARGUS system
[Liskov85], the intra-transaction parallelism is restricted to this level.
. Only parent/child-parallelism: Since a transaction and its children may run concurrently but siblings
may not, in a TL-transaction hierarchy only the transactions along one path of the hierarchy may run
in parallel. This kind of restriction simplifies intra-transaction concurrency control in the sense that
only transactions residing in the same path have to be synchronized with each other. (This reason
hardly justifies such a system design).
. Parent/child- as well as sibling-parallelism: This level permits arbitrary intra-transaction parallel-
ism, i.e. in principle, all transactions of a TL-transaction hierarchy may be executed concurrently. Of
course, compared to the degrees of parallelism described above, this degree requires the most sophisticated
concurrency control scheme. For example, LOCUS [Müller83] and CLOUDS [Allchin83]
support this level of parallelism.
As discussed so far, our transaction model does not contain any essential restrictions. Transactions may
either be performed entirely on a single processor site or may be distributed over multiple processors
located at one or more sites. Moreover, the model does not restrict the kind of data distribution implemented
by the underlying system, and hence our considerations apply for data sharing as well as for
data distribution approaches [Rahm92]. Since we focus on concurrency control concepts, introduction
of further refinements or implementation issues would only burden our discussion.
3. Basic Locking Rules for Nested Transactions
Locking as the standard method of concurrency control in DBMS has been used successfully for a variety
of applications over the past decade and longer. Therefore, it is reasonable to choose conventional
locking protocols as our starting point of investigation for nested transactions. Conventional locking protocols
offer two modes of synchronization - read, which permits multiple transactions to Share an object
at a time, and write, which gives the right to a single transaction for eXclusively accessing an object (e.g.
see [Gray78]). As far as concurrency control is concerned, our data model initially consists of disjoint
objects O i which are the lockable units.
In the next part of this section, we will summarize the locking scheme for nested transactions proposed
by Moss [Moss85]. This scheme only allows for upward inheritance of locks, i.e. a transaction can inherit
locks from its children, but not vice versa. In the last part, we will extend this scheme such that it supports
upward as well as downward inheritance. Both schemes presented in this section have been implemented
in several systems.
3.1 Upward Inheritance of Locks
Before describing the locking rules proposed by Moss, we have to introduce some terminology. Possible
lock modes of an object are NL-, S-, and X-mode. The null mode (NL) represents the absence of a lock
request for or a lock on the object. A transaction can acquire a lock on object O in some mode M; then
it holds the lock in mode M until its termination. Besides holding a lock a transaction can retain a lock.
When a subtransaction commits, its parent transaction inherits its locks and then retains them. If a trans-action
holds a lock, it has the right to access the locked object (in the corresponding mode), which is not
true for retained locks. A retained lock is only a place holder. A retained X-lock, denoted by r:X (as opposed
to h:X for an X-lock held), indicates that transactions outside the hierarchy of the retainer cannot
acquire the lock, but that descendants of the retainer potentially can. That is, if a transaction T retains
an X-lock, then all non-descendants of T cannot hold the lock in either X- or in S-mode. If T is a retainer
of an S-lock, it is guaranteed that a non-descendant of T cannot hold the lock in X-mode, but potentially
can in S-mode. As soon as a transaction becomes a retainer of a lock, it remains a retainer for that lock
until it terminates.
Having introduced this terminology, we can formulate the locking rules now:
R1: Transaction T may acquire a lock in X-mode if
. no other transaction holds the lock in X- or S-mode, and
. all transactions that retain the lock in X- or S-mode are ancestors of T.
R2: Transaction T may acquire a lock in S-mode if
. no other transaction holds the lock in X-mode, and
. all transactions that retain the lock in X-mode are ancestors of T.
R3: When a subtransaction T commits, the parent of T inherits T's (held and retained) locks. After that,
the parent retains the locks in the same mode (X or S) in which T held or retained the locks previ-
ously. 2
R4: When a transaction aborts, it releases all locks it holds or retains. If any of its superiors holds or
retains any of these locks they continue to do so.
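Purely as an illustration (and not as Moss's implementation), rules R1-R4 can be sketched in Python for a single lockable object as follows; the dictionary-based hierarchy, the lock table and names such as may_acquire are assumptions made for this sketch, and commit() is shown for subtransactions only.
# Illustrative sketch of rules R1-R4 for one lockable object.
parent = {"A": None, "B": "A", "C": "B", "T": "C"}      # hypothetical hierarchy

def ancestors(t):
    out = []
    while t is not None:
        out.append(t)
        t = parent[t]
    return out

lock = {"holders": {}, "retainers": {}}                  # transaction -> 'S' or 'X'

def may_acquire(t, mode):
    # R1/R2: no conflicting holder, and every conflicting retainer is an ancestor of t
    conflicts = {"X": {"S", "X"}, "S": {"X"}}[mode]
    if any(m in conflicts for h, m in lock["holders"].items() if h != t):
        return False
    return all(r in ancestors(t) for r, m in lock["retainers"].items() if m in conflicts)

def commit(t):
    # R3 (subtransaction commit): the parent inherits and retains t's locks
    for table in (lock["holders"], lock["retainers"]):
        if t in table:
            mode = table.pop(t)
            old = lock["retainers"].get(parent[t])
            lock["retainers"][parent[t]] = "X" if "X" in (mode, old) else "S"

def abort(t):
    # R4: all locks held or retained by t are released
    lock["holders"].pop(t, None)
    lock["retainers"].pop(t, None)

assert may_acquire("T", "X")
lock["holders"]["T"] = "X"
commit("T")                       # C inherits and retains the X-lock (R3)
assert may_acquire("C", "S")      # descendants of the retainer may still acquire it
assert not may_acquire("A", "S")  # transactions outside C's hierarchy cannot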
Obviously, the rules stated above only allow for upward inheritance of locks, i.e. a transaction can only
inherit its children's locks, but not vice versa. The principle of upward inheritance is exemplified in Fig.
2, where we use the notions X- and S-sphere for describing the implications of this principle.
2. Note, the inheritance mechanism may cause a transaction to (conceptually) retain several locks on the same
object. Of course, the number of locks retained by a transaction should be limited to one by only retaining the
most restrictive lock.
Figure 2: Upward Inheritance of Locks (a: R retains the X-lock after T acquired an S-lock, and the situation after EOT(T); b: R retains the X-lock after T acquired an X-lock, and the situation after EOT(T); the X- and S-spheres are indicated)
The X-sphere (S-sphere) of an object is defined to be the set of transactions that can potentially lock this object
in X-mode (S-mode). In Fig. 2a, the X-sphere of an object disappears entirely when transaction T acquires
an S-lock on this object, i.e. no transaction may acquire an X-lock on this object, while each trans-action
in R's hierarchy may lock the object in S-mode. After commit of T, a new X-sphere is established,
which consists of the descendants of T's parent transaction. In Fig. 2b, the X- as well as the S-sphere
disappear when T acquires an X-lock. When T commits, a new X- and S-sphere are established. In gen-
eral, a transaction acquiring a lock on an object may cause the object's X- or S-sphere to shrink, while
the termination of a transaction may cause them to grow.
The rules stated above only allow for upward inheritance at commit time, i.e. a transaction may not inherit
a child's locks before the latter commits. This restriction guarantees that transactions can see the effects
of committed children only, and hence are not affected by failures of children. Furthermore, this restriction
ensures that the subtransactions of a transaction tree are serializable. Allowing upward inheritance
before commit time would cause transactions to become dependent on the outcome of child transac-
tions, i.e. subtransactions would not act as firewalls anymore [Härder87] such that application code within
a subtransaction had to cope with concurrency and recovery issues.
3.2 Downward Inheritance of Locks
We feel that especially the restrictions caused by only allowing upward lock inheritance prevent desirable
decompositions of transactions into a set of cooperating subtransactions. For example, assume an
application that navigates through an object base and updates each of the accessed objects. A desirable
decomposition of the above task is depicted in Fig. 3. Transaction T reads an object and determines the
next object to be accessed by applying the Next operation on the current object's content. Then T creates
a new child transaction, which asynchronously performs an Update operation on the current object,
while T reads in the next object, on which it acts as described above. This decomposition has some appealing
characteristics: (1) Update operations are performed in parallel. (2) If an update operation fails,
it does not affect the other operations. A failed operation can be restarted at a later point in time. (3) The
update operations are performed in isolation from each other. This is of particular importance if the up-date
of an object may imply updates on other objects. For example, the update of two different objects
may imply two updates of the same access path.
Figure 3: Decomposition of an Application (transaction T reads objects O1, O2, O3, ... and creates child transactions that perform Update(O1), Update(O2), ... asynchronously)
Unfortunately, this decomposition is impossible if the basic locking rules proposed by Moss are applied.
To be able to perform the Next operation on an object, T must hold an S-lock (read lock) on this object. Since T
must hold this S-lock until it commits, no child of T can ever acquire an X-lock on this object. In other
words, once an object has been read by T, it cannot be updated by T's children anymore.
The decomposition required in the example of Fig. 3 is possible as soon as downward inheritance of
locks is supported by the underlying locking scheme. In such a scheme, subtransactions may inherit
locks from superiors, where inheritance of a lock can only take place after the superior holding this lock
has explicitly offered this lock for downward inheritance. A transaction can offer a lock it holds to the
transactions in its hierarchy, which can then acquire the lock according to the locking rules stated above.
Consequently, the concept of downward inheritance allows a transaction to make all or a subset of its
locks available to its hierarchy. The locking rules proposed by Moss can easily be extended to support
downward inheritance by adding a new rule:
R5: A transaction T holding a lock can offer the lock (to the transactions in its hierarchy). After offering
the lock, T retains the lock in the same mode it held the lock before.
A transaction offering a lock (temporarily) disclaims the right to access the locked object and gives the
transactions in its hierarchy the opportunity to lock this object in any mode. Of course, in its hierarchy
there might be either at most one transaction holding the lock in X-mode or a number of transactions
holding the lock in S-mode. Since the transaction offering a lock still retains the lock in the mode it held
the lock before, no transaction from outside its hierarchy can lock the object in a mode that conflicts with
the mode of the retained lock. To become a holder again, the transaction must acquire the lock anew,
which only succeeds if rules R1 and R2 stated above are fulfilled.
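In terms of the illustrative holders/retainers representation used in the sketch above (an assumption of ours, not the paper's notation), offering a lock under rule R5 simply moves the entry from the held locks to the retained locks:
def offer(t, lock):
    # R5: t stops holding the lock but retains it in the same mode, so afterwards
    # only transactions in t's hierarchy can acquire it again via R1/R2
    if t in lock["holders"]:
        mode = lock["holders"].pop(t)
        prev = lock["retainers"].get(t)
        lock["retainers"][t] = "X" if "X" in (mode, prev) else "S"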
An example for applying the lock offering mechanism is illustrated in Fig. 4. When transaction R offers
the X-lock it holds, an S- and X-sphere comprising R's hierarchy is established for the corresponding
object, i.e. all descendants of R have the opportunity to lock the object either in S- or in X-mode. As depicted
in the example, the object's X- and S-sphere disappear when a descendant of R locks the object
in X-mode.
Figure 4: Downward Inheritance of Locks (states shown: R holding the X-lock; after R offered the X-lock; after T acquired the X-lock; after EOT(T); the X- and S-spheres are indicated)
If downward inheritance of locks is possible, the isolation property of transactions may be violated. While
transactions belonging to different TL-transaction hierarchies still cannot interfere, transactions of the
same hierarchy may share data. As a consequence, a transaction may see uncommitted data of supe-
riors. This, however, cannot lead to inconsistencies since the effects of the transaction are undone when
a superior aborts. On the other hand, a transaction may never see uncommitted data of inferiors, i.e.
subtransactions act as firewalls even if downward inheritance of locks is allowed.
A lock offering mechanism similar to the one described above has been implemented in the LOCUS system
[Müller83]. Some kind of automatic downward inheritance is provided in the ARGUS system
[Liskov85]. In this particular approach, concurrency control is considerably simplified, since conflicts
among transactions in a hierarchical path are prevented by only allowing sibling-parallelism. Automatic
downward inheritance is then implicitly obtained by the rule that a transaction may acquire a lock if each
transaction holding this lock is a superior of it.
4. Enhanced Concurrency Control for Nested Transactions
By using the idea of downward inheritance we gain more flexibility of lock inheritance in a given trans-action
hierarchy, however, with poor control over its specific usage. For this reason, the kind of offering
concept has still some shortcomings in situations, where a transaction offering a lock desires to control
the mode in which its inferiors can hold the lock. For example, consider the access sequence shown in
Fig. 3 once more. With the additional rule R5 it is possible to achieve the desired decomposition. When
transaction T offers the X-lock for O1, O2, O3, etc., then its children C1, C2, C3, etc., can acquire and
hold the lock in any mode later on. However, it would be helpful if T could prevent some child Ci from
being able to hold the lock in X-mode in order to make sure that Ci cannot change the respective object Oi.
4.1 Controlled Downward Inheritance
The need for controlling the lock mode in which inferiors can access an offered object becomes more
obvious if we consider an example from a cooperative design environment [Bancilhon85, Kim84]. Fig. 5
shows a design task which is structured as a three-level transaction hierarchy. Assume, transaction B
generates an object O, describing the interface of a work piece. Transactions C and D, which are children
of B, design subparts of the work piece and therefore require read access to the interface descrip-
tion. To allow its children to read O, B must offer the lock which it holds on O. If there is no way to control
the mode in which children can hold the lock, one of the children may acquire the lock in X-mode which
has two undesirable consequences: First, the child can change O, and second, the child blocks its siblings
by preventing them from reading O.
To overcome these problems we suggest an extension of the locking rules introduced in the previous
section. In the scheme discussed previously, a transaction can offer locks that it holds to its inferiors. In
the extended scheme, we will replace the lock offering mechanism by primitives supporting the upgrading
and downgrading of locks:
Downgrade
A transaction T holding a lock in mode M can downgrade the lock to a less restrictive mode M'. After
downgrading the lock, transaction T holds the lock in mode M' and retains the lock in mode M. For ex-
ample, a transaction holding a lock in X-mode can downgrade the lock to mode S or NL 3 .
Upgrade
A transaction T holding a lock in mode M can upgrade the lock to a more restrictive mode M' if the following
condition is satisfied: No other transaction holds the lock in a mode conflicting with M', and all
transactions that retain the lock in a mode conflicting with M' are ancestors of T 4 . For example, a trans-action
T holding a lock in S-mode can upgrade the lock to mode X if no other transaction holds the lock
in X- or S-mode, and all transactions retaining the lock in X- or S-mode are ancestors of T.
In the extended scheme, holding and retaining a lock have exactly the same semantics as in Moss's
scheme. After downgrading a lock from mode M to mode M', a transaction holds the lock in mode M' and
retains it in mode M. Since the transaction retains the lock in M-mode, it prevents transactions outside
its hierarchy from holding the lock in a mode conflicting with M. On the other hand, since it holds the lock
in mode M' it keeps its inferiors from holding the lock in a mode conflicting with M'. That is, in contrast to
the offering of locks described in the basic scheme, downgrade allows a transaction to control how its
inferiors can hold a lock.
3. Note that downgrading to NL does not correspond to a general release of the lock. Release of such
locks is limited to the sphere of the downgrader.
4. Note, this condition is equivalent to the condition that must be satisfied for a transaction acquiring a lock.
Figure 5: Decomposition of a Design Task (a: task structure; b: concurrent work steps, using L(X) - acquire lock on O in X-mode, L(S) - acquire lock on O in S-mode, D(S) - downgrade lock on O to S-mode, U(X) - upgrade lock on O to X-mode; B retains O in X-mode and holds it in S-mode while C and D read O)
For example, if a transaction T downgrades a lock from X- to S-mode, it prevents
transactions outside its hierarchy from holding the lock in any mode, and precludes its inferiors
from holding the lock in X-mode, but allows its inferiors to hold the lock in S-mode. Of course, downgrading
a lock to NL-mode is equivalent to offering a lock in the basic scheme.
As stated above, the holder of a lock can upgrade the lock to a mode which is more restrictive than its
current hold mode. This feature allows a transaction to upgrade a lock which it downgraded previously,
e.g. a transaction that downgraded a lock to S-mode could again upgrade the lock to mode X as soon
as its children have committed. Of course, transactions can also upgrade a lock without having downgraded
it before. In the following, we will describe the extended locking rules. Italics will be used to point
out the extensions added to Moss's scheme:
ER1: Transaction T may acquire a lock in X-mode or upgrade a lock it holds to mode X if
. no other transaction holds the lock in X- or S-mode, and
. all transactions that retain the lock in X- or S-mode are ancestors of T.
ER2: Transaction T may acquire a lock in S-mode if
. no other transaction holds the lock in X-mode, and
. all transactions that retain the lock in X-mode are ancestors of T.
ER3: When a subtransaction T commits, the parent of T inherits T's (held and retained) locks. After that,
the parent retains the locks in the same mode (X or S) as T held or retained them before.
ER4: When a top-level transaction commits, it releases all locks it holds or retains.
ER5: When a transaction aborts, it releases all locks it holds or retains. If any of its superiors hold or
retain any of these locks, they continue to do so.
ER6: A transaction T holding a lock in X-mode can downgrade the lock to mode S or NL. After performing
the downgrade operation, T retains the lock in X-mode.
ER7: A transaction holding a lock in S-mode can downgrade the lock to mode NL. After performing the
downgrade operation, T retains the lock in S-mode.
The mode to which a transaction T downgrades a lock determines the modes in which the transactions
of T's hierarchy cannot hold the lock. If the downgraded mode is S, the transactions of T's hierarchy cannot
hold the lock in X-mode (since S conflicts with X). If the downgraded mode is NL, then the transactions
in T's hierarchy can potentially hold the lock in any mode.
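The downgrade and upgrade primitives of rules ER1, ER6 and ER7 might be sketched as follows; the holders/retainers representation, the mode ordering NL < S < X and the parameter may_acquire (the test of ER1/ER2) are again illustrative assumptions, not the paper's formulation.
STRICTNESS = {"NL": 0, "S": 1, "X": 2}

def downgrade(t, lock, new_mode):
    # ER6/ER7: t keeps retaining the old, more restrictive mode and afterwards
    # holds the lock only in the weaker mode (or not at all when new_mode is NL)
    old = lock["holders"].get(t, "NL")
    assert STRICTNESS[new_mode] < STRICTNESS[old]
    prev = lock["retainers"].get(t, "NL")
    lock["retainers"][t] = old if STRICTNESS[old] > STRICTNESS[prev] else prev
    if new_mode == "NL":
        lock["holders"].pop(t, None)
    else:
        lock["holders"][t] = new_mode

def upgrade(t, lock, new_mode, may_acquire):
    # ER1: upgrading succeeds under the same condition as acquiring the lock anew
    if may_acquire(t, new_mode):
        lock["holders"][t] = new_mode
        return True
    return False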
Some examples may help to clarify the key issue of controlled downward inheritance. The effect of offering
an X-lock can be depicted in the scenario in Fig. 4. A similar scenario in Fig. 6 illustrates the downgrading
of an X-lock to mode S. (Downgrading of S-locks are handled in an analogous manner). The
essential issue observed in this example is that only S-locks may be granted within R's hierarchy, i.e. no
X-sphere is established when the lock is downgraded to mode S.
Given these extended locking rules, the problem described in the design environment example above
can be solved very easily (see Fig. 5). After having generated object O, transaction B downgrades the
X-lock it holds on O to mode S. Since it then holds the lock in S-mode, C and D are prevented from holding
the lock in X-mode which guarantees that they cannot change O nor block each other. Note that since
retains the lock in X-mode after downgrading the lock, transactions A and E cannot hold the lock in
any mode, i.e. A and E can neither read nor write O. After commit of C and D, B can upgrade the lock
once again.
4.2 Correctness Concerns
As stated in Section 3.1, upward lock inheritance at commit ensures that subtransactions act as firewalls
in case of a failure and that all subtransactions of a TL-transaction remain isolated. Since we have proposed
the concept of controlled downward inheritance, we like to discuss the impact of this concept on
the correctness of concurrent executions.
TL-transactions are serializable because
- each transaction in a TL-transaction tree locks each data object before accessing it
- all locks held by transactions in a transaction tree are not released before the TL-transaction commits.
This locking protocol corresponds to strict 2-phase locking for TL-transactions. It determines their serialization
order by the time of their commit, as it holds for single-level transactions.
Figure 6: Controlled Downward Inheritance (states shown: R holding the X-lock; after R downgraded the X- to an S-lock, retaining X and holding S; after U and T acquired S-locks; after EOT(T); the S-sphere is indicated)
Now let us discuss the visibility of data changes and their induced dependencies within a transaction
tree. In Moss's nested transaction model the following holds:
. A transaction may see changes only of those transactions that are committed and it depends
on 5 . We say a transaction T depends on a transaction T', if undoing the effects of T' causes
the abortion of T.
. Once a transaction T has seen a state of an object, this state will never be seen or changed
by another transaction before T commits.
In contrast to that, our model allows for controlled downward inheritance which makes uncommitted data
available to inferiors. For this reason, we observe the following properties.
. A transaction T may see changes only of
- those transactions that are committed and T depends upon, and
- those transactions that are superiors of T.
. An object state seen by a transaction T may be changed by inferiors of T.
A transaction may see changes of superiors only if these transactions have downgraded the corresponding
locks explicitly. That is, whether or not a transaction may see the effects of superiors can be controlled
by the application logic. In terms of failures a transaction seeing changes of superiors causes no
problems because, if a superior aborts, this transaction is aborted also. Note that once a transaction has
seen an object, the object cannot be changed by a superior (once more) before this transaction commits.
If a transaction downgrades a lock, it must be aware of the consequences of the reduced isolation.
Downgrading from S- to NL-mode may cause unrepeatable reads from the downgrading transaction's
point of view. With X-locks two cases must be considered: Downgrading from X- to NL-mode and from
X- to S-mode. In the first case, from the downgrader's point of view lost updates and unrepeatable reads
are possible in principle. However, a much more flexible cooperation is enabled where the correctness
of execution has to be enforced by application level protocols. In CSCW-like applications, it is even conceivable
that this kind of high-level control is based on so-called social protocols between end users. In
the latter case, which prevents the inferiors of the downgrading transaction from keeping the downgraded
lock in X-mode, neither unrepeatable reads nor lost updates can occur.
An important question is whether the firewall property of nested transactions is in some way affected by
the downgrading mechanism. A transaction downgrading a lock does not become dependent on the outcome
of its inferiors. When a child fails, its updates possibly on objects with downgraded locks are rolled
back. Therefore, the downgrading transaction is not affected and may create another child to do the
5. Remember the effects of a committed (sub)transaction only become permanent when its top-level
transaction commits. The reason why a transaction may only see data of committed transactions it depends
upon is to ensure that it is aborted when the effects of one of these transactions is wiped out due
to a failure. In the transaction hierarchy depicted in Fig. 5a, transaction E may see changes from trans-action
C not before B has committed. When B commits, E becomes dependent on C as a failure wiping
out effects of C causes A and of course E to be aborted.
work.
In summary, the fact that a transaction may see changes from superiors causes no problems nor is the
firewall property affected by the downgrading mechanism. Lost updates may only happen when locks
are downgraded from X- to NL-mode. In this case, which provides the highest degree of flexibility in
terms of cooperation, application-level concurrency control mechanisms are needed to ensure the required
form of correctness. Since the application itself can decide how and when to use the downgrading
mechanism, it can adapt the level of system supported isolation to its cooperation needs and its facilities
for application-specific concurrency control.
4.3 Generalization of Lock Modes
Thus far, we have described and refined a concurrency control scheme for S-X locks on 'flat', non-overlapping
objects (e.g. tuples or relations); in particular, we have developed a mechanism for controlled
downward inheritance of locks in nested transactions. Closer consideration reveals that the lock modes
(comprising only S and X so far) may be enriched by special modes to better adapt concurrency control
to access patterns in practical applications. For example, tailored lock modes for frequent kinds of object
access could be helpful to more effectively exploit the inherent parallelism of concurrent transactions.
Furthermore, the use of semantic knowledge could greatly optimize some contention patterns of data
access. However, this requires enhanced lock modes; in particular, it presupposes the ability to introduce
user-defined lock modes, e.g. see [Allchin83, Schwarz84].
Such a refinement of lock modes may be easily integrated into our model presented so far. Assume that
the data model remains unchanged. Then, our locking rules stated in Sec. 4.1 for S-X schemes can be
generalized for basic and/or user-defined lock modes as follows:
GR1: Transaction T may acquire a lock in mode M or upgrade a lock it holds to mode M if
. no other transaction holds the lock in a mode that conflicts with M, and
. all transactions that retain the lock in a mode conflicting with M are ancestors of T.
GR2: When a subtransaction T commits, the parent of T inherits T's (held and retained) locks. After that,
the parent retains the locks in the same mode as T held or retained them before.
GR3: When a top-level transaction commits, it releases all locks it holds or retains.
GR4: When a transaction aborts, it releases all locks it holds or retains. If any of its superiors hold or
retain any of these locks, they continue to do so.
GR5: A transaction T holding a lock in mode M can downgrade the lock to a (less restrictive) mode M'.
After downgrading the lock, T retains it in mode M.
The locking rules stated above allow upward as well as controlled downward inheritance for arbitrary
lock modes. If rule GR5 were omitted, we would get a generalization of Moss's scheme which only provides
for upward inheritance.
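Rule GR1 merely replaces the fixed S/X conflict test by a lookup in a compatibility matrix. A minimal sketch is given below; the matrix shown is just the plain S/X case, and a richer or user-defined set of modes only has to extend the COMPAT table.
COMPAT = {("S", "S"): True, ("S", "X"): False,     # (granted mode, requested mode)
          ("X", "S"): False, ("X", "X"): False}

def may_acquire_general(t, mode, holders, retainers, ancestors):
    # GR1: no conflicting holder, and every conflicting retainer is an ancestor of t
    if any(not COMPAT[(m, mode)] for h, m in holders.items() if h != t):
        return False
    return all(r in ancestors(t) for r, m in retainers.items() if not COMPAT[(m, mode)])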
4.4 Use of Hierarchical Locks in Nested Transactions
Let us now reconsider our underlying data model which has some serious drawbacks for realistic concurrency
control situations. In particular, the flat object structure which requires disjoint lockable units of
a given granule makes it impractical for large databases when small granules are needed for some transactions
and larger ones for others. To improve selective access to data granules of varying sizes, hierarchical
locking schemes have been proposed. In our context, hierarchically structured objects introduce
a certain complexity, as we then have to deal with orthogonal transaction and data hierarchies.
As mentioned earlier, locking of disjoint partitions of a given size is insufficient for performance reasons
in most applications. The choice of lockable units affects locking overhead of a transaction (space for
lock control blocks, time to request and release locks) as well as concurrency among transactions.
Hence, it implies a dichotomy of increased concurrency using fine lockable units and higher cost for lock
management. While small granules are appropriate for 'simple' transactions accessing a few tuples, they
are intolerable (and hard to implement) for 'complex' transactions accessing a large fraction of the data-
base. Assume, for example, a sequential scan of a relation with 10^6 tuples; having only tuples as lockable
units would require 10^6 consecutive lock requests and storing of just as many lock control blocks
(of course, in main memory for performance reasons). Hence, coarser granularity locks are sometimes
more natural and efficient, e.g. when sorting or reorganizing a relation.
These arguments should convince every DBMS designer that an object hierarchy for locking purposes
has to be provided. In fact, every 'practical' DBMS supports such a hierarchy of typically 2, 3, or 4 levels;
e.g. System R has a generic 4-level hierarchy: database - segment - relation - tuple [Astrahan76].
An appropriate hierarchical locking scheme was proposed for flat transactions in [Gray76]. Two key
ideas allowed for the design of a scheme that could be adapted to a transaction's needs for either locking
a few items using a fine lockable unit or locking larger sets of items with larger lock granules:
. A node R in a hierarchy can be locked explicitly. As a result, its entire subtree is implicitly locked, too.
. A transaction locking part of the hierarchy places 'Intention mode' locks along the path to R to avoid
a situation where an ancestor node of R is locked in an incompatible mode as compared to R. I-locks
merely serve as place holders, signalling the fact that locking of a subtree is done at a lower level of
the hierarchy, thereby preventing incompatible locks from being granted for the corresponding nodes.
Besides the known modes S and X, an Intention Share mode (IS) and an Intention eXclusive mode (IX)
were introduced to express a transaction's intent to read and to update or read an object at a lower level
of the hierarchy, respectively. A further refinement is the Share and Intention eXclusive mode (SIX)
which grants an S-lock for the entire subtree to a transaction. In addition, it indicates the transaction's
intention to request X-locks explicitly for 'finer' object granules later on. The following table taken from
[Gray76] shows the compatibilities among request/lock modes which derive from these semantics:
For a comprehensive discussion of the precise effects of the lock modes and their compatibilities we refer
the reader to the seminal work of J. Gray [Gray78].
Basic Locking Rules for Object Hierarchies
We have now introduced the essential ingredients of both generalized locking rules for nested transactions
and appropriate lock modes for an object hierarchy. How can we combine both together? We start
with the basic concurrency control model where only upward inheritance is allowed. For the transaction
hierarchy, our generalized rules GR1 - GR4 apply. Furthermore, when acquiring a lock on an object O,
we have to consider additional rules resulting from the object hierarchies.
As opposed to flat objects, according to Gray et al. an approach for controlling concurrent access to an
object hierarchy has to obey the following rules:
1. Instead of locking an object directly, every transaction has to observe a strict hierarchical protocol
requesting appropriate locks from root to leaf in the object hierarchy - in the following denoted as
root-to-leaf rule. A lock is granted at each level according to the compatibilities expressed in the
above table. As soon as a lock is obtained, a transaction may request another appropriate lock at the
same or at the next lower level.
2. Level-to-level transitions should obey the following constraints called level-to-level rules:
. IS held at a node only allows IS and S to be requested on descendant nodes.
. IX granted for a node carries the privilege to request IS, IX, S, SIX and X at the next level.
. S and X allow read and write access (respectively) to all descendants of the node without further
locking.
. SIX carries the privileges of S and IX; hence, while S mode allows read only access to all de-
scendants, write access at lower levels may be requested by IX or X at the next level.
Compatibility        Mode of lock
Mode of request      NL    IS    IX    S     SIX   X
NL                   yes   yes   yes   yes   yes   yes
IS                   yes   yes   yes   yes   yes   no
IX                   yes   yes   yes   no    no    no
S                    yes   yes   no    yes   no    no
SIX                  yes   yes   no    no    no    no
X                    yes   no    no    no    no    no
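The matrix can also be encoded directly as data; the following small sketch (illustrative only) answers whether a requested mode is compatible with the set of modes already granted on a node.
MODES = ["NL", "IS", "IX", "S", "SIX", "X"]
COMPATIBLE = {                 # request mode -> granted modes it tolerates
    "NL":  set(MODES),
    "IS":  {"NL", "IS", "IX", "S", "SIX"},
    "IX":  {"NL", "IS", "IX"},
    "S":   {"NL", "IS", "S"},
    "SIX": {"NL", "IS"},
    "X":   {"NL"},
}

def grantable(request, granted_modes):
    return all(g in COMPATIBLE[request] for g in granted_modes)

# e.g. an SIX request succeeds while only IS locks are granted on the node
assert grantable("SIX", {"IS"}) and not grantable("SIX", {"IX"})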
As far as acquiring locks is concerned, the rules obtained for the transaction hierarchy and the object
hierarchy must be satisfied independently. Following the root-to-leaf rule, transactions must request their
locks from root to leaf in the object hierarchy. Whether or not a lock for an object may be granted in a
particular mode is decided according to the level-to-level rules, the generalized locking rules (GR1 -
GR4), and the lock mode compatibilities depicted in the table above. Since the rules introduced for the
object hierarchy are independent of the underlying transaction model and the rules for both hierarchies
are applied independently, our protocol and that proposed in [Gray76] for flat transactions only differ in
the rules implied by the transaction model.
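Combining both rule sets, a request for mode S or X on some object translates into a root-to-leaf sequence of intention-lock requests along the object hierarchy. The following sketch only illustrates this path discipline; grantable is the compatibility test from the previous sketch, and all other names are assumptions.
def lock_path(path, leaf_mode, granted, grantable):
    # root-to-leaf rule: intention locks on all ancestors of the target object,
    # leaf_mode ('S' or 'X') on the target itself, as the level-to-level rules demand;
    # granted maps each node to the set of modes currently granted to other transactions
    intention = {"S": "IS", "X": "IX"}[leaf_mode]
    plan = [(node, intention) for node in path[:-1]] + [(path[-1], leaf_mode)]
    return all(grantable(mode, granted.get(node, set())) for node, mode in plan)

# e.g. lock_path(["DB", "Seg", "R", "t1"], "X", {"R": {"IS"}}, grantable) succeeds,
# whereas lock_path(["DB", "Seg", "R"], "X", {"R": {"IS"}}, grantable) is blocked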
An example may clarify the issues involved in lock retainment for object hierarchies. In the following sce-
nario, transaction T1 passes its hierarchical locks for X-access on relation R to its parent P at EOT(T1). After having
retained the locks, P and its inferiors Ti are qualified to acquire read or write access on R or to tuples
of R. The following table shows the locks of T2 and T3 for obtaining write and read access on tuples of R.
Object hierarchy    T1 (before EOT(T1))    P (after EOT(T1))    T2           T3
Database DB         h:IX                   r:IX                 h:IX         h:IS
Segment S           h:IX                   r:IX                 h:IX         h:IS
Relation R          h:X                    r:X                  h:IX         h:IS
Tuples ti                                                       h:X on t1    h:S on t3
                                                                h:X on t2    h:S on t4
Upgrading and Downgrading Hierarchical Locks
Although we have succeeded in tying together both hierarchy types (transaction and object hierarchies),
we have so far obtained only a more economical and efficient solution of the concurrency problem as
compared to the basic approach in Sec. 3.1. Since we cannot make a transaction's objects available to
its inferiors, all arguments discussed earlier apply. Therefore, it is desirable to enable controlled downward
inheritance in the presence of object hierarchies, too.
Assume, for example, a transaction P holds an SIX-lock on a relation R and wants to permit write access
to tuples of R by its inferiors. Using the same kind of inheritance mechanism as in Sec. 4.1, P has to
downgrade its lock on the object to an appropriate mode. To do so, P retains the SIX-lock on R (r:SIX)
and holds R in IX-mode (h:IX). Note that r:SIX only prevents incompatible locks on R from being granted
to non-descendants, but not to inferiors.
Let us examine whether such a straightforward approach may be applied. In the scenario depicted by
the following table, P holds R in SIX-mode and some tuples of R in X-mode. We assume that P downgrades
the SIX-lock on R. Requesting a lock by an inferior T implies that T obeys the root-to-leaf and the
level-to-level rules. Hence, as soon as T has acquired appropriate locks for the ancestors of R, it can
request a compatible lock for R. The presented scenario is meant to serve as a counter-example for ar-
bitrary inheritance of hierarchical locks and aims at clarifying a new issue: Inheritance of objects in data
hierarchies. It shows that P holds some locks at the tuple level while it has downgraded the corresponding
lock at the relation level to NL-mode, i.e. without particular protection.
In the illustrated situation, T acquires an SIX-lock on R giving read access to all tuples of R. On the other
hand, P has still some tuples locked in X-mode, namely t 1 and t 2 . These exclusively locked tuples would
be read by T, since read access to tuples of R need not be checked by T anymore. Even worse, write-write
interference on the same tuples could occur, if T had locked R in X-mode. Of course, the sketched
examples may cause severe consistency problems. These anomalies would not occur if the lock on relation
R together with all locks on its tuples had been downgraded. Control given by the hold-mode alone
would not guarantee the desired consistency as exemplified by only downgrading R to S-mode.
The key observation in the example above is that downgrading a lock without considering the whole object
hierarchy may lead to inconsistencies. The same can be shown for upgrading locks in object hierar-
chies. For example, if a transaction has locked a database in IS-mode and upgrades an S-lock it holds
on a segment of this database to X-mode, similar inconsistencies may occur. Obviously, to prevent violations
of the level-to-level rules, upgrading or downgrading of a lock may enforce upgrade or downgrade
operations on other locks held in the object hierarchy.
When a transaction T upgrades a lock held on an object O within an object hierarchy, it might be necessary
to also upgrade locks of T held on superior objects of O in order to satisfy the level-to-level rules.
For example, if T holds an IS-, IS-, and S-lock on a database, a segment of this database and a relation
of this segment, respectively, the level-to-level rules enforce the upgrading of both IS-locks to IX-mode
before the relation lock can be upgraded to X-mode. Since upgrading the locks on an object and superior
objects are not performed in an atomic manner, upgrading should be done in a root-to-leaf direction. Of
course, an upgrade operation can only take place if the generalized locking rules GR1 to GR4 are ful-
filled. Otherwise, it is blocked which may cause deadlocks to occur (see Sec. 5).
Since upgrading a lock on an object O converts the mode of the locks to a more restrictive one, the level-
to-level rules are not violated as far as locks on inferior objects of O are concerned. However, due to the
upgrade operation locks held by the upgrading transaction on inferior objects of O may become useless.
For example, when a lock on relation R is upgraded from SIX- to X-mode (lock escalation [Bernstein87]),
all locks held by the upgrading transaction on individual tuples of R are not needed anymore. A clean
approach to handling those useless locks is to release them as part of the upgrade operation. An actual
implementation may optimize this cleanup process using pragmatic arguments (e.g. see System R
[Astrahan76]).
Object hierarchy    P            P (after downgrade)    T (using downgraded lock)
Database DB         h:IX         h:IX                   h:IX
Segment S           h:IX         h:IX                   h:IX
Relation R          h:SIX        r:SIX/-                h:SIX
Tuples ti           h:X on t1    h:X on t1              h:X on t3
                    h:X on t2    h:X on t2              h:X on t4
Downgrading a lock held by a transaction T on an object O is confined to the subhierarchy having O as
the root object. Superiors of O in the object hierarchy are not involved since downgrading cannot violate
the level-to-level rules as far as superiors of O are concerned. However, with respect to the objects in
its subhierarchy, downgrading the lock on O may cause a violation of the level-to-level rules: if T holds
a lock on a subobject O' of O, then after downgrading, the mode of the lock held on O' may violate the
level-to-level rules. For example, assume T holds an IX-lock on a relation R and an X-lock on a tuple t
of R. If T downgrades its lock on R from IX to IS, then T's X-lock on t is not consistent anymore with the
lock mode of its parent object. As a consequence, downgrading the lock on O may require downgrading
locks held by T on objects in the subhierarchy of O, such that the level-to-level rules are satisfied. In the
example above, this would require downgrading T's lock on tuple t to S- or NL-mode. The following table,
derived from the level-to-level rules, lists for each possible mode to which the lock on O can be down-
graded, the modes in which T can hold locks on the objects in the subhierarchy of O without violating
the level-to-level rules. For example, if the lock is downgraded to IS-mode, T can hold subobjects of O
either in NL-, IS- or S-mode. If subobjects are held in more restrictive modes, the locks on these objects
must be downgraded to one of the listed modes. Note, since downgrading an entire subhierarchy cannot
be done atomically, downgrading should be performed in a leaf-to-root direction.
By observing these rules, consistency-preserving downward inheritance of locks may be easily achieved
by P in the previous example by downgrading the tuples t 1 and t 2 before downgrading relation R. Control
of lock usage is then possible by downgrading to the appropriate modes. In the following scenario, the
locks in the subhierarchy of relation R have been downgraded to different modes which allows for selective
control of access to R's subobjects.
Object O of transaction T     Consistent modes for locks of T
downgraded to mode            on subobjects of O
IS                            NL, IS, S
IX                            NL, IS, IX, SIX, S, X
Object hierarchy    P            P (after downgrade)    T (using downgraded lock)
Database DB         h:IX         h:IX                   h:IX
Segment S           h:IX         h:IX                   h:IX
Relation R          h:SIX        r:SIX/h:IX             h:SIX
Tuples ti           h:X on t1    r:X/h:S on t1          h:S on t1
                    h:X on t2    r:X/- on t2            h:X on t2
Downgrade of an intention mode (IS, IX, SIX) implies subsequent downgrades of locks on subobjects in
order to satisfy the level-to-level rules. This, however, can be avoided by restricting downgrade operations
to S- and X-locks. If T holds a lock on object O in S- or X-mode, then the entire subhierarchy of O
is locked implicitly for T, that is, T does not hold any locks on subobjects of O. Hence, downgrade of O
does not involve downgrading locks on lower levels of O's subhierarchy.
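A leaf-to-root downgrade of a subhierarchy might be sketched as follows; the sketch is illustrative only, abbreviates the table of consistent subobject modes to the two rows given above, and simply weakens offending subobject locks to NL.
ALLOWED_BELOW = {"IS": {"NL", "IS", "S"},
                 "IX": {"NL", "IS", "IX", "SIX", "S", "X"}}   # abbreviated table

def downgrade_subtree(obj, target_mode, held, children):
    # held: object -> mode this transaction currently holds on it
    # children: object -> list of its subobjects
    allowed = ALLOWED_BELOW.get(target_mode, {"NL"})
    for sub in children.get(obj, []):
        if held.get(sub, "NL") not in allowed:
            downgrade_subtree(sub, "NL", held, children)      # fix subobjects first
    held[obj] = target_mode

held = {"R": "SIX", "t1": "X"}
downgrade_subtree("R", "IX", held, {"R": ["t1", "t2"]})
assert held == {"R": "IX", "t1": "X"}     # IX still permits an explicit X-lock below
downgrade_subtree("R", "IS", held, {"R": ["t1", "t2"]})
assert held == {"R": "IS", "t1": "NL"}    # IS forces the tuple lock to be weakened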
Let us summarize our findings for controlled downward inheritance of locks in a data hierarchy. In gen-
eral, downgrading of entire subtrees is necessary for hierarchical objects to guarantee consistency of
downward inheritance in nested transactions. That is, if an M-lock held by a transaction T on an object
O is downgraded, it might be necessary to downgrade locks held by T on inferiors of O in order to satisfy
the level-to-level rules. If downgrading is allowed only on X- and S-locks, then the downgrading of a lock
never involves locks held on lower levels of the object hierarchy, which simplifies the downgrade mechanism
substantially.
5. Deadlock Detection in Nested Transactions
Lock protocols are pessimistic, that is, they block lock requests for data currently granted to another
transaction in a conflicting mode, and are, therefore, not immune to deadlocks. Deadlocks may occur
among transactions belonging to various TL-transactions and even among subtransactions within a single
transaction hierarchy. For deadlock detection, we mainly follow the basic approach sketched in
[Moss85], which allows existing deadlocks to be identified. In addition, we propose the maintenance of further
information (waits-for-retained-locks relation) to detect opening-up deadlocks as early as possible.
Deadlocks in nested transactions can be resolved by the concepts known for single-level transactions
extended by some mechanisms tailored to the properties of the nested structure [Moss85, Rukov91].
When a transaction acquires a lock for some data which is incompatible with a lock held by another
transaction, the requesting transaction is deactivated: a direct wait for the lock holder occurs. All direct
waits are maintained in a waits-for-lock relation in order to detect deadlocks. Using this waits-for-lock
relation, deadlock detection can be performed immediately when a transaction is blocked or after some
elapsed time. A deadlock exists if and only if a cycle is found in the waits-for-lock relation. For single-level
transactions, such a cycle is composed of direct waits (or waits for lock) only.
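As a minimal sketch (illustrative, not taken from [Moss85] or from our implementation), such a deadlock test amounts to searching for a cycle, for example by a depth-first search for a back edge; the same search can later be run over the union of the waits-for-lock and waits-for-commit relations introduced below. The dictionary representation and names are assumptions.

def has_cycle(waits_for):
    # waits_for maps each waiting transaction to the set of transactions
    # it waits for; a cycle means a deadlock exists
    WHITE, GREY, BLACK = 0, 1, 2
    color = {t: WHITE for t in waits_for}

    def dfs(t):
        color[t] = GREY
        for u in waits_for.get(t, ()):
            c = color.get(u, WHITE)
            if c == GREY:               # back edge: a waiting cycle
                return True
            if c == WHITE and dfs(u):
                return True
        color[t] = BLACK
        return False

    return any(color[t] == WHITE and dfs(t) for t in list(waits_for))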
As we have seen in Sec. 3.1, nested transactions have an inner structure which determines along which
paths locks are inherited and whether retained locks can be acquired. Assume a subtransaction R waits
for a lock held by a subtransaction T. After commit of T, its locks are inherited and retained by its parent
transaction P (see Fig. 7). Now, lock requests from transactions in P's (X-or S-) sphere can be served.
Transaction R outside P's sphere, however, cannot acquire retained locks; for this reason, it has to wait
for the retained locks of P. Such waits for retained locks are indirect waits. They propagate along the
ancestor hierarchy of P. In the following, we will introduce two different waiting relationships.
Waits-for-retained-locks: A lock requestor R directly waits for a lock holder T if the mode of the requested
lock is in conflict with the lock mode held by T. Let Q be the highest ancestor of T that is not an ancestor
of R. Then, R indirectly waits for all ancestors of T up to Q until they commit. Those wait relationships
are called waits-for-retained-lock.
This wait rule implies that if requestor R is not in the same TL-transaction as T, then R must wait for the retained
locks of T's TL-transaction which are released at its commit.
Waits-for-commit: Since a waiting lock requestor R cannot proceed with its work, all ancestors of R may
have to wait. In Fig. 7, U cannot commit until R does and U's parent cannot commit until U does, that is,
all ancestors of R cannot commit before R does. We denote this kind of wait relationship by waits-for-
commit, which can be represented by the parent-child relationships, as outlined in Fig. 7.
Due to this dependency, an ancestor of R may wait for all transactions for which R directly or indirectly
waits. Of course, waiting may be broken up as soon as T or one of its ancestors abort. As illustrated in
Fig. 7, R directly waits for T and indirectly waits (for retained locks) for P, ..., Q. Furthermore, since S,
..., U wait for commit of R, they also wait for T, P, ..., Q.
In Sec. 4.4, hierarchical locking was applied to nested transactions. The key observation was
that object and transaction hierarchies are orthogonal. As a consequence, the use of hierarchically composed
objects adds no further aspects to deadlock detection. As illustrated by Fig. 7, waits-for
relations occur among transactions; thus, the rules of the hierarchical locking protocol do not interfere
with the waits-for-lock and waits-for-commit relationships as long as the root-to-leaf and the level-to-level
rules are observed.
Fig. 7: Lock and commit waits (waits-for-lock and waits-for-commit relationships)
5.1 Detection of existing deadlocks
In order to handle deadlock detection in nested transactions successfully, we have to combine the various
waits-for relations. Considering the waits-for-lock relation, only direct-wait deadlocks can be found,
as indicated in Fig. 8a, whereas other kinds of deadlocks cannot be detected. This is true no matter
whether a deadlock occurs within a TL-transaction or among subtransactions of various TL-transactions.
The cycle in Fig. 8a consists of direct waits only, that is, all transactions in the cycle cannot proceed any-
more. In Fig. 8b, however, another kind of cycle is encountered; this situation does not mean that
progress has stopped everywhere in the cycle. T waits for a lock of Q; Q, ..., P may proceed for some
time, but cannot commit without aborting T. Since T must be rolled back anyway, the best decision is to
detect and resolve this ancestor-descendant deadlock immediately. A request of T causing a lock wait
on its ancestor Q can be detected only by using the combined waits-for-lock and waits-for-commit relation
information.
Situations such as that illustrated by Fig. 8b frequently arise when descendants refer to exclusively
used data in an uncoordinated way. Controlled downgrading of locks, however, provides a mechanism
to avoid such cycles, that is, application knowledge is applied to reduce the possibility of a deadlock
involving lock and commit waits. Coordinated work requires that a parent P downgrade the
lock on an object O currently granted to P before it creates a child T to do some work on O. Then, T
can acquire the lock for O in a non-conflicting mode without causing a blocking situation. Downgrading
enables deadlock-free cooperation, but cannot enforce it; if T requests the lock in a mode more restrictive
than the offered one, a deadlock may arise.
Upgrading a lock may lead to wait situations and therefore to deadlocks as they occur in single-level
transactions. Assume in Fig.7 that T and R already hold an S-lock on object O. Now, if R upgrades the
lock to mode X, R has to wait until a direct ancestor of R retains the lock or, if T and R are not in the
same TL-transaction, until T's TL-transaction has committed. Hence, our wait rule applies to lock up-
grades, too.
Fig. 8: Deadlock situations: a) direct-wait deadlock; b) ancestor-descendant deadlock
5.2 Detection of opening-up deadlocks
The combined use of waits-for-lock and waits-for-commit relations turned out to be sufficient for nested
transactions to detect existing cycles embodying direct-wait or ancestor-descendant deadlocks. Since a
waits-for-lock relationship is only represented between the requestor and the holder of a lock (or, after
commit of the lock holder, the current retainer), waits-for-retained-lock relationships between the requestor
and all ancestors of the holder (retainer) are not explicitly established in the waits-for informa-
tion. For nested transactions, however, these waits-for-retained-lock relationships should be taken into
account to provide for early deadlock detection. This may save a lot of useless work as shown in the
scenario of Fig. 9.
Fig. 9 represents a deadlock-free situation, since transactions T, D, and possibly others can proceed
with their work. R waits for D, and G waits for T, to obtain the requested locks. All other waits indicated are waits-
for-commit. R indirectly waits for the oldest ancestor of D that is not an ancestor of R, which is A. On the
other hand, G indirectly waits for V. If we evaluate this information (R - A, G - V), we can immediately
detect that a cycle is opening up.
An optimistic attitude would not care about such an opening-up deadlock, since an abort of any transaction
involved would eventually avoid the actual occurrence of the deadlock. For example, the abort of
any transaction in Fig.9 resolves the opening-up deadlock before all progress ceases within the TL-
transactions V and A. However, transaction aborts are regarded as exceptions and should not be taken
into account as a remedy to break opening-up deadlock cycles.
In contrast, a pessimistic approach usually saves work. If we use the transitive waits-for-commit and
waits-for-retained-lock relationships of all ancestors, e.g., of V on R and A on G as well as R on A and
G on V, we can construct a direct (future) cycle between V and A and can roll back either V or A. How-
ever, deadlock detection and resolution at the level of the highest non-common ancestors of the transactions
which have caused the conflict may not be appropriate. Deadlock resolution is typically based
on transaction rollback and should minimize the data granules affected and the work lost.
Fig. 9: Opening-up deadlock among nested transactions
For this reason, special measures should be used to determine an opening-up deadlock as early as possible
and at the suitable level in the nested transaction hierarchies. In addition to waits-for-lock and waits-
for-commit relations, the waits-for-retained-lock relationships have to be included in the waits-for infor-
mation. For example in Fig. 9, the relationships R - C, ..., R - B, R - A, as well as G - P, ..., G - Q,
have to be represented in order to successfully search for opening-up cycles. Once an opening-
up deadlock is detected, all transactions involved have to be considered to determine a low-cost victim
for rollback. Since rollback of a parent transaction implies rollback of all its inferiors (committed and un-
committed), rollback of a child transaction is always cheaper than that of the corresponding parent trans-
action. For this reason, rollback of a lock holder (retainer) or a lock requestor is always cheaper than
their ancestors in a potential cycle; hence, the set of transactions to choose a rollback victim from is the
set of lock holders (retainers) and lock requestors. In Fig. 9, this set of candidates is D,T and R,G, respectively
Note in contrast to [Moss85], in our transaction model these transactions must not be leaves in the current
transaction tree. However, the same methods and cost measures could be applied for breaking up
the cycle, in principle. Since candidate transactions can occur arbitrarily in the transaction hierarchy, resource
estimation involving the evaluation of subtrees may become much more complicated.
To summarize, waits-for-retained-lock relationships are evaluated only to detect opening-up deadlocks
as early as possible. Since the candidate transactions for breaking up the cycle are lock holders (retain-
er) and lock requestors, the mechanisms for deadlock resolution can be derived from those provided for
single-level transactions.
Early detection of opening-up deadlocks saves transaction work. However, as discussed above, the additional
representation and management of waits-for-retained-lock relationships require some overhead.
If deadlocks are infrequent, a particular system implementation has to take this trade-off into account.
6. Comparison of some System Implementations
In the following, we will compare some systems implementing nested transactions with regard to the degree
of parallelism supported, the applied concurrency control schemes and the way deadlocks are
treated. In particular, we will consider Argus [Liskov88, Liskov87], Camelot [Spector88, EpingerS91],
Clouds [Ahamad87, Dasgupta89], Eden [Almes85, Pu85] and LOCUS [Mueller83, Weinstein85]. The table
below summarizes the results.
While Camelot, Clouds, Eden and LOCUS allow for parent/child as well as sibling parallelism, ARGUS
does not permit a parent transaction to run in parallel with its children, resulting in simpler locking rules
[Liskov87]. All the systems considered are based on two-phase locking, however only Clouds and LOCUS
support downward inheritance. While the downward inheritance scheme in LOCUS requires a lock
holder to explicitly state when downward inheritance may potentially take place, transactions in Clouds
are allowed to share the locks of their ancestors in a totally uncontrolled manner [Allchin83]. When a
transaction closes a file in LOCUS, the lock held on this file becomes a retained lock. Although it supports
downward inheritance by means of an explicit offering mechanism, the LOCUS scheme is uncontrolled
in the sense that a lock holder offering a lock cannot control in which mode its descendants may
acquire this lock. None of the five systems supports controlled downward inheritance, nor do they support
object hierarchies. Argus, Camelot and Clouds implement deadlock resolution based on a timeout
mechanism, whereas Eden applies a wound-wait deadlock avoidance scheme [Rosenkrantz78]. LOCUS
neither performs deadlock detection, nor implements an avoidance scheme. However, it provides
an interface to operating system data permitting a system process to detect deadlock by constructing a
wait-for-graph. In this manner, different deadlock resolution strategies may be implemented
[Weinstein85].
7. Conclusions
We have presented an investigation of concurrency control in nested transactions. The focus of our paper
has primarily been on achieving a high degree of intra-transaction parallelism within nested transactions
by using locking protocols.
System    Parent/Child   Sibling        Downward      Controlled      Object      Deadlock Avoidance
          Parallelism    Parallelism    Inheritance   Downward        Hierarchy   and Detection
                                                      Inheritance     Support
Argus     no             yes            no            no              no          timeout based resolution
Camelot   yes            yes            no            no              no          timeout based resolution
Clouds    yes            yes            yes           no              no          timeout based resolution
Eden      yes            yes            no            no              no          wound-wait avoidance scheme
LOCUS     yes            yes            yes           no              no          neither resolution nor avoidance
Our initial concurrency control mechanism for nested transactions was based on S-X locking protocols
on flat objects which seriously limited parent/child parallelism. Therefore, the concept of downward inheritance
was introduced and refined to controlled downward inheritance in order to enable a transaction
to restrict the access mode of its inferiors for an object. Controlled downward inheritance turned out to
be a useful concept for achieving safe parent/child cooperation on data structures to be read or written
in a shared manner.
Practical applications sometimes have a need for specialized lock modes as well as multi-level object
hierarchies offering efficient ways to lock granules of varying sizes. Therefore, we have generalized the
locking rules for nested transactions to be applied for richer access modes on flat objects. Most impor-
tantly, this kind of generalization was a prerequisite for the integration of transaction and object hierar-
chies, since the appropriate use of object hierarchies implied suitable access modes beyond S- and X-
locks. As a result, we could combine both types of hierarchies in a general concurrency control model
and then could enhance the model again using the concept of controlled downward inheritance, for the
even richer set of access modes.
Finally, we studied the principles of deadlock detection in nested transactions. In contrast to single-level
transactions where the waits-for-lock relation is sufficient to search for waiting cycles among transac-
tions, detection of all deadlocks in nested transactions further requires the maintenance of the waits-for-
commit relation and its combined use with the waits-for-lock relation. If deadlocks are frequently antici-
pated, opening-up deadlocks, which may span transaction trees, should be detected as early as possible
to save transaction work. For this purpose, we have additionally introduced the waits-for-retained-lock
relation.
Acknowledgements
C. Mohan shared his great knowledge and experience on concurrency control with us. We would like to
thank him for his contributions which led to essential simplifications and clarifications of the concepts
proposed in the paper. We would also like to thank J. Palmer and P. Schwarz and the referees for their
helpful comments on this paper.
--R
The Eden system: A technical review.
Fault tolerant computing in object based distributed systems.
An Architecture for Decentralized Systems.
System R: Relational Approach to Database Management.
A Model for Concurrency in Nested Trans-action Systems
Concurrency Control in Distributed Database Sys- tems
Concurrency Control and Recovery in Database Systems
A Model of CAD Transactions.
The Clouds distributed operating system.
The Notions of Consistency and Predicate Locks in a Database System.
Camelot and Avalon - A Distributed Transaction Facility
Camelot And Avalon
"Hot Spots"
Notes on Database Operating Systems.
The Transaction Concept: Virtues and Limitations.
Granularity of Locks and Degrees of Consistency in a Shared Data Base.
Principles of Transaction-Oriented Database Recovery
Concepts for Transaction Recovery in Nested Transac- tions
The EDEN Transaction Based File System.
Nested Transactions for Engineering Design Databases.
Implementation of ARGUS
The ARGUS Language and System.
Distributed programming in ARGUS.
Transaction Management in R
Nested Transactions: An Approach to Reliable Distributed Computing.
Nested Transaction Mechanism for LOCUS.
Nested Transactions for General Objects: The Eden Implementa- tion
A Framework for Workload Allocation in Distributed Transaction Processing Systems.
Concurrency on High-Traffic Data Elements
ARIES/NT: A Recovery Method Based on Write-Ahead Logging for Nested Transactions
System Level Concurrency Control for Distributed Database Systems.
Hierarchical Deadlock Detection for Nested Transactions.
Synchronizing Shared Abstract Types.
Transactions: A Construct for Reliable Distributed Computing.
A Flexible
Nested Transactions with Multiple Commit Points: An Approach to the Structure of Advanced Database Applications.
A Theoretical Foundation of Multi-Level Concurrency Control
Principles and Realization Strategies of Multilevel Transaction Manage- ment
Transactions and Synchronization in a Distributed Operating System.
--TR
Principles of transaction-oriented database recovery
Synchronizing shared abstract types
Nested transactions: an approach to reliable distributed computing
Concurrency control and recovery in database systems
A theoretical foundation of multi-level concurrency control
Abstraction in recovery management
A measure of transaction processing power
Implementation of Argus
Concepts for transaction recovery in nested transactions
Distributed programming in Argus
A model for concurrency in nested transactions systems
ARIES/NT: a recovery method based on write-ahead logging for nested transactions
Principles and realization strategies of multilevel transaction management
Camelot and Avalon
A framework for workload allocation in distributed transaction processing systems
System level concurrency control for distributed database systems
System R
Transactions and synchronization in a distributed operating system
Concurrency Control in Distributed Database Systems
The notions of consistency and predicate locks in a database system
Concurrency on high-traffic data elements
A Transaction Mechanism for Engineering Design Databases
Nested Transactions with Multiple Commit Points
Architectural Issues of Transaction Management in Multi-Layered Systems
Log-Based Recovery for Nested Transactions
Notes on Data Base Operating Systems
The Argus Language and System
A nested transaction mechanism for LOCUS
--CTR
Erhard Rahm, Parallel query processing in shared disk database systems, ACM SIGMOD Record, v.22 n.4, p.32-37, Dec. 1993
M. Patiño-Martínez , R. Jiménez-Peris , S. Arévalo, Implementing transactions using Ada exceptions: which features are missing?, ACM SIGAda Ada Letters, v.XXI n.3, September 2001
Hong-Ren Chen , Y. H. Chin, Scheduling Value-Based Nested Transactions in Distributed Real-Time Database Systems, Real-Time Systems, v.27 n.3, p.237-269, September 2004
Elisa Bertino , Barbara Catania , Elena Ferrari, A nested transaction model for multilevel secure database management systems, ACM Transactions on Information and System Security (TISSEC), v.4 n.4, p.321-370, November 2001
Kunal Verma , John A. Miller , Boanerges Aleman-Meza, Designing a high-performance database engine for the 'Db4XML' native XML database system, Journal of Systems and Software, v.69 n.1-2, p.87-104, 01 January 2004
Laurent Dayns , Grzegorz Czajkowski, Lightweight flexible isolation for language-based extensible systems, Proceedings of the 28th international conference on Very Large Data Bases, p.718-729, August 20-23, 2002, Hong Kong, China
Stefan Deßloch , Theo Härder , Nelson Mattos , Bernhard Mitschang , Joachim Thomas, Advanced data processing in KRISYS: modeling concepts, implementation techniques, and client/server issues, The VLDB Journal The International Journal on Very Large Data Bases, v.7 n.2, p.79-95, May 1998
C. Mohan, Repeating History Beyond ARIES, Proceedings of the 25th International Conference on Very Large Data Bases, p.1-17, September 07-10, 1999
Alexander Thomasian, Concurrency control: methods, performance, and analysis, ACM Computing Surveys (CSUR), v.30 n.1, p.70-119, March 1998
Klaus R. Dittrich , Hans Fritschi , Stella Gatziu , Andreas Geppert , Anca Vaduva, SAMOS in hindsight: experiences in building an active object-oriented DBMS, Information Systems, v.28 n.5, p.369-392, July
Norman W. Paton , Oscar Daz, Active database systems, ACM Computing Surveys (CSUR), v.31 n.1, p.63-103, March 1999 | locking;object hierarchies;concurrency control;nested transactions |
618430 | Animals with Anatomy. | Applying anatomical and physiological principles to model and animate animals achieves greater realism. Underlying components represent bones, muscles, and soft tissue; for speed and simplicity, we can model these from ellipsoids. Muscles stretch across joints, and their orientations, sizes, and shapes change during joint motion. A polygonal skin is automatically generated from the underlying structures. The skin mesh adjusts itself to changes in position under the influence of neighboring skin points and connections to the underlying anatomy. Much of the process is automated but under the control of user-defined parameters. Manipulation and animation of these models occur at comfortable interactive speeds on graphics workstations. | Introduction
Animals are beautiful, varied, and common, and we would like to see more, and more realistic, ones in
computer-generated images and animations. In general, computer graphics achieves greater realism by developing
methods that simulate the real world, rather than using ad hoc methods that just appear somewhat
realistic. To this end, we are developing modeling and animation approaches based on anatomical and physiological
principles.
Our method models any animals that have a jointed endoskeleton ("inside" skeleton) moved by muscles
and covered by a flexible skin. This includes, basically, all the higher vertebrates, and some of the inverte-
brates. While we admit to a partiality toward other animals, our approach can be used to model humans
as well. There is actually a remarkable similarity in the structure of most vertebrates, so a well-designed
model of one animal can be used as a basis for various individuals of the same species, or even very different
species. We can also design entirely new and fantastic animals.
Creating a truly realistic model of any animal is a daunting task. This research provides a fundamental
paradigm that can be extended gradually to include whatever level of physical realism is desirable. At present,
all underlying parts (bones, muscles, and soft tissue) are represented as ellipsoids, which are actually quite
a natural shape for many body parts and allow us to interact with our models at comfortable interactive
speeds. The polygonal skin is automatically created based on underlying components and moves with them
when joints move.
We feel our contributions so far are: (1) partially automated or interactive creation of underlying parts
from the basic structure; (2) sharable model descriptions for expediting creation of different individuals and
species; (3) an underlying structure of individual bones and individual muscles that extend across joints and
reshape as joints move; and (4) an automatically extracted skin that moves naturally during motion.
This paper presents an overview of our new method. Color images and animations illustrating the method
can be viewed on the Internet at http://www.cse.ucsc.edu/~wilhelms/fauna.html.
Background
Many research areas have contributed results useful for modeling and animating animals, and only a small
subset can be referenced here. Much of the work has concentrated on modeling humans, especially their
faces [BPW93, LTW93, TW90]. Some of the best animals have been created for commercial advertising and
movies, and details of the methods used are not published. Two excellent recent examples are the polar
bears of Rhythm and Hues [Car94] and the dinosaurs of Jurassic Park [SD93]. Both were initially digitized
from real physical models.
2.1 Model Construction
Chadwick, Haumann, and Parent presented a method for layered construction of flexible animated characters
[CHP89]. A polygonal skin model was associated either with a freeform deformation abstractly representing
muscle and fatty tissue or with the skeletal hierarchy itself. The deformation was affected by kinematics
based on joint angles, dynamics based on mass-spring model of control points, or interactive sculpting by
the user. Unlike our own model, they did not simulate individual muscles that cross joints, and the skin was
embedded in the deformations, not flexibly attached to underlying structures.
Mark Henne also used a layered approach [Hen90]. A bicubic surface skin was pushed outward by
localized "velocity" fields. These implicit fields represented bones, fat, and muscles, and were connected to
a hierarchical body structure. Each skin control point was affected by neighboring skin points and by an
anchor which was the point's original position in the rest state. Anchor points were displaced when joint
angles changed. Simulated muscles were only attached to a single segment in the hierarchy, and individual
muscles were not modeled.
Creating natural-looking skin and deformations across joints is a particularly difficult problem and this
is where many animal models fail [BZ91]. Gourret, Magnenat-Thalmann, and Thalmann used finite-element
theory to model the hand during grasping [GMTT89]. Their method simulated shape changes during motion,
including effects of contact between flesh and objects, using solid elements such as cubes, prisms, and
tetrahedra and a spring-like model. Magnenat-Thalmann and Thalmann also presented an approach using
joint-local deformations (JLD's) to create realistic body deformations at joints [MTT91]. JLD's are geometric
operators which cause deformations based on the angular values of the joint, and parameters such as flexion
axis of the joint and amount of inflation that should occur during motion. Here the main interest was visual
realism and speed; no attempt is made to accurately model the internal hand anatomy.
Chen and Zeltzer presented a biomechanically based muscle model using a finite element method to
realistically simulate a few individual muscles [CZ92]. The muscles were modeled using polyhedral meshes,
and a biomechanical contraction model was used to calculate non-linear forces at mesh points. The simulated
muscles showed good correspondence to actual measured muscle behavior. This research concentrated on
accurately simulating individual muscle behavior. Overlying skin was not modeled. The cost of the simulation
was not discussed, but may be prohibitive for purposes of whole body muscle simulation. At some point,
hopefully, computational resources will make it possible for animation and biomechanical simulation to
converge.
2.2 Implicit Surfaces and Isosurfaces
An implicit surface and an isosurface are really equivalent, each being a two-dimensional surface of
constant value within a three-dimensional scalar field, i.e., the set of points x where F(x) = c for some constant c. Those
in computer graphical modeling tend to use the term "implicit surface" and the three-dimensional field is
usually a simulated density field around geometric objects. Those in scientific visualization tend to use the
term "isosurface" and the field is usually provided as discrete scalar values on a grid; data values represent
samples from an underlying field function that is usually not given. The isosurface for a given threshold
value in the field can be extracted by a wide variety of approaches [NH93]. Blinn wrote a seminal paper on
implicit surface modeling which created a "blobby man" by extracting a surface from around an articulated
skeleton, but underlying anatomy wasn't modeled [Bli82].
Our method uses a "marching-cubes" approach [LC87], which is simple, fast, and sufficient for present
purposes. A cell of the grid is defined by eight corner points. An isosurface crosses the cell if some of the
data values at these corners are above the threshold and others are below. Linear interpolation along cell
edges provides points on the isosurface. These are connected together using a table to produce a polygonal
mesh surface.
Modeling Approach
An animal, for the purposes of this model, is a structure of individual bones, muscles, and generic other
tissue ("stuffing") covered by a flexible skin. Our sample model will be the zuni cat, which has 27 segments,
39 bones, 128 muscles, and 13 stuffings. The skin mesh has 5518 vertices. Figure 5 shows the bones of
the cat. Figure 7 shows the underlying components on the left and the polygonal skin mesh with texture
mapping on the right.
3.1 The Model Hierarchy and Geometry
Our animal models are represented, as is common, by a hierarchy or tree. The root segment is attached
via a six degree-of-freedom joint to the world, and other joints allow three rotational degrees of freedom. A
body segment is a part of the body with a rigid underlying skeleton (such as the upper arm, the lower leg,
the head, etc.) that is attached to other segments via joints. Each segment (except the root segment which
is free to move in the world) has one parent segment and zero or more child segments. A well-designed
hierarchy can represent most of the vertebrates with few changes, because most vertebrates have a similar
number of arms, legs, fingers, toes, etc. and differ in minor points like the number of vertebrae.
The segments, bones, muscles, stuffing, and the animal as a whole are each described in terms of local
coordinate systems. The geometric relation between these coordinate frames is known and used for adjusting
body parts during motion and drawing the model. Of course, all body parts must be converted to the world
coordinate system for drawing. As a convention, when body parts have a distinct longitudinal axis, this is
the Z-axis in the local system. Figure 1 shows a hierarchy and underlying components for a simple structure.
Each body segment is described in a local segment coordinate system. The origin of this coordinate
system is the proximal joint of the segment, connecting it to its parent segment. The coordinate system
of the root body segment is also the whole animal coordinate system. The skin is described in the animal
coordinate system, but, because it maintains connections to other body parts as well, it is also described in
local coordinate systems (see Section 3.5).
Bones, muscles, and stuffing each have their own local bone, muscle, or stuffing coordinate systems. These
components are all modeled as ellipsoids, and each ellipsoid has its own local ellipsoid coordinate system. A
geometrical transformation places ellipsoids in the body, muscle, or stuffing coordinate system of which they
are a part. Other geometrical transformations place bones, muscles, and stuffing in the segment coordinate
frame to which they are attached.
The relationship between body segments is stored both in a rest geometry and a state geometry. The
rest geometry specifies the rest origin and orientation of each segment's coordinate frame on its parent
segment. The state geometry represents rotations of segments relative to their parents and the position and
orientation of the whole animal in the world. These changes are represented by the state geometry. Bones
and stuffing just move with their segments during state changes. Muscles and skin, however, have more
complex responses (Sections 3.4 and 3.5).
3.2 Ellipsoids
The basic primitive is the ellipsoid, specified by the equation:
f(x, y, z) = x^2/a^2 + y^2/b^2 + z^2/c^2 (1)
The (x, y, z) axis lengths of the ellipsoid are (a, b, c). Ellipsoids can be succinctly stored, and solving the
equation for a particular 3D point (x, y, z) indicates if the point is in (f(x, y, z) < 1)
or outside (f(x, y, z) > 1) the ellipsoid.
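A minimal sketch of this containment test, assuming the point has already been transformed into the ellipsoid's local coordinate frame; the function names and the tolerance are illustrative.

def ellipsoid_value(x, y, z, a, b, c):
    # Equation 1: f(x, y, z) for an ellipsoid with axis lengths (a, b, c)
    return (x / a) ** 2 + (y / b) ** 2 + (z / c) ** 2

def classify(x, y, z, a, b, c, eps=1e-9):
    f = ellipsoid_value(x, y, z, a, b, c)
    if f < 1.0 - eps:
        return "inside"
    if f > 1.0 + eps:
        return "outside"
    return "on the surface"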
Ellipsoids used in modeling are centered at the origin of their own local coordinate frame. Matrices
representing the relationship between the local ellipsoid coordinate frame and the world coordinate frame
are stored with each ellipsoid for rapid display and for adjustment of skin points (see Section 3.5). Muscle
ellipsoids may change shape, so their volume (v) and the ratio (r) of their X and Y axis lengths are also
stored (see Section 3.4). (The (x, y, z) axis lengths of the ellipsoid are (a, b, c).) The longitudinal (Z) axis
length of a muscle ellipsoid changes during joint motion because the relative positions of the muscle end
points change. Because the muscle volume should remain relatively constant, the X and Y axis lengths
need to be scaled accordingly, maintaining the ratio X/Y from the rest position.
v = 4 π a b c / 3 (2)
3.3 Bones and Stuffing
A typical bone is modeled as a thin center ellipsoid with two bulbous spheres at each end, but the three
ellipsoids can be rescaled and moved relative to each other for other shapes (see Figure 5). Notice, for
example, the three ellipsoids that form a rib and the three flat ellipsoids that form the scapula (the shoulder
blade). Default bones lie along the longitudinal axes of segments.
"Stuffing" refers to ellipsoids used to simulate general soft tissue important for shaping the body, such
as in the abdomen and thorax, and is also used to add features such as ears, nose and eyes. Stuffing consists
of a single ellipsoid, and appears purple in the color images (see Figure 7). We avoid the term "soft tissue"
because it doesn't deform.
3.4 Muscles
The animal body is covered by model muscles that are simplified versions of the muscles found in a real
animal. Muscles are anchored to body parts at an origin (the proximal attachment, closest to the center
of the body) and an insertion (the distal attachment, farther from the body center). Origin and insertion
remain at fixed positions in their local segment frames, and muscles reshape themselves to lie between these
attachments when joints move. The muscle coordinate frame has its origin at the origin point of the muscle,
and its Z-axis extends toward the insertion point of the muscle. Three ellipsoids representing two tendons
and a muscle body between are lined up along this Z-axis (Figure 2.)
The muscles are created by the user interactively. The process is simplified by the use of standard defaults.
When the user requests a new muscle, she specifies only the segments with which the origin point and the
insertion point are associated. The origin and insertion points are assumed to lie along the longitudinal axis
of the segment, 10% along the distance of the bounding box of the segment in this direction. The tendons
and muscle body are created to just span the distance between origin and insertion. This distance represents
the rest length of the muscle.
Muscles for the whole body can also be requested, and, in that case, four muscles are created
around each joint. These muscles are similar to a muscle requested singly, but in this case the four muscles
are arranged symmetrically around the segments that they attach to by placing their origin and insertion
points slightly away from the longitudinal axes of the proximal and distal segments in the +X, \GammaX , +Y ,
and \GammaY directions. This simulates abductor, adductor, flexor, and extensor muscles.
The user can interactively move the origin and insertion, change the insertion segment, and reshape the
muscle as desired. Muscle creation and modification are done in the rest state, and define the relaxed, muscle
rest state.
The system also stores the muscle present state, which takes into account the present state geometry. The
origin and insertion points of the muscle in their local segment frames remain constant, but the relation of
these frames may have changed. The present state modifies the muscle so that it lines up between origin
and insertion points in their new positions, and reshapes the muscle ellipsoids so that volume and the X/Y
axis ratios are maintained.
First, the muscle origin and insertion points are transformed into the origin segment coordinate frame
and a vector between them is found. The length of this vector is the new muscle length (l new ), which may
be longer or shorter than the rest length (l rest ). The Z-axis rest lengths (c) of the ellipsoids making up the
muscle must be scaled to fit this new length, and their other (a, b) rest axis lengths adjusted to maintain a
constant ratio (r = a/b) and volume (v). These changes are figuratively represented in Figure 2. They are
accomplished as follows:
c_new = (l_new / l_rest) c (4)
b_new = sqrt(3v / (4 π c_new r)) (5)
a_new = b_new r (6)
This rescaling is applied independently to each of the three muscle ellipsoids. If the new length is shorter
than the rest length, this causes the muscle to bulge; and if larger, to become thinner.
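A small sketch of this rescaling for a single ellipsoid, following Equations 2 through 6 above; the function name and argument layout are assumptions.

import math

def rescale_muscle_ellipsoid(a, b, c, l_rest, l_new):
    # (a, b, c) are the rest axis lengths, with c along the muscle Z-axis
    v = 4.0 * math.pi * a * b * c / 3.0                          # Equation 2
    r = a / b                                                    # constant X/Y ratio
    c_new = (l_new / l_rest) * c                                 # Equation 4
    b_new = math.sqrt(3.0 * v / (4.0 * math.pi * c_new * r))     # Equation 5
    a_new = b_new * r                                            # Equation 6
    return a_new, b_new, c_new

A shorter muscle therefore gets larger a and b (it bulges), a longer one gets smaller a and b, while the volume v is preserved.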
Next, a transformation defining the relation of the new longitudinal muscle axis to the origin segment
coordinate frame must be found. This can be done using techniques described in standard graphics texts for
rotating a given vector into another arbitrary vector. First, find a local coordinate system in which the
muscle axis vector is the Z-axis: take the cross-product of the axis vector with a non-parallel vector to
obtain one perpendicular axis, and then take the cross-product of the axis vector with this new vector to
obtain the other. Then, normalize the axes and use them as rows in a rotation matrix.
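The following sketch shows one way to carry out this construction; the choice of helper vector and all names are assumptions, and it follows the standard-text recipe rather than the system's actual code.

import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def frame_from_axis(axis):
    # Returns three orthonormal vectors, used as the rows of a rotation
    # matrix, with the given muscle axis as the local Z-axis.
    z = normalize(axis)
    helper = (1.0, 0.0, 0.0) if abs(z[0]) < 0.9 else (0.0, 1.0, 0.0)
    x = normalize(cross(helper, z))     # first perpendicular axis
    y = cross(z, x)                     # second perpendicular axis
    return (x, y, z)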
Finally, muscle ellipsoids are placed along this longitudinal axis at appropriate distances from the muscle
origin point to line up the tendons with the muscle body between them.
These adjustments are quite fast and muscles can be animated and shown changing shape in close to
realtime. Figure 4 shows a structure created almost totally automatically (stuffing was added interactively)
from default parameters, showing the muscles (red), tendons and bones (white), stuffing (purple), and a skin
mesh covering them.
3.5 Skin
User-defined parameters control the automatic generation and movement of the skin. A polygon mesh
rest skin is generated (Section 3.6), and each skin vertex is associated with its nearest anchor ellipsoid
(Section 3.7). During motion, the positions of skin vertices are adjusted under the influence of neighboring
skin vertices and their attachment to the anchor ellipsoid (Section 3.8). Various skin characteristics can be
changed under user control (Section 3.9).
3.6 Skin Generation
The rest skin is generated in three steps: (1) sample the region around the body in its rest position to create
a scalar 3D data volume (voxelization); (2) filter the volume; and (3) extract a polygonal isosurface of a
designated threshold from it. Notice that the skin need only be generated once, in the body rest position.
The polygon vertices extracted are attached to underlying components and are automatically repositioned
when the body moves.
First, a volume of data points on a rectilinear grid of a user-specified resolution is created over the animal
model in its rest configuration. Each grid point is tested to see if it lies within any ellipsoid by converting it
to the coordinate frame of the ellipsoid and testing its location with the ellipsoid equation (Equation 1). If
the grid point is in any ellipsoid, it is given a positive scalar value; otherwise, it is given a value of zero.
To produce a smooth skin at a reasonable distance from the underlying parts, this volume is then filtered
some number of times (five is often good). A Gaussian filter with a default decay of 2 (which can be changed
by the user) is used.
We calculate the filter using the equation below. The one center point ((i, j, k)) is scaled by w_3, the six
1-adjacent points (e.g., (i+1, j, k)) by w_2, the twelve 2-adjacent points by w_1 (e.g., (i+1, j+1, k)), and the
eight 3-adjacent points (e.g., (i+1, j+1, k+1)) by w_0. (In other words, n-adjacent means one away from
the center point in each of n coordinate dimensions.)
w_n = decay^(n - 3), n = 0, 1, 2, 3
The filtered value finally used is the maximum of the filtered value from the algorithm and the original
grid value. This ensures that internal grid points never lose value and positive values spread outward from
the original points included in the model.
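A sketch of one filtering pass over such a grid; the weight formula and the normalization by the weight sum are assumptions (the weights follow the reconstruction above), while taking the maximum with the original value and the adjacency-based weighting follow the description.

def filter_pass(grid, decay=2.0):
    # grid is a nested list grid[i][j][k] of scalar values
    ni, nj, nk = len(grid), len(grid[0]), len(grid[0][0])
    w = [decay ** (n - 3) for n in range(4)]      # w[3] weights the center
    out = [[[0.0] * nk for _ in range(nj)] for _ in range(ni)]
    for i in range(ni):
        for j in range(nj):
            for k in range(nk):
                total, wsum = 0.0, 0.0
                for di in (-1, 0, 1):
                    for dj in (-1, 0, 1):
                        for dk in (-1, 0, 1):
                            ii, jj, kk = i + di, j + dj, k + dk
                            if 0 <= ii < ni and 0 <= jj < nj and 0 <= kk < nk:
                                n_adj = abs(di) + abs(dj) + abs(dk)
                                weight = w[3 - n_adj]
                                total += weight * grid[ii][jj][kk]
                                wsum += weight
                # never let an interior point lose value
                out[i][j][k] = max(grid[i][j][k], total / wsum)
    return out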
In the third step, an isosurface of a user-defined threshold is extracted from the filtered volume to produce
a polygonal skin model [LC87]. This is represented as a list of vertices representing points on the skin surface
together with outward-pointing normal vectors at those points; a list of polygons each specified as a list of
pointers into the vertex list; and a list of edges describing connectivity between vertices. The edges act like
springs and adjust the positions of vertices when the body moves. The edge length when extracted is the
rest length for these "springs."
Our approach is quite a bit simpler than some other voxelization and filtering approaches described in
the literature (e.g., [WK94]). However, unlike most voxelization approaches, we are not interested in an
exact representation of the objects sampled (the component ellipsoids in this case), but rather a sense of
where they are. The ellipsoids are not a perfect representation of a real body with the skin removed, but
only an approximation. The filtering step is essential to "blur" this approximation, making it possible to
create a smooth isosurface at some distance from the underlying components. The user has control over the
amount of filtering, and the process is fast enough to allow interactive experimentation.
3.7 Anchoring
Once the default skin has been found, it must be anchored to appropriate body parts, so that when the
body moves, the skin automatically moves as well. Each skin vertex is anchored to the ellipsoid to which it
is closed, at the near point on that ellipsoid. To find this anchor, skin points which are originally in world
space are converted to the frame of the ellipsoids with to which they might be anchored, and the closest
point on any of them found. Figure 3 shows important components in anchoring and moving skin.
The solution of the ellipsoid equation for a given point unfortunately doesn't give the distance to the
nearest point on the ellipsoid, so we use an iterative Newton-Raphson approach. The derivative of the
ellipsoid equation at the nearest point on the ellipsoid to the skin point represents a vector between the skin
point and its nearest point. We find a parametric line equation representing that vector such that when the
parameter t = 0 we are at the skin point, and taking small steps dt along the line brings us toward the
near point. The ellipsoid equation is also parameterized by t; we refer to it as g(t). Let (x_s, y_s, z_s) be the
skin point in the ellipsoid coordinate frame and (a, b, c) be the ellipsoid axis lengths. The parameter t is
initialized to zero and dt is initialized to a small fraction of the value of f(0). The following iteration occurs
until the absolute value of dt is acceptably small or ten iterations have occurred.
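One way to carry out such an iteration is sketched below; the parameterization of the candidate point as x(t) = x_s / (1 + t/a^2), and similarly for y and z, is an assumption about the form of the iteration rather than the paper's exact equations, and the names are illustrative.

def nearest_point_on_ellipsoid(xs, ys, zs, a, b, c, max_iter=10, tol=1e-8):
    # Newton iteration on g(t) = f(x(t), y(t), z(t)) - 1 = 0, where the
    # candidate point is x(t) = xs/(1 + t/a^2), etc.; assumes the skin
    # point is not at the ellipsoid center.
    t = 0.0
    for _ in range(max_iter):
        dx, dy, dz = 1.0 + t / a ** 2, 1.0 + t / b ** 2, 1.0 + t / c ** 2
        g = ((xs / dx) ** 2 / a ** 2 + (ys / dy) ** 2 / b ** 2
             + (zs / dz) ** 2 / c ** 2 - 1.0)
        dg = -2.0 * (xs ** 2 / (a ** 4 * dx ** 3)
                     + ys ** 2 / (b ** 4 * dy ** 3)
                     + zs ** 2 / (c ** 4 * dz ** 3))
        dt = -g / dg
        t += dt
        if abs(dt) < tol:
            break
    return xs / (1.0 + t / a ** 2), ys / (1.0 + t / b ** 2), zs / (1.0 + t / c ** 2)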
The final point (x, y, z) is the nearest point on the ellipsoid to the skin point and is called the anchor
point. The distance to the ellipsoid is found from these two points. The ellipsoid nearest the skin point
becomes the anchor ellipsoid for that point. The position of the skin point itself in its initial rest state is
the virtual anchor.
The virtual anchor and the skin point are stored both as world space positions and as parameterized
positions in the local ellipsoid frame. A parameterized local position is found by dividing the actual local
position by the ellipsoid axis lengths to yield values between -1 and 1. As the axis lengths for muscles may
change due to joint changes, points that are rescaled back to the local space correctly take into account these
changes.
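A small sketch of this parameterization and its use when axis lengths change; the values and names are illustrative.

def to_parametric(p, axes):
    # divide by the axis lengths; each component ends up in [-1, 1]
    return tuple(pi / ai for pi, ai in zip(p, axes))

def from_parametric(q, axes):
    return tuple(qi * ai for qi, ai in zip(q, axes))

q = to_parametric((1.0, 0.0, 2.0), (2.0, 1.0, 4.0))    # (0.5, 0.0, 0.5)
# if the muscle's Z axis length later shrinks from 4.0 to 3.0, the stored
# parametric position rescales with it automatically:
p_after = from_parametric(q, (2.0, 1.0, 3.0))          # (1.0, 0.0, 1.5)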
3.8 Skin Adjustment During Motion
Much of an animal's skin is rather loosely attached to underlying parts. Rigid attachment of the skin to body
segments, particularly near joints, is a major reason why many animal models do not appear realistic. Our
modeled skin moves under the influence of neighbor skin regions and variably stiff attachments to underlying
parts. Often these parts are muscles, which themselves move relative to the skeleton when the body moves.
After a joint movement has occurred, the initial positions of each skin point in world space are found by
transforming the last position of the skin point in its ellipsoid local coordinate frame to world space using
the new relationship of local space to world space. This produces a good approximation of a stable position
for the point.
Next, each skin point is iteratively adjusted in world space taking into account the influence of neighboring
skin points which are connected to it by edges and its virtual anchor. The number of iterations can be set
by the user, and it is possible to iterate continuously in the background when not actively interacting with
the program. In practice, however, movement of skin points due to iteration is not visible, even for large
motions, after about five iterations, and for small motions, after two or three.
We use the virtual anchor (the rest position of a skin point) rather than the anchor on the ellipsoid because
it gives better visual results. Henne also used the rest position as the anchor [Hen90]. If the ellipsoid anchor
is used, the point can rotate around the anchor and find a stable position very close to or even inside the
ellipsoid. While collision detection can take care of this, it is expensive.
The change in position for each skin point is the sum of changes in position caused by each of the edges
to which it is attached. Let P s be the 3D position in world space of the skin point being adjusted and Pn
be the position in world space of the neighbor point connected to it by an edge. Let l r be the rest length
for that particular edge; and let l p be the present length of that edge. Let k be the "spring constant" for
the edge, which controls how strongly it is drawn back to its rest position. Then, the change in position due
to this edge is dp. In the very unusual case where l_p = 0 when the edge is between skin points (two points
coincide), the displacement is set to be a small amount in a set direction.
dp = k (l_p - l_r) (P_n - P_s) / l_p
The new position for the point P_s is merely the old position nudged by the displacements caused
by its i edges, each of which has displacement dp i . This worldspace position is also converted back to a
parameterized local position for future use.
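A sketch of one such adjustment pass; the point and edge layout, and the handling of coincident points, are illustrative assumptions.

def relax_pass(points, edges, eps=1e-12):
    # points: {index: (x, y, z)} in world space
    # edges: list of (i, target, lr, k), where target is either another
    #        point index or a fixed virtual-anchor position
    moves = {i: [0.0, 0.0, 0.0] for i in points}
    for i, target, lr, k in edges:
        ps = points[i]
        pn = points[target] if target in points else target
        d = [pn[0] - ps[0], pn[1] - ps[1], pn[2] - ps[2]]
        lp = (d[0] ** 2 + d[1] ** 2 + d[2] ** 2) ** 0.5
        if lp < eps:                   # coincident points: small set nudge
            moves[i][0] += 1e-4
            continue
        s = k * (lp - lr) / lp         # dp = k (l_p - l_r)(P_n - P_s)/l_p
        for ax in range(3):
            moves[i][ax] += s * d[ax]
    for i, dp in moves.items():        # nudge each point by its summed dp_i
        points[i] = (points[i][0] + dp[0],
                     points[i][1] + dp[1],
                     points[i][2] + dp[2])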
Collisions between skin points and ellipsoids can be checked for, and points displaced to avoid them
when they occur, but they slow the skin adjustment. Because virtual anchors tend to keep points from
colliding with their own ellipsoids, and the underlying tissues are not normally displayed under an opaque
skin, small interpenetrations have little visual effect. We don't do ellipsoid-ellipsoid collision detection or
collision detection with external objects at this time.
3.9 Skin Modification
The skin automatically generated as explained above may not be quite what is desired. For example, one
might like it to be at a greater distance from and more loosely attached to the abdomen, but close and
tightly attached to the skull. Therefore, the user can interactively adjust various parameters to get the
proper regional skin properties. Adjustment may occur to skin over the entire body, over an active segment,
or to an interactively chosen set of skin points. Two of the most useful adjustment parameters are the rest
distances of skin points from their anchor ellipsoids and the "spring constant" (k) controlling the strength of
the pull between skin points or between skin points and their virtual anchors. The actual resting positions of
skin points can be interactively adjusted, but in most cases it is more effective and easier to alter relationships
between skin and underlying structure than to tweak points.
4 Results
In the figures, we demonstrate our method using a few "animals." A simple two-armed structure illustrates
what can be done using default parameters and automatic component generation (Figure 4). The "zuni cat"
is closer to a model that might be used for real animation (Figures 5-9). The toad was created from cat
description files in about an hour (Figure 11) and is still being developed. The final model shows a
human hand (Figure 12).
The models are defined by several ascii files describing the hierarchy structure, rest geometry, state
geometry, bones, muscles, stuffing, and skin parameters, as well as a binary file for the skin polygon mesh.
This makes it possible to share descriptions easily. For example, the main difference between the toad and
the cat is the rest geometry, the shape of the head, and the lack of the tail segments.
Images and animations were done on an SGI Reality Engine with a 150 MHz processor. Calculating new
skin positions is "interactive" in that it is fast enough to give a feel for the motion, but not really realtime.
A facility exists to save skin positions for realtime playback.
The zuni cat is our most interesting model so far (Figures 5-9). It consists of 27 segments, 39 bones,
123 muscles, and 13 stuffings. The sample volume was of resolution 83x83x83 and it produced a skin mesh
of 5518 vertices and 6787 polygons using five filter iterations. The skin mesh was used as automatically
generated, except that two points at the tips of the ears were displaced upwards to produce pointed ears,
and the points around the head were drawn closer to the skull ellipsoid by reducing their anchor lengths.
Also, points on the trunk of the body and upper limbs were given very loose anchor connections (small values
of k to virtual anchors) to simulate the extremely baggy nature of cat skin in those regions.
Generation of the volume, filtering, and skin extraction took about a minute of elapsed time for the cat.
Adjustment and animation of the skin can be done at 3.3 frames per second using 1 adjustment iteration,
frames per second using 5 adjustment iterations, or about 0.75 frames per second using 1 adjustment
iteration with collision detection and response. However, visual differences between images using these
different parameter settings are subtle.
The figures show the cat and its underlying components in various positions. Figure 6, in particular,
shows the bent left leg of the cat. At left are shown bones, muscles, stuffing and skin mesh, with blue vectors
showing connections from skin points to anchor points on ellipsoids, and red vectors showing displacements
of skin points from virtual anchor points. At right is shown the texture-mapped polygonal skin in this
position.
5 Discussion and Future Work
As a first pass on this new modeling approach, we feel the results were very satisfactory. The simplicity
of our model and the rather ad hoc nature of the muscles are the main problems at this point. We need a
more flexible animal model that includes most of the bones and joints of the typical vertebrate. A human
has 206 bones, though many are fused or have little motion (such as the skull, wrist, and foot). Our next
model will include most or all the vertebrae, including all the tail vertebrae of the cat. We also need more
realistic muscles. The origins and insertions of actual muscles are well documented, so the process of muscle
creation should be automated from this information. Though a more complex model would slow down the
program some, work on our present models suggests having two or three times more components would still
be interactively pleasant to work with. Adjustment of the skin is the major bottleneck, not that of the
underlying anatomy.
We would also like to use more complex primitives than ellipsoids. For bones, and possibly muscles and
stuffing, meshes can be extracted using isosurface programs from CT scan data or created using modeling
programs. Because of the remarkable similarity between many animals, we believe the work invested in
creating good polygonal models will allow us to convert these descriptions between different individuals and
species fairly easily.
Other methods of skin extraction should be pursued; for example, shrink wrapping a surface onto the
underlying components, or creating an isosurface at a given distance from underlying component using
tracking. The present method works satisfactorily for animals with relatively loose skin, such as the cat, but
less well where the skin is more tightly attached to the body, as in humans. The latter would more easily
accommodate different levels of detail. Decimation methods to reduce the number of skin points in large flat
regions would also expedite adjustment.
At present, the model is kinematic, and joint position changes cause the muscle changes, while, in reality,
it is muscle changes that move joints. It would be interesting to explore a physical simulation model based on
muscle contraction using this model. Muscle models could store information concerning comfortable lengths
and amount of force that they can apply. This could be used to implement joint limits and to help determine
natural and acceptable movements.
Our texture mapping program for coloring skin is quite rudimentary. We will explore methods to better
control mapping and to generate more realistic texture maps, including 3D textures for fur and hair.
Overall, we think this research provides a new and successful basic paradigm for animal modeling using
an approximation to real animal anatomy. We believe this work can be gradually extended in a number of
directions, such as those mentioned above, to create animal models of great realism.
Acknowledgments
List processing software by Yumi Tsuji and Allen Van Gelder was used in this software package. Allen Van
Gelder contributed many helpful suggestions concerning filtering, skin motion, and texture-mapping. Mark
Henne of Lucasfilm and Pauline Tso of Rhythm & Hues provided helpful pointers to related literature. Brad
Smith went beyond the call of duty to keep the machine running. Texture maps were taken from modeling
software provided as a donation from Alias Incorporated. This research was supported in part by an NSF
grant CCR-8958590.
--R
A generalization of algebraic surface drawing.
Simulating humans
Making Them Move.
Commercial spot cola bears.
Layered construction for deformable animated characters.
Pump it up: Computer animation based model of muscle using the finite element method.
Simulation of object and human skin deformations in a grasping task.
A constraint-based skin model for human figure animation
Marching cubes: A high resolution 3D surface construction algorithm.
Constructing physics-based facial models of individuals
Human body deformations using joint- dependent local operators and finite element theory
Vector quantization for Volume
The Making of Jurassic Park.
--TR
--CTR
Caroline Larboulette , Marie-Paule Cani , Bruno Arnaldi, Dynamic skinning: adding real-time dynamic effects to an existing character animation, Proceedings of the 21st spring conference on Computer graphics, May 12-14, 2005, Budmerice, Slovakia
Y. M. Tang , K. C. Hui, The effect of tendons on foot skin deformation, Computer-Aided Design, v.39 n.7, p.583-597, July, 2007
J. Peter C. Markush , Gary J. Grimes , Jonathan R. Merril, Investigations toward Using VRML for Distributed Medical Collaboration, Presence: Teleoperators and Virtual Environments, v.9 n.4, p.383-393, August 2000
Jane Wilhelms , Allen Van Gelder, Anatomically based modeling, Proceedings of the 24th annual conference on Computer graphics and interactive techniques, p.173-180, August 1997
Michael Pratscher , Patrick Coleman , Joe Laszlo , Karan Singh, Outside-in anatomy based character rigging, Proceedings of the 2005 ACM SIGGRAPH/Eurographics symposium on Computer animation, July 29-31, 2005, Los Angeles, California | computer animation;computer modeling;computer graphics;animal and skin modeling |
620236 | Implications of Classical Scheduling Results for Real-Time Systems. | Important classical scheduling theory results for real-time computing are identified. Implications of these results from the perspective of a real-time systems designer are discussed. Uni- processor and multiprocessor results are addressed as well as important issues such as future release times, precedence constraints, shared resources, task value, overloads, static versus dynamic scheduling, preemption versus non-preemption, multiprocessing anomalies, and metrics. Examples of what scheduling algorithms are used in actual applications are given. | Introduction
Every real-time systems designer should be familiar with a set of important classical scheduling
theory results, i.e., those results largely taken from the literature in complexity theory and
operations research. While knowledge of these results rarely provides a direct solution for
the designer, the implications of the results provide important insight in choosing a good
design and scheduling algorithm for the system, and in avoiding very poor or even erroneous
choices. The literature in scheduling theory is so vast, that we make no pretense at being
comprehensive. In this paper, a minimum set of results, together with their implications,
is presented. For example, the scheduling theory results presented include: Jackson's rule,
Smith's rule, McNaughton's theorem, Liu and Layland's rate monotonic rule, Mok's theorems,
and Richard's anomalies. Besides learning what these important results are, we want the
reader to be able to answer, at least, the following questions:
This work was done while the first author was on sabbatical from the Computer Science Dept. at the Univ.
of Massachusetts.
† This work has been supported, in part, by NSF under grants IRI 9208920 and CDA 8922572, by ONR under
grant N00014-92-J-1048, and by the IRI of Italy.
- what do we really know about earliest deadline scheduling,
- what is known about uni-processor real-time scheduling problems,
- what is known about multiprocessing real-time scheduling problems,
- what anomalous behavior can occur and can it be avoided,
- where is the boundary between polynomial and NP-hard scheduling problems,
- what task set characteristics cause NP-hardness,
- what type of bounds analysis is useful for real-time systems,
- what is the impact of overloads on the scheduling results,
- how does the metric used in the theory impact the usefulness of the result in a real-time computing system, and
- what different results exist for static and dynamic scheduling?
There are so many dimensions to the scheduling problem that there is no accepted tax-
onomy. In this paper we divide the scheduling theory between uni-processor (Section 3) and multiprocessor (Section 4) results. In the uni-processor section we begin with independent
tasks, then consider precedence constraints, shared resources, and overload. In the multiprocessor
case, since most results address precedence and shared resources together, we divide the
work between static and dynamic algorithms.
2 Preliminaries
Before presenting the major scheduling results a few basic concepts must be clearly understood.
Here we discuss the differences between static, dynamic, off-line and on-line scheduling as well
as various metrics and their implications. NP-complete and NP-hard, terms used throughout
the paper, are defined.
2.1 Static versus Dynamic Scheduling
Most classical scheduling theory deals with static scheduling. Static scheduling refers to the
fact that the scheduling algorithm has complete knowledge regarding the task set and its
constraints such as deadlines, computation times, precedence constraints, and future release
times. This set of assumptions is realistic for many real-time systems. For example, real-time
control of a simple laboratory experiment or a simple process control application might have a
fixed set of sensors and actuators, and a well defined environment and processing requirements.
In these types of real-time systems, the static scheduling algorithm operates on this set of tasks
and produces a single schedule that is fixed for all time. Sometimes there is confusion regarding
future release times. If all future release times are known when the algorithm is developing
the schedule then it is still a static algorithm.
In contrast, a dynamic scheduling algorithm (in the context of this paper) has complete
knowledge of the currently active set of tasks, but new arrivals may occur in the future, not
known to the algorithm at the time it is scheduling the current set. The schedule therefore
changes over time. Dynamic scheduling is required for real-time systems such as teams of
robots cleaning up a chemical spill or in military command and control applications. As we
will see in this paper very few theoretical results are known about real-time dynamic scheduling
algorithms.
Off-line scheduling is often equated to static scheduling, but this is wrong. In building
any real-time system, off-line scheduling (analysis) should always be done regardless of whether
the final runtime algorithm is static or dynamic. In many real-time systems, the designers
can identify the maximum set of tasks with their worst case assumptions and apply a static
scheduling algorithm to produce a static schedule. This schedule is then fixed and used on-line
with well understood properties such as, given that all the assumptions remain true, all tasks
will meet the deadlines. In other cases, the off-line analysis might produce a static set of
priorities to use at run time. The schedule itself is not fixed, but the priorities that drive the
schedule are fixed. This is common in the rate monotonic approach (to be discussed later).
If the real-time system is operating in a more dynamic environment, then it is not feasible
to meet the assumptions of static scheduling (i.e., everything is known a priori). In this case an
algorithm is chosen and analyzed off-line for the expected dynamic environmental conditions.
Usually, less precise statements about the overall performance can be made. On-line, this same
dynamic algorithm executes.
Generally, a scheduling algorithm (possibly with some modifications) can be applied to
static scheduling or dynamic scheduling and used off-line or on-line. The important difference
is what is known about the performance of the algorithm in each of these cases. As an example,
consider earliest deadline first (EDF) scheduling. When applied to static scheduling we know
that it is optimal in many situations (to be enumerated below), but when applied to dynamic
scheduling on multiprocessors it is not optimal, in fact, it is known that no algorithm can be
optimal.
2.2 Metrics
Classical scheduling theory typically uses metrics such as minimizing the sum of completion
times, minimizing the weighted sum of completion times, minimizing schedule length, minimizing
the number of processors required, or minimizing the maximum lateness. In most cases,
deadlines are not even considered in these results. When deadlines are considered, they are
usually added as constraints, where, for example, one creates a minimum schedule length, subject
to the constraint that all tasks must meet their respective deadline. If one or more tasks
miss their deadlines, then there is no feasible solution. Which of these classical metrics (where
deadlines are not included as constraints) are of most interest to real-time systems designers?
The sum of completion times is generally not of interest because there is no direct assessment
of timing properties (deadlines or periods). However, the weighted sum is very important when
tasks have different values that they impart to the system upon completion. Using value is
often overlooked in many real-time systems where the focus is simply on deadlines and not a
combination of value and deadline. Minimizing schedule length has secondary importance in
possibly helping minimize the resources required for a system, but does not directly address
the fact that individual tasks have deadlines. The same is true for minimizing the number of
processors required. Minimizing the maximum lateness metric can be useful at design time
where resources can be continually added until the maximum lateness is less than or equal to
zero. In this case no tasks miss their deadlines. On the other hand, the metric is not always
useful because minimizing the maximum lateness doesn't necessarily prevent one, many, or
even ALL tasks from missing their deadlines. See Figure 1.
Figure 1: Minimizing Maximum Lateness Example. The first schedule minimizes the maximum lateness, but all tasks miss their deadline; the second schedule has a greater maximum lateness, but four tasks out of five complete before their deadlines.
Rather than these above-mentioned metrics, much real-time computing work minimizes
the number of tasks that miss deadlines or looks for optimal algorithms defined in the following
manner: An optimal scheduling algorithm is one which may fail to meet a deadline only if no
other scheduling algorithm can. In this paper, all of the above metrics will be mentioned,
either because they are directly applicable to real-time systems, or to show where even though
a nice theoretical result exists, there is limited applicability to real-time systems.
Related to metrics is the complexity of the various scheduling problems themselves. As
we shall see, many scheduling results are NP-complete or NP-hard. NP is the class of all
decision problems that can be solved in polynomial time by a nondeterministic machine. A
recognition problem R is NP-complete if R ∈ NP and all other problems in NP are polynomial
transformable to R. A recognition or optimization problem R is NP-hard if all problems in
NP are polynomial transformable to R, but we can't show that R ∈ NP.
3 Uni-processor Systems
In general we follow the notation of [18], in which the problem definition has the form α | β | γ, where α indicates the machine environment (α = 1 in this section of the paper, indicating a uni-processor machine), β indicates the job characteristics (preemptable, nonpreemptable, independent, precedence constrained, deadline, etc.) and γ indicates the optimality criterion (maximum lateness, total tardiness, etc.). Note that the optimality criterion depends on the
metric chosen, which strongly relies on the system objectives and the task model.
3.1 Preemption vs NonPreemption: Jackson's Rule
Suppose there are n independent jobs (the words job, process and task will be used interchangeably
just as they are throughout the scheduling literature), with each job j having a processing time p_j and a due date d_j. For any given sequence of scheduling, each job will have a defined completion time C_j too. Let us define the lateness of a job j as L_j = C_j - d_j. Suppose we want to minimize the maximum lateness assuming the jobs are executed nonpreemptively, that is we want to solve the problem

1 | nopmtn | L_max,

where "1" stands for single machine, "nopmtn" stands for nonpreemption and the objective function to minimize is L_max = max_j L_j. A very simple solution to this problem, the earliest due date (EDD) algorithm, is as follows:
Theorem 3.1 (Jackson's Rule [16]). Any sequence is optimal that puts the jobs in order of
nondecreasing due dates. 2
The proof of the theorem can be given by a simple interchange argument [18], but
presenting that argument here is beyond the scope of this paper. At first, this result may not
seem too useful to a real-time systems designer because we often require that no task miss its
deadline. But, since this is a static scheduling algorithm and if the maximum lateness is greater
than zero, then the designer knows that he must increase the computing power of his system
to meet the requirements of missing no deadlines. Further, as we shall see, EDD is optimal in
many other situations also. Note that since all tasks are known and ready to execute at time
zero, preemption would not improve the situation.
If our real-time system requires a more sophisticated programming model, one of the
first extensions to consider is the introduction of release times. We say that a job j has release
time r_j if its execution cannot start before time r_j. Unfortunately, the problem above extended with release times, that is

1 | nopmtn, r_j | L_max,

is NP-hard [19].
In this case we obtain a great benefit if we permit jobs to be preempted at any instant.
In fact, the problem

1 | pmtn, r_j | L_max

is easy, that is, an algorithm for its solution exists and has polynomial complexity. Again the
algorithm is based on the Jackson's rule, slightly modified in order to take the release times
into account:
Theorem 3.2 Any sequence that at any instant schedules the job with the earliest due date
among all the eligible jobs (i.e., those whose release time is less than or equal to the current
time) is optimal with respect to minimizing maximum lateness. 2
The result again can be easily proven by an interchange argument. The proof obtained
in this way is very similar to the "time slice swapping" technique used in [9] and [24] to show
the optimality of the earliest deadline first (EDF from now on) and the least laxity first (LLF)
algorithms, respectively.
One implication of these results is that when practical considerations do not prevent
us from using it, preemption usually gives greater benefit than nonpreemption in terms of
scheduling complexity. Unfortunately, when we deal with shared resources in real-time systems
we have to address critical sections and one technique is to create nonpreemptable code; this
again creates an NP-hard problem.
Another implication of these theorems is that the minimization of maximum lateness
implies optimality even when all deadlines must be met, because the maximum lateness can
be required to be less than or equal to zero. In fact, the very well-known paper by Liu
and Layland [21] focussed on this aspect of EDF scheduling for a set of independent periodic
processes, showing that a full processor utilization is always achievable and giving a very simple
necessary and sufficient condition for the schedulability of the tasks:

C_1/T_1 + C_2/T_2 + ... + C_n/T_n <= 1,

where C_j is the worst case computation time and T_j the period of task j.
The EDF algorithm has also been shown to be optimal under various stochastic condi-
tions. All of these results imply that EDF works well under many different situations. Recently,
variations of EDF are being used in multimedia applications, robotics, and real-time databases.
Note, however, that in none of the above classical results for EDF are precedence constraints,
shared resources, or overloads taken into account. We address these aspects in subsequent
sections.
Another very important and common area for real-time scheduling is the scheduling of
periodic tasks. Here the rate monotonic algorithm is often used. This algorithm assigns to
each task a static priority inversely proportional to its period, i.e., tasks with the shortest
periods get the highest priority. For a fixed set of independent periodic tasks with deadlines
the same as the periods, we know:
Theorem 3.3 (Liu and Layland [21]) A set of n independent periodic jobs can be scheduled
by the rate monotonic policy if

C_1/T_1 + C_2/T_2 + ... + C_n/T_n <= n (2^(1/n) - 1),

where T_i and C_i are the period and worst case execution time of task i, respectively.
For large n we obtain the utilization bound of 69% meaning that as long as the CPU
utilization is less than 69% all tasks will make their deadlines. This is often referred to as the
schedulability test. If deadlines of periodic tasks can be less than the period the above rule is
no longer optimal. Rather we must use a deadline monotonic policy [20] where the periodic
process with the shortest deadline is assigned the highest priority. This scheme is optimal in
the sense that if any static priority scheme can schedule this set of periodic processes then the
deadline monotonic algorithm can. Note that deadline monotonic is not the same as pure EDF
scheduling because tasks may have different periods and the assigned priorities are fixed. The
rate monotonic algorithm has been extended in many ways the most important of which deals
with shared resources (see Section 3.3), and schedulability tests have been formulated for the
deadline monotonic algorithm [1].
The rate monotonic scheduling algorithm has been chosen for the Space Station Freedom
Project, the FAA Advanced Automation System (AAS), and has influenced the specification
of the IEEE Futurebus+. The DoD's 1991 Software Technology Strategy says that the Rate
Monotonic Scheduling is a "major payoff " and "system designers can use this theory to predict
whether task deadlines will be met long before the costly implementation phase of a project
begins." In 1992 the Acting Deputy Administrator of NASA stated, "Through the development
of Rate Monotonic Scheduling, we now have a system that will allow (Space Station) Freedom's
computers to budget their time, to choose between a variety of tasks, and decide not only which
one to do first but how much time to spend in the process." Rate monotonic is also useful for
simple applications, such as the real-time control of a simple experiment that might contain
sensors whose data must be processed periodically, or a chemical plant that has a large
number of periodic tasks and a few alarms. These alarms can be treated as periodic tasks whose period is set equal to their minimum interarrival time, and then static scheduling, using the
rate monotonic algorithm, can be applied.
3.2 Precedence Constraints
In many systems of practical interest we do not expect tasks to be independent, but rather
cooperate to achieve the goal of the application. Cooperation among tasks is achieved by
various types of communication semantics. Depending on the chosen semantics, application
tasks experience precedence constraints or blocking, or both, while accessing shared resources.
A precedence relation among tasks makes the scheduling problem more complex. Since not all
tasks are ready to be scheduled at the same time, the simple EDF rule is no longer optimal.
In the following, precedence constraints will be expressed with the notation i ! j, or
with their associated digraph G(V,E) where V is the set of tasks and E the set of edges, an
edge connecting tasks i,j if task i precedes task j.
The simple scheduling problem of a set of tasks with no-preemption, identical arrival
time and a precedence relation among them, described as

1 | nopmtn, prec | L_max,

was solved by Lawler [17] with an EDF-like algorithm that works backwards, starting from
the leaf tasks in the precedence graph.
The algorithm works as follows: the scheduling list is built starting from the bottom in
reverse topological order, and adding to the list on each step, the task having the minimum
value for the chosen metric and whose successors have been scheduled. Lawler's algorithm is
optimal, and runs in O(n^2).
Lawler's algorithm gives a solution for tasks having identical start time. Unfortunately,
this is not sufficient for all systems of practical interest where periodic tasks or dynamically
arising tasks do not have a common start time. The problem of non-preemptive scheduling
of jobs with different release times and general precedence constraints is not a simple one, in
fact, the problem
and the corresponding
were proven to be NP-hard by Lenstra [19].
The NP hardness of the general precedence constrained problem is a major obstacle for
non-preemptive scheduling, in spite of the fact that optimal results or polynomial algorithms
exist for similar problems, where some of the general assumptions are constrained. For example,
a polynomial algorithm was found for unit computation time tasks and arbitrary precedence
constraints.
The most interesting results related to precedence constraints are those obtained working
on sub-classes of the general precedence relation. Polynomial algorithms have been found for
precedence relations in the form of intrees, that is when every task has no more than one successor, or outtrees, when tasks have no more than one predecessor, or when the precedence
relation is a series-parallel graph. It is easy to show how the intree and outtree cases are
included in the more general class of series-parallel graphs. The series-parallel graph is the
most interesting subset of the general precedence relation for which optimality results have
been found. A series-parallel graph is defined recursively this way:
- the single-node graph G({j}, {}) is a series-parallel graph;
- if G_1 = (V_1, E_1) and G_2 = (V_2, E_2) are series-parallel graphs, then their series composition (obtained by connecting every node of G_1 having no successor to every node of G_2 having no predecessor) and their parallel composition (their disjoint union) are series-parallel graphs;
or alternatively, a graph is a series-parallel graph only if its transitive closure does not contain the Z graph. A Z graph is a graph that contains as a subgraph 4 nodes {i, j, k, l} with only the edges i -> j, k -> j and k -> l.
Figure 2 graphically depicts intrees and outtrees (series-parallel graphs) and a Z graph (not a series-parallel graph). Efficient solutions exist for series-parallel graphs, but they do not
exist for a Z graph. Unfortunately, Z graphs arise in practice. Details on these results follow.
Theorem 3.4 (Lawler [18]) Given any set of tasks related by a series-parallel precedence graph
an optimal solution exists for every cost function that admits a string interchange relation. 2
Formally, a cost function f has a string interchange relation if, given two strings of jobs α and β and a quasi total order ⪯ among them, the following relation holds:

α ⪯ β implies f(αβ) <= f(βα).
Intuitively, this formula means that a cost function admits a string interchange relation
when a lower value is obtained when individual tasks of lower value are scheduled first. The
theorem says that if we are interested in minimizing or maximizing a cost function that admits a string interchange relation (e.g., minimizing lateness), it is possible to find an optimal schedule in polynomial time for every set of tasks related by a series-parallel precedence graph.
Figure 2: Precedence Relations (intree, outtree, Z-graph).
The algorithm which solves this problem works with a decomposition tree, that is the
tree that shows how sub-graphs are connected by the series or parallel relation to form the
global precedence graph. The decomposition tree can be found in O(|V| + |A|), with |V| the number of nodes and |A| the number of edges. The algorithm starts from the tasks having
no successor in the decomposition tree, and, for every node, calculates a string sequence by
combining the strings of jobs coming from the sons. The final node, representing the original
graph, is reached when the whole optimal scheduling list has been computed.
A common feature of this algorithm, as is also found in other similar algorithms from the
literature dealing with intrees or outtrees, is that they work on the precedence graph (or on
the related decomposition tree), starting from jobs with no successors or no predecessors, and
build a sequence of sub-optimal schedules. This technique can be useful in various scheduling
heuristics.
To what extent can Lawler's optimal algorithm for series-parallel graphs, and even other
optimal algorithms which work only on intrees or outtrees, help us in real-time systems?
Unfortunately, some high level communication semantics found in programming languages,
give rise to precedence constrained jobs with Z graphs, meaning that these optimal algorithms
don't apply and heuristics need to be used. One example of how a Z graph arises is a simple
pair of tasks linked by an asynchronous send with synchronous receive. See Figure 3. Note
that remote procedure calls (RPC) do not give rise to Z graphs.
If preemption is allowed, classical results go further in providing solutions for general
precedence constraints. Preemption reduces the complexity of the scheduling problem of precedence
related tasks with different arrival times. The problem is, in fact, solvable in O(n^2) by
Baker's algorithm [2]
Figure 3: Program Example That Gives Rise to a Z Graph (process A issues an asynchronous send of a message; process B performs a synchronous receive).
Baker's procedure is recursive and because of its computational complexity it seems
suited for off-line scheduling. Due to the difficulty of describing the algorithm and space
limitations, we do not describe the algorithm here. However, an important feature of the
algorithm is that the number of preemptions is limited to n-1 where n is the number of jobs,
thus making the preemption overhead bounded. In all practical situations the scheduling and
preemption overheads must be bounded and taken into account. We rarely see this issue
addressed in classical scheduling theory.
In the above solutions, a scheduling list is explicitly created. Another technique is to
encode the precedence relations into the parameters used by the scheduling algorithm, for
example, into deadlines and release times.
Blazewicz [4] shows how to adjust deadlines so that precedence constraints are encoded
in the deadlines a priori, and at run time you simply use EDF scheduling. His result comes
from the fact that the revised deadline of a task depends on its own deadline and on its successors' deadlines, while its revised start time depends on its own start time and on its predecessors' start times. The theorem
assumes no shared resources among tasks.
Theorem 3.5 (Blazewicz [4]) EDF is optimal for tasks that have a general precedence relation
and different release dates if deadlines and start times are revised according to the following
formulas:

d*_j = min( d_j, min { d*_k - p_k : j -> k } ),

starting from the tasks having no successor and processing on every step those tasks whose successors have been processed, and

r*_j = max( r_j, max { r*_i + p_i : i -> j } ),

starting from the tasks having no predecessor. 2
This result allows us to transform a set of dependent tasks into a set of independent ones
obtaining an equivalent problem under the EDF policy. The optimality of the technique of the
revised deadlines and arrival dates has been used in both on-line [7] and off-line algorithms
[24].
Unfortunately, the optimality of this technique is again lost if tasks with precedence
constraints also share resources in an exclusive way. Moreover, if arbitrary protocols are
used to access shared resources, the revision of tasks' deadlines and release times is no longer
sufficient to guarantee the correct ordering of jobs without additional constraints. The general
problem of scheduling a set of tasks with precedence constraints and arbitrary resource conflicts
is NP-hard.
Some off-line algorithms face the NP hardness of the general problem trying to find
acceptable solutions by means of heuristics, branch and bound techniques and so on. An
example is given by the algorithm by Xu and Parnas [32] where on every step a sub-optimal
schedule is obtained. There are even examples of on-line systems driven by heuristics, such as the
Spring system [26] where the scheduling list is built on-line.
3.3 Shared Resources
Shared resources are commonly used in multitasking applications. While in general purpose
systems this is a well-known problem solved, for example, with mutual exclusion primitives,
in real-time systems a straightforward application of this solution does not hold. Defining a
run-time scheduler as totally on-line if it has no knowledge about the future arrival times of
the tasks, the following has been proven:
Theorem 3.6 (Mok [24]). When there are mutual exclusion constraints, it is impossible to
find a totally on-line optimal run-time scheduler. 2
The proof is simply given by an adversary argument. Furthermore, the same author
showed a much more negative result:
Theorem 3.7 (Mok [24]). The problem of deciding whether it is possible to schedule a set of
periodic processes which use semaphores only to enforce mutual exclusion is NP-hard. 2
A transformation of the 3-partition problem to this scheduling problem is shown to prove
the theorem.
In Mok's opinion "the reason for the NP-hardness of the above scheduling problem lies
in the possibility that there are mutually exclusive scheduling blocks which have different
computation times." A confirmation of this point of view is that the problem of minimizing
the maximum lateness of n independent unit-time jobs with arbitrary release times, that is,

1 | nopmtn, r_j, p_j = 1 | L_max,

is easy [18]. Moreover, if we add precedence constraints and we want to minimize the maximum completion time (makespan), that is, we want to solve

1 | nopmtn, prec, r_j, p_j = 1 | C_max,

the problem is still easy [11]. The algorithm that solves it makes use of forbidden regions,
intervals of time during which no task can start if the schedule is to be feasible. The idea is
that because of the nonpreemption, scheduling a task at a certain point in time could force
some other late task to miss its deadline.
At this point several choices are possible. One of them, followed by Mok, is to enforce the
use of mutually exclusive scheduling blocks having the same computation time, and another,
followed for example by Sha et al. [27] and Baker [2], is to efficiently find a suboptimal
solution with a clever allocation policy, guaranteeing at the same time a minimum level of
performance.
The former solution is called Kernelized Monitor. The key idea is to assign the processor
in time quantums of length q such that

q >= max_i s_i,

where s_i is the length of the i-th critical section. In other words the grain of the system is
made coarser. Furthermore, the ready times and the deadlines of the tasks can be previously
modified according to some partial order on the tasks. Adjusting the EDF scheduler with the
technique of the forbidden regions mentioned above, the following theorem can be proven:
Theorem 3.8 (Mok [24]). If a feasible schedule exists for an instance of the process model
with precedence constraints and critical sections, then the kernelized monitor scheduler can be
used to produce a feasible schedule. 2
In [27] Sha et al. introduce the Priority Ceiling Protocol (PCP), an allocation policy for
shared resources which works with a Rate Monotonic scheduler. Subsequently, Chen and Lin [5] extended the use of the protocol to an EDF scheduler.
The main goal of this, as other similar protocols, is to bound the usually uncontrolled
priority inversion, a situation in which a higher priority job is blocked by lower priority jobs
for an indefinite period of time (recall that a block can occur if a job tries to enter a critical
section already locked by some other job). Bounding priority inversion makes it possible to evaluate the worst case blocking times that jobs may experience, so that these times can be accounted for in the schedulability formulas. In other words, the worst case loss of performance can be evaluated.
The key ideas behind the PCP are to prevent multiple priority inversions by means of early
blocking of tasks that could cause priority inversion, and to minimize as much as possible the
length of the same priority inversion allowing a temporary rise of the priority of the blocking
task. This is done in the following way: define the ceiling of a critical section as the priority
of the highest priority task that currently locks or could lock the critical section, and allow
the locking of a critical section only if the priority of the requesting task is higher than the
ceiling of all critical sections currently locked. In case of blocking, the task that holds the lock
inherits the priority of the requesting task until it leaves the critical section.
The following properties have been shown:
- A job can be blocked at most once before it enters its first critical section.
- The PCP prevents the occurrence of deadlocks.
Of course, the former property is used to evaluate the worst case blocking times of the
jobs.
In [2] Baker describes a similar protocol, the Stack Resource Policy (SRP), that handles
a more general situation in which multiunit resources, both static and dynamic priority
schemes, and sharing of runtime stacks are all allowed. The protocol relies on the following
two conditions:
- To prevent deadlocks, a job should not be permitted to start until the resources currently available are sufficient to meet its maximum requirements.
- To prevent multiple priority inversion, a job should not be permitted to start until the resources currently available are sufficient to meet the maximum requirement of any single job that might preempt it.
The key idea behind this protocol is that when a job needs a resource not available, it is
blocked at the time it attempts to preempt, rather than later, when it actually may need the
shared resource. The main advantages of this earlier blocking are to save unnecessary context
switches and the possibility of a simple and efficient implementation of the SRP by means of
a stack.
In summary, dealing with shared resources in a real-time system is of utmost importance.
The classical results given in this section provide a good means for handling resources in a uni-
processor. Many researchers feel that these techniques do not work well in multiprocessors nor
in distributed systems. For such systems shared resources are typically addressed by on-line
planning algorithms [26, 28, 33], or by static schedules developed with off-line heuristics. Both
of these alternative approaches avoid blocking over shared resources by scheduling competing
tasks at different points in time.
3.4 Overload and Value
EDF and LLF algorithms have been shown to be optimal with respect to different metrics.
However, in overload conditions, these algorithms perform very poorly. Experiments carried
out by Locke [22] and others have shown that both EDF and LLF rapidly degrade their
performance during overload intervals. This is due to the fact that such algorithms give the
highest priority to those processes that are close to missing their deadlines.
A typical phenomenon that may happen with EDF when the system is overloaded is the
"domino effect," since the first task that misses its deadline may cause all subsequent tasks
to miss their deadlines. In such a situation, EDF does not provide any type of guarantee on
which tasks will meet their timing constraints. This is a very undesirable behavior in practical
systems, since in real-world applications intermittent overloads may occur due to exceptional
situations, such as modifications in the environment, arrival of a burst of tasks, or cascades of
system failures. As a real world example, this situation could cause a flexible manufacturing
application to produce no completed products by their deadlines.
In order to gain control over the tardy tasks in overload conditions, a value is usually
associated with each task, reflecting the importance of that task within the set. When dealing
with task sets with values, tasks can be scheduled by Smith's rule.
Theorem 3.9 (Smith's rule [29]) An optimal schedule for

1 | nopmtn | sum_j w_j C_j

is given by any sequence that puts jobs in order of nondecreasing ratios ρ_j = p_j / w_j.
Smith's rule resembles the common shortest processing time first (SPT) rule and is
equivalent to it when all tasks have equal weights. However, it is not sufficient to solve the
problem of scheduling with general precedence constraints: the precedence-constrained versions of the problem turn out to be NP-complete [19], and the same is true even for simpler unweighted variants. Interesting solutions have been found for particular kinds of precedence relations; in fact, optimal polynomial algorithms have been found when the precedence relation is an intree, an outtree or, more generally, a series-parallel graph.
Unfortunately, in real-time systems the precedence constraints imposed on tasks are often
more general. A heuristic was proposed in the Spring project, where deadline and cost driven
algorithms are combined together with rules to dynamically revise values and deadlines in
accordance with the precedence relations [6].
A number of heuristic algorithms have also been proposed to deal with overloads [30]
[13] which improve the performance of EDF.
Baruah, et al. [3] have shown that there exists an upper bound on the performance of
any on-line (preemptive) algorithm working in overload conditions. The "goodness" of an on-line
algorithm is measured with respect to a clairvoyant scheduler (one that knows the future),
by means of the competitive factor, which is the ratio r of the cumulative value achieved by
the on-line algorithm to the cumulative value achieved by the clairvoyant schedule. The value
associated with each task is equal to the task's execution time if the task request is successfully
scheduled to completion; a value of zero is given to tasks that do not terminate within their
deadline. According to this metric, they proved the following theorem:
Theorem 3.10 (Baruah, et. al. [3]) There does not exist an on-line scheduling algorithm
with a competitive factor greater than 0.25.
What the theorem says is that no on-line scheduling algorithm can guarantee a cumulative
value greater than 1/4th the value obtainable by a clairvoyant scheduler. These bounds
are true for any load, but can be refined for a given load. For example, if the load is less than
1 then the bound is 1; as soon as the load surpasses 1 the bound drops immediately to 0.385; for loads greater than 1 and up to 2 the bound gradually drops from 0.385 to 0.25; and for all loads greater than 2 the bound is 0.25.
It is worth pointing out that the above bound is achieved under very restrictive assump-
tions, such as all tasks in the set have zero laxity, the overload can have an arbitrary (but finite)
duration, task's execution time can be arbitrarily small, and task value is equal to computation
time. Since in most real world applications task characteristics are much less restrictive, the 1/4th bound has mainly theoretical validity, and more work is needed to derive other bounds
based on more knowledge of the task set.
3.5 Summary of Uni-processor Results
Many basic algorithms and theoretical results have been developed for scheduling on uni-
processors. Many of these are based on earliest deadline scheduling or rate monotonic schedul-
ing. Extensions of these results to handle precedence and resource sharing have occurred.
Because of this work, designers of real-time systems have a wealth of information concerning
uni-processor scheduling. What is still required are more results on scheduling in overload and
for fault tolerance (although fault tolerance usually requires multiple processors as well). It
is also necessary to develop a more integrated and comprehensive scheduling approach that
addresses periodic and aperiodic tasks, preemptive and non-preemptive tasks in the same sys-
tem, tasks with values, and combined CPU and I/O scheduling, to name a few issues. As an
example, the operational flight program of the A-7E aircraft has 75 periodic and 172 aperiodic
processes with significant synchronization requirements. Extensions to rate monotonic that
integrate periodic and aperiodic tasks could be used for such an application.
4 Multi-processor Real-Time Scheduling
More and more real-time systems are relying on multiprocessors. Unfortunately, less is known
about how to schedule multiprocessor based real-time systems than for uni-processors. This is
partly due to the fact that complexity results show that almost all real-time multiprocessing
scheduling is NP-hard, and partly due to the minimal actual experience that exists with such
systems, so even the number of existing heuristics is relatively low. In spite of the negative
implications that complexity analysis provides, it is important to understand these complexity
results because
- understanding the boundary between polynomial and NP-hard problems can provide insights into developing useful heuristics that can be used as a design tool or as an on-line scheduling algorithm,
- understanding the algorithms that achieve some of the polynomial results can again provide a basis upon which to base such heuristics,
- fundamental limitations of on-line algorithms must be understood to better create robust systems and to avoid operating under misconceptions, and
- serious scheduling anomalies can be avoided.
In this section we present multiprocessing scheduling results for deterministic (static)
scheduling both with and without preemption, for dynamic on-line scheduling with and without
preemption, identify various anomalies, and briefly discuss the similarity of this problem to
bin packing. Important implications of the theory are stressed throughout the section and a
summary of the global picture of multiprocessor real-time scheduling is given.
4.1 Deterministic (Static) Scheduling
4.1.1 Non-preemptive Multiprocessing Results
Let our model of multiprocessing be that there are a set of P processors, T tasks, and R
resources. The processors are identical. Each task has a worst case execution time of τ(i), is
non-preemptive, and tasks may be related by a partial order indicating that, e.g., task T(i)
must complete before task T(j). It is important to note that in most of the scheduling theory
results, tasks are considered to have constant execution time. For most computer applications
tasks never have constant execution time so we must understand the implication of this fact.
For example, this fact gives rise to one of the interesting multiprocessing anomalies of real-time
scheduling (see section 4.3). For each resource R(k) there is a number which indicates how much
of it exists. Tasks can then require a portion of that resource. This directly models a resource
like main memory. It can also model a mutually exclusive resource by requiring the task to
access 100% of the resource. The complexity results from deterministic scheduling theory for
multiprocessing where tasks are non-preemptive, have a partial order among themselves, have
resource constraints (even a single resource constraint), and have a single deadline show that
almost all the problems are NP-complete. To delineate the boundary between polynomial and
NP-hard problems and to present basic results that every real-time designer should know, we
list the following theorems without proof and compare them in Table 1. The metric used in
the following theorems is the amount of computation time required for determining a schedule
which satisfies the partial order and resource constraints, and completes all required processing
before a given fixed deadline.
Theorem 4.1 (Coffman and Graham [8]). The multiprocessor scheduling problem with 2
processors, no resources, arbitrary partial order relations, and every task has unit computation
time is polynomial. 2
Theorem 4.2 (Garey and Johnson [10]). The multiprocessor scheduling problem with 2 pro-
cessors, no resources, independent tasks, and arbitrary computation times is NP-complete. 2
Theorem 4.3 (Garey and Johnson [10]). The multiprocessor scheduling problem with 2 pro-
cessors, no resources, arbitrary partial order, and task computation times are either 1 or 2
units of time is NP-complete. 2
Proc.  Res.  Ordering   Comp. T.      Complexity
2      0     Arbitrary  Unit          Polynomial
2      0     Arbitrary  1 or 2 Units  NP-Comp
2      1     Forest     Unit          NP-Comp
N      0     Forest     Unit          Polynomial
N      0     Arbitrary  Unit          NP-Comp
Table 1: Summary of Basic Multiprocessor Scheduling Theorems
Theorem 4.4 (Garey and Johnson [10]). The multiprocessor scheduling problem with 2 pro-
cessors, 1 resource, a forest partial order, and each computation time of every task equal to 1
is NP-complete. 2
Theorem 4.5 (Garey and Johnson [10]). The multiprocessor scheduling problem with 3 or
more processors, one resource, all independent tasks, and each tasks computation time equal
to 1 is NP-complete. 2
Theorem 4.6 (Hu [15]). The multiprocessor scheduling problem with n processors, no re-
sources, a forest partial order, and each task having a unit computation time is polynomial. 2

Theorem 4.7 (Ullman [31]). The multiprocessing scheduling problem with n processors, no resources, arbitrary partial order, and each task having a unit computation time is NP-complete. 2

From these theorems we can see that for non-preemptive multiprocessing scheduling almost all problems are NP-complete, implying that heuristics must be used for such problems.
Basically, we see that non-uniform task computation time and resource requirements cause
NP-completeness immediately. An implication of these results is that designs which use only
local resources (such as object based systems and functional language based systems) and
schedule based on a unit time slot have significant advantages as far as scheduling complexity
is concerned. Of course, few if any real-time systems have unit tasks and any attempt to
carve up a process into unit times creates difficult maintenance problems and possibly wasted
processing cycles when tasks consume less than the allocated unit of time. Note that the
above results refer to a single deadline for all tasks. If each task has a deadline the problem is
exacerbated.
4.1.2 Preemptive Multiprocessing Real-Time Scheduling
It is generally true that if the tasks to be scheduled are preemptable, then the scheduling
problem is easier, but in certain situations there is no advantage to preemption. The following
classical results pertain to multiprocessing scheduling where tasks are preemptable.
Theorem 4.8 (McNaughton [23]). For any instance of the multiprocessing scheduling problem
with P identical machines, preemption allowed, and minimizing the weighted sum of completion
times, there exists a schedule with no preemption for which the value of the sum of computation
times is as small as for any schedule with a finite number of preemptions. 2
So here we see an example, for a given metric, that there may be no advantage to
preemption. However, to find such a schedule with or without preemption is NP-hard. Note
that if the metric is the sum of completion times, then the shortest processing time first greedy
approach solves the problem, so it is not NP-hard. Here again, there is no advantage to preemption.
This result can have an important implication when creating a static schedule; we certainly
prefer to minimize preemption for practical reasons at run time, so knowing that there is no
advantage to preemption, a designer would not create a static schedule with any preemptions.
Theorem 4.9 (Lawler [18]). The multiprocessing problem of scheduling P processors, with
task preemption allowed and where we try to minimize the number of late tasks is NP-hard. 2
This theorem indicates that one of the most common forms of real-time multiprocessing
scheduling, i.e.,

P | pmtn | sum_j U_j,

where U_j indicates whether task j is late, requires heuristics.
4.2 Dynamic Multiprocessor Scheduling
There are so few real-time classical scheduling results for dynamic multiprocessing scheduling
that we treat preemptive and non-preemptive cases together.
First, consider that under certain conditions in a uni-processor, dynamic earliest deadline
scheduling is optimal. Is this algorithm optimal in a multiprocessor? The answer is no.
Theorem 4.10 (Mok [24]). Earliest deadline scheduling is not optimal in the multiprocessor
case. 2
To illustrate why this is true consider the following example. We have 3 tasks to execute
on 2 processors. The task characteristics are given in the form task-number(computation time, deadline). Scheduling by earliest deadline would execute T1 on P1 and T2 on P2, and then T3 misses its deadline. However, if we schedule T3 first, on P1, and then T1 and T2 on P2, all tasks make their deadlines. An optimal algorithm does exist for the static
version of this problem (all tasks exist at the same time) if one considers both deadlines and
computation time [14], but this algorithm is too complicated to present here.
Now, if dynamic earliest deadline scheduling for multiprocessors is not optimal, the next
question is whether any dynamic algorithm is optimal, in general. Again, the answer is no.
Theorem 4.11 (Mok [24]). For two or more processors, no deadline scheduling algorithm can
be optimal without complete a priori knowledge of 1) deadlines, 2) computation times, and
3) start times of the tasks. 2
This implies that any of the classical scheduling theory algorithms which requires knowledge
of start times can not be optimal if used on-line. This also points out that we cannot hope
to develop an optimal on-line algorithm for the general case. But, optimal algorithms may
exist for a given set of conditions. One important example of this situation is assuming that
all worst case situations exist simultaneously. If this scenario is schedulable, then it will also
be schedulable at run time even if the arrival times are different because those later arrivals
can't make conditions any worse. When such a worst case analysis approach is not possible
for a given system, usually because such sufficient conditions cannot be developed or because
ensuring such conditions is too costly, more probabilistic approaches are needed. A number of
good heuristics exist for dynamic multiprocessor scheduling and we are beginning to see much
needed stochastic analysis of these conditions. It is especially valuable to be able to create algorithms
that operate with levels of guarantee. For example, even though the system operates
stochastically and non-optimally, it might be able to provide a minimum level of guaranteed
performance.
As mentioned, various heuristics exist for real-time multiprocessor scheduling with resource
constraints [26]. However, in general, these heuristics use a non-preemptive model. The
advantages of a non-preemptive model are few context switches, higher understandability and
easier testing than for the preemptive model, and avoidance of blocking is possible. The main
disadvantage of the non-preemptive model is the (usually) less efficient utilization of the pro-
cessor. Heuristics also exist for a preemptive model [33]. The advantages of a preemptive model
are high utilizations and low latency at reacting to newly invoked work. The disadvantages are
many context switches, difficulty in understanding the run time execution and its testing, and
blocking is common. All these heuristics, whether for the preemptive or non-preemptive cases,
are fairly expensive in terms of absolute on-line computation time compared to very simple
algorithms such as EDF, so this sometimes requires additional hardware support in terms of
a scheduling chip.
As mentioned earlier overload and performance bounds analysis are important issues.
Now assume we have a situation with sporadic tasks, preemption permitted, and if the task
meets its deadline then a value equal to the execution time is obtained, else no value is obtained.
Let the system operate in both normal and overload conditions. Let there be 2 processors.
Theorem 4.12 (Baruah, et. al. [3]). No on-line scheduling algorithm can guarantee a cumulative
value greater than one-half for the dual processor case. 2
As for the bounds results for the uni-processor case (presented in Section 3.4), the implications
of this theorem are very pessimistic. As before, some of the pessimism arises because
of the assumptions made concerning the lack of knowledge of the task set. In reality, we do
have significant knowledge (such as we know the arrival of new instances of periodic tasks,
or because of flow control we may know that the maximum arrival rate is capped, or know
the minimum laxity of any task in the system is greater than some value). If we can exploit
this knowledge, then the bounds may not be so pessimistic. We require more algorithms that
directly address the performance of a multiprocessing system in overload conditions.
4.3 Multiprocessing Anomalies
Designers must be aware of several important anomalies, called Richard's anomalies, that can
occur in multiprocessing scheduling so that they can be avoided. Assume that a set of tasks are
scheduled optimally on a multiprocessor with some priority order, a fixed number of processors,
fixed execution times, and precedence constraints.
Theorem 4.13 (Graham [12]). For the stated problem, changing the priority list, increasing
the number of processors, reducing execution times, or weakening the precedence constraints
can increase the schedule length. 2
An implication of this result means that if tasks have deadlines, then the accompanying
increase in schedule length due to the anomaly can cause a previously valid schedule to become
invalid, i.e., tasks can now miss deadlines. It is initially counter intuitive to think that adding
resources such as an extra processor, or relaxing constraints such as less precedence among
tasks, or less execution time requirements can make things worse. But, this is the insidious
nature of timing constraints and multiprocessing scheduling. An example can best illustrate
why this theorem is true. Consider an optimal schedule where we now reduce the time required
for the first task T1 on the first processor. This means that the second task T2 on that processor
can begin earlier. However, doing this may now cause some task on another processor to block
over a shared resource and miss its deadline, where had not executed earlier then no blocking
would have occurred and all tasks would have made their deadlines (because it was originally
an optimal schedule). See Figure 4.
Figure 4: One Example of Richard's Anomalies (Task 2 and Task 4 share the same resource in exclusive mode; tasks are statically allocated, with Task 1 and Task 2 on processor 1, and Task 3, Task 4 and Task 5 on processor 2).
It is especially important to note that for most on-line scheduling algorithms we must
deal with the problem of tasks completing before their worst case times. A simple solution
that avoids the anomaly is to have tasks that complete early simply idle, but this can often be
very inefficient. However, algorithms such as [28] strive to reclaim this idle time, but carefully
address the anomalies so that they will not occur.
4.4 Similarity to Bin Packing
Another tremendously active area of scheduling research is in bin packing algorithms. Each
bin (processor) has a maximum capacity and boxes (jobs or tasks) placed in the bins require
some percentage of the capacity. The goal is either, given a fixed number of bins, pack them
with jobs so as to minimize the maximum length of any bin, or rather fill the bins to capacity
minimizing the number of bins required. The bins are the computers of a multiprocessor which
provide a computing capacity up to the deadline of the set of jobs. Jobs require some amount
of processing time. In real-time scheduling it is usually assumed that memory requirements
are implicitly met. The common algorithms are best fit (BF), first fit (FF), first fit decreasing (FFD) and best fit decreasing (BFD). The latter two algorithms arrange the list of jobs into
a nonincreasing list with respect to capacity requirements, and then apply first fit or best fit,
respectively. Theoretical bounds exist to describe, e.g., the minimum number of bins required.
The worst case bounds for FF and BF for large task sets are (17/10)L where L is the optimal (minimum) number of bins [12]. For FFD the bound is (11/9)L and it is known that
the bound of BFD is less than or equal to the FFD bound [12]. This work is of limited value
for real-time systems since we have only a single deadline and other issues such as precedence
constraints and other real considerations are not taken into account. However, some useful
implications are
- we can know about the worst case and avoid it by design,
- we can obtain an estimate on the number of processors required for our application, and
- since average behavior is also important and since we are doing this analysis off-line, if good packing is not achieved then we can permute the packing using average case information, put constraints on job sizes, etc. Bin packing results should be extended and incorporated into real-time design tools.
4.5 Summary of Multiprocessor Results
Most multiprocessor scheduling problems are NP-hard, but for deterministic scheduling this is not
a major problem because either the specific problem is not NP-complete and we can use a
polynomial algorithm and develop an optimal schedule, or we can use off-line heuristic search
techniques based on what classical theory implies. These off-line techniques usually only have to
find feasible schedules not optimal ones. Many heuristics perform well in the average case and
only deteriorate to exponential complexity in the worst (rare) case. Good design tools would
allow users to provide feedback and redesign the task set to avoid the rare case. So the static,
multiprocessor, scheduling problem is largely solved in the sense that we know how to proceed.
We must point out, however, good tools with implemented heuristics are still necessary and
many extensions that treat more sophisticated sets of task and system characteristics are still
possible. On-line multiprocessing scheduling must rely on heuristics and would be substantially
helped by special scheduling chips. Any such heuristics must avoid Richard's anomalies [28].
Better results for operation in overloads, better bounds which account for typical a priori
knowledge found in real-time systems, and algorithms which can guarantee various levels of
performance are required. Dynamic multiprocessing scheduling is in its infancy.
5 Conclusion
Classical scheduling theory provides a basic set of results of use to real-time systems designers.
Many results are known for uni-processors and very few for multi-processors. Complexity,
fundamental limits, and performance bounds for important scheduling problems are known.
Anomalies that must be avoided have been identified. It is still necessary for real-time designers
to take these basic facts and apply them to their problem - a difficult engineering problem in
many cases. Many new results are needed that deal more directly with metrics of interest to
real-time applications and with more realistic task set characteristics than is typical for much
of the theory presented here.
Many issues are outside the scope of this paper including distributed scheduling, integration
of cpu scheduling with communication scheduling, with I/O scheduling, groups of tasks
with a single deadline, placement constraints and the impact of this placement on the run time
scheduling, fault tolerance needs, other kinds of timing requirements besides simple deadlines
and periods, integration of critical and non-critical tasks, and the interaction of scheduling
algorithms with the system design and implementation including run time overhead. Most of
these areas are wide open areas for research.
--R
"Hard Real-Time Scheduling: The Deadline Monotonic Approach,"
"Stack-Based Scheduling of Real-time Processes,"
"On the Competitiveness of On-Line Real-Time Task Scheduling,"
"Scheduling Dependent Tasks with Different Arrival Times to Meet Dead- lines,"
"Dynamic Priority Ceilings: A Concurrency Control Protocol for Real-Time Systems,"
"Dynamic Scheduling of Groups of Tasks with Precedence Constraints in Distributed Hard Real-Time Systems, "
"Dynamic Scheduling of Real-Time Tasks Under Precedence Constraints,"
"Optimal Scheduling for Two-Processor Systems,"
"Control Robotics: the Procedural Control of Physical Processes,"
"Complexity Results for Multiprocessor Scheduling Under Resource Constraints,"
"Scheduling Unit-Time Tasks with Arbitrary Release Times and Deadlines,"
Bounds on the Performance of Scheduling Algorithms
"Earliest Deadline Scheduling for Real-Time Database Systems,"
"Some Simple Scheduling Algorithms,"
"Parallel Scheduling and Assembly Line Problems,"
"Scheduling a Production Line to Minimize Maximum Tardiness,"
"Optimal Sequencing of a Single Machine Subject to Precedence Con- straints,"
"Recent Results in the Theory of Machine Scheduling,"
"Optimization and Approximation in Deterministic Sequencing and Scheduling: A Survey,"
"On the Complexity of Fixed Priority Scheduling of Periodic, Real-Time Tasks,"
"Scheduling Algorithms for Multiprogramming in a Hard-Real- Time Environment,"
"Best-effort Decision Making for Real-Time Scheduling,"
"Scheduling With Deadlines and Loss Functions,"
"Fundamental Design Problems of Distributed Systems for the Hard-Real- Time Environment,"
"An n Job, One Machine Sequencing Algorithm for Minimizing the Number of Late Jobs,"
"Efficient Scheduling Algorithms For Real-Time Multiprocessor Systems,"
"Priority Inheritance Protocols: An Approach to Real-Time Synchronization,"
"Resource Reclaiming in Multiprocessor Real-Time Systems,"
"Various Optimizers for Single Stage Production,"
"Transient Overloads in Fault-Tolerant Real-Time Systems,"
"Polynomial Complete Scheduling Problems,"
"Scheduling Processes with Release Times, Deadlines, Precedence, and Exclusion Relations,"
"Preemptive Scheduling Under Time and Resource Constraints,"
--TR
--CTR
Kaiyu Chen , Sharad Malik , David I. August, Retargetable static timing analysis for embedded software, Proceedings of the 14th international symposium on Systems synthesis, September 30-October 03, 2001, Montral, P.Q., Canada
Anup K. Bhattacharjee , K. Ravindranath , A. Pal , R. Mall, DDSCHED: a distributed dynamic real-time scheduling algorithm, Progress in computer research, Nova Science Publishers, Inc., Commack, NY, 2001
David Bartholomew Stewart , Richard Mortier, Virtual private machines: user-centric performance, Proceedings of the 11th workshop on ACM SIGOPS European workshop: beyond the PC, September 19-22, 2004, Leuven, Belgium
Laura E. Jackson , George N. Rouskas, Deterministic Preemtive Scheduling of Real-Time Tasks, Computer, v.35 n.5, p.72-79, May 2002
Enrico Vicario, Static Analysis and Dynamic Steering of Time-Dependent Systems, IEEE Transactions on Software Engineering, v.27 n.8, p.728-748, August 2001
Karl J. Gramp, A comparison of different tasking architectures used in mobile satellite communication ground station software, Proceedings of the conference on TRI-Ada '96: disciplined software development with Ada, p.23-28, December 03-07, 1996, Philadelphia, Pennsylvania, United States
Geoff Coulson, A Configurable Multimedia Middleware Platform, IEEE MultiMedia, v.6 n.1, p.62-76, January 1999
Dirk Ziegenbein , Jan Uerpmann , Rolf Ernst, Dynamic response time optimization for SDF graphs, Proceedings of the 2000 IEEE/ACM international conference on Computer-aided design, November 05-09, 2000, San Jose, California
Darko Kirovski , Miodrag Potkonjak, System-level synthesis of low-power hard real-time systems, Proceedings of the 34th annual conference on Design automation, p.697-702, June 09-13, 1997, Anaheim, California, United States
Chunho Lee , Miodrag Potkonjak , Wayne Wolf, System-Level Synthesis of Application Specific Systems using A* Search and Generalized Force-Directed Heuristics, Proceedings of the 9th international symposium on System synthesis, p.2, November 06-08, 1996
Louchka Popova-Zeugmann , Matthias Werner, Extreme Runtimes of Schedules Modelled by Time Petri Nets, Fundamenta Informaticae, v.67 n.1-3, p.163-174, January 2005
Babak Hamidzadeh , Yacine Atif, Dynamic Scheduling of Real-Time Tasks, by Assignment, IEEE Concurrency, v.6 n.4, p.14-25, October 1998
Megerian , Milenko Drinic , Miodrag Potkonjak, Watermarking integer linear programming solutions, Proceedings of the 39th conference on Design automation, June 10-14, 2002, New Orleans, Louisiana, USA
Hsung-Pin Chang , Ray-I Chang , Wei-Kuan Shih , Ruei-Chuan Chang, GSR: A global seek-optimizing real-time disk-scheduling algorithm, Journal of Systems and Software, v.80 n.2, p.198-215, February, 2007
Sanjoy K. Baruah , Jayant R. Haritsa, Scheduling for Overload in Real-Time Systems, IEEE Transactions on Computers, v.46 n.9, p.1034-1039, September 1997
Babak Hamidzadeh , Yacine Atif , Krithi Ramamritham, To Schedule or to Execute: Decision Support and PerformanceImplications, Real-Time Systems, v.16 n.2-3, p.281-313, May 1999
Sorin Manolache , Petru Eles , Zebo Peng, Schedulability analysis of multiprocessor real-time applications with stochastic task execution times, Proceedings of the 2002 IEEE/ACM international conference on Computer-aided design, p.699-706, November 10-14, 2002, San Jose, California
Yanbing Li , Wayne H. Wolf, Hardware/software co-synthesis with memory hierarchies, Readings in hardware/software co-design, Kluwer Academic Publishers, Norwell, MA, 2001
Jayanta K. Dey , James Kurose , Don Towsley, On-Line Scheduling Policies for a Class of IRIS (Increasing Reward with Increasing Service) Real-Time Tasks, IEEE Transactions on Computers, v.45 n.7, p.802-813, July 1996
D. G. Waddington , D. Hutchison, Resource partitioning in general purpose operating systems: experimental results in Windows NT, ACM SIGOPS Operating Systems Review, v.33 n.4, p.52-74, Oct. 1999
Wan Yeon Lee , Sung Je Hong , Jong Kim, On-line scheduling of scalable real-time tasks on multiprocessor systems, Journal of Parallel and Distributed Computing, v.63 n.12, p.1315-1324, December
Zhang , Yiping Fan , Miodrag Potkonjak , Jason Cong, Gradual Relaxation Techniques with Applications to Behavioral Synthesis, Proceedings of the IEEE/ACM international conference on Computer-aided design, p.529, November 09-13,
Yung-Chia Lin , Yi-Ping You , Chung-Wen Huang , Jenq Kuen Lee , Wei-Kuan Shih , Ting-Ting Hwang, Energy-aware scheduling and simulation methodologies for parallel security processors with multiple voltage domains, The Journal of Supercomputing, v.42 n.2, p.201-223, November 2007
Marco Di Natale , John A. Stankovic, Scheduling Distributed Real-Time Tasks with Minimum Jitter, IEEE Transactions on Computers, v.49 n.4, p.303-316, April 2000
Victor C. S. Lee , Kam-Yiu Lam , Ben Kao, Priority Scheduling of Transactions in Distributed Real-TimeDatabases, Real-Time Systems, v.16 n.1, p.31-62, Jan. 1999
Martin Trngren, Fundamentals of Implementing Real-Time Control Applicationsin Distributed Computer Systems, Real-Time Systems, v.14 n.3, p.219-250, May 1, 1998
P. D. V. Van Der Stok , A. H. T. Janssen-Raemaekers, Real-Time Atomic Multicast Algorithms Implemented on a Shared Memory Multiprocessor, Real-Time Systems, v.24 n.1, p.55-91, January
Ray-I Chang , Wei-Kuan Shih , Ruei-Chuan Chang, Real-Time Disk Scheduling for Multimedia Applications withDeadline-Modification-Scan Scheme, Real-Time Systems, v.19 n.2, p.149-168, Sept. 2000
Miguel Felder , Mauro Pezz, A formal design notation for real-time systems, ACM Transactions on Software Engineering and Methodology (TOSEM), v.11 n.2, p.149-190, April 2002
Peter A. Buhr , Ashif S. Harji , Philipp E. Lim , Jiongxiong Chen, Object-oriented real-time concurrency, ACM SIGPLAN Notices, v.35 n.10, p.29-46, Oct. 2000
Bharadwaj Veeravalli , Xiaolin Li , Chi Chung Ko, On the Influence of Start-Up Costs in Scheduling Divisible Loads on Bus Networks, IEEE Transactions on Parallel and Distributed Systems, v.11 n.12, p.1288-1305, December 2000
Antonio Pessoa Magalhes , Joo Gabriel Silva, Stabilizing Pre-Run-Time Schedules With the Help of GraceTime, Real-Time Systems, v.17 n.1, p.65-86, July 1999
Jia Xu, On Inspection and Verification of Software with Timing Requirements, IEEE Transactions on Software Engineering, v.29 n.8, p.705-720, August
Hadad , Sarit Kraus , Yakov Gal , Raz Lin, Temporal Reasoning for a Collaborative Planning Agent in a Dynamic Environment, Annals of Mathematics and Artificial Intelligence, v.37 n.4, p.331-379, April
Miodrag Potkonjak , Wayne Wolf, A methodology and algorithms for the design of hard real-time multitasking ASICs, ACM Transactions on Design Automation of Electronic Systems (TODAES), v.4 n.4, p.430-459, Oct. 1999 | scheduling theory;multiprocessor scheduling;uniprocessor scheduling;real-time |
623900 | Low-Latency Communication Over ATM Networks Using Active Messages. | Recent developments in communication architectures for parallel machines have reduced the communication overheads and latencies by over an order of magnitude. This paper examines whether these techniques can carry over to clusters of workstations connected by an ATM network even though clusters use standard operating system software, are equipped with network interfaces optimized for stream communication, do not allow direct protected user-level access to the network, and use networks without reliable transmission or flow control. In a first part, this paper describes the differences in communication characteristics between clusters of workstations built from standard hardware and software components and state-of-the-art multiprocessors. A second part evaluates a prototype implementation of the low-latency Active Messages communication model on a Sun workstation cluster interconnected by an ATM network. Measurements show application-level round-trip latencies of about 50 microseconds for small messages which is roughly comparable to the Active Messages implementation on the Thinking Machines CM-5 multiprocessor. | Introduction
The shift from slow broadcast-based local area networks
to high bandwidth switched network architectures is
making the use of clusters of workstations 1 as platforms
for parallel processing more and more attractive. While
a number of software packages [5,6] already support
1. The term cluster is used here to refer to collections of workstation-class
machines interconnected by a low-latency
high-bandwidth network.
parallel processing on today's workstations and net-
works, the communication performance is over two
orders of magnitude inferior to state-of-the-art multiprocessors.
As a result, only embarrassingly parallel applications
(i.e., parallel applications that essentially never
communicate) can make use of such environments. Networking
technologies such as ATM[1] offer the opportunity
to close the gap: for example, ATM cells are
roughly the same size as messages on multiprocessors, it
takes only a few microseconds to send or receive a cell,
ATM switches can be configured to provide bisection
bandwidths comparable to parallel machine networks,
and routing latencies are on the order of microseconds. 3
However, to date this communication potential has not
been available at the application level.
From a purely technical point of view, the gap between
clusters of workstations and multiprocessors is certainly
closing and the distinction between the two types of systems
is becoming blurred. Differences remain: in partic-
ular, the design and construction of multiprocessors
allows better integration of all the components because
2. This paper focuses exclusively on scalable multiprocessor
architectures and specifically excludes bus-based shared-memory
multiprocessors.
3. Current ATM switches have latencies about an order of
magnitude higher than comparable multiprocessor net-
works, however, this difference does not seem to be inherent
in ATM networks, at least not for local area switches.
they can be designed to fit together. In addition, the
sharing of physical components such as power supplies,
cooling and cabinets has the potential to reduce cost and
to allow denser packaging. While the debate over the
significance of these technological differences is still
open, it is becoming clear that the two approaches will
yield qualitatively similar hardware systems. Indeed, it
is possible to take a cluster of workstations and load system
software making it look almost identical to a multi-
processor. This means that a continuous spectrum of
platforms spanning the entire range from workstations
on an Ethernet to state-of-the-art multiprocessors can
become available, and that any distinction between multiprocessors
and clusters will be more and more arbitrary
from a technical point of view.
From a pragmatic point of view, however, significant
differences are likely to remain. The most important
attraction in using a cluster of workstations instead of a
multiprocessor lies in the off-the-shelf availability of all
its major hardware and software components. This
means that all the components are readily available, they
are familiar, and their cost is lower because of economies
of scale leveraged across the entire workstation
user community. Thus, even if from a technical point of
view there is a continuous spectrum between clusters
and multiprocessors, the use of off-the-shelf components
in clusters will maintain differences.
In fact, the use of standard components in clusters raises
the question whether these can be reasonably used for
parallel processing. Recent advances in multiprocessor
communication performance are principally due to a
tighter integration of programming models, compilers,
operating system functions, and hardware primitives. It
is not clear whether these advances can be carried over
to clusters or whether the use of standard components is
squarely at odds with achieving the level of integration
required to enable modern parallel programming mod-
els. Specifically, new communication architectures such
as distributed shared memory, explicit remote memory
access, and Active Messages reduced the costs from
hundreds to thousands of microseconds to just a few
dozen precisely through the integration of all system
components. These new communication architectures
are designed such that network interfaces can implement
common primitives directly in hardware, they
allow the operating system to be moved out of the critical
communication path without compromising protec-
tion, and they are well suited for high-level language
implementation.
This paper examines whether the techniques developed
to improve communication performance in multiproces-
sors, in particular, Active Messages, can be carried over
to clusters of workstations with standard networks and
mostly standard system software. This paper assumes
the current state of the art technology in which clusters
using ATM networks differ from multiprocessors in
three major aspects:
. clusters use standard operating system software
which implies less coordination among individual
nodes, in particular with respect to process scheduling
and address translation,
. ATM networks do not provide the reliable delivery
and flow control that are taken for granted in multi-processor
networks, and
. network interfaces for workstations optimize
stream communication (e.g., TCP/IP) and are less
well integrated into the overall architecture (e.g.,
connect to the I/O bus instead of the memory bus).
In comparing communication on clusters and multiprocessors
this paper makes two major contributions:
. first, it analyzes, in Section 2, the implications that
the differences between clusters and multiprocessors
have on the design of communication layers
similar to those used in multiprocessors, and
. second, it describes, in Section 3, the design of an
Active Messages prototype implementation on a
collection of Sun workstations interconnected by an
ATM network which yields application-to-applica-
tion latencies on the order of 20 µs.
The use of Active Messages in workstation clusters is
briefly contrasted to other approaches in Section 4 and
Section 5 concludes the paper.
2 Technical Issues
Collections of workstations have been used in many different
forms to run large applications. In order to establish
a basis for comparison to multiprocessors, this
paper limits itself to consider only collections of work-stations
(called clusters) which consist of a homogeneous
set of machines, dedicated to run parallel
applications, located in close proximity (such as in the
same machine room), and interconnected by an ATM
network. Such a cluster can be employed in a large variety
of settings. The cluster could simply provide high-performance
compute service for a user community to
run large parallel applications.
A more typical setting would be as computational
resource in a distributed application. One such example,
the Stormcast weather monitoring system in Norway,
runs on a very large collection of machines spread
across a large portion of the country, but uses a cluster
of a few dozen workstations in a machine room (without
high speed network in this case) to run compute-intensive
weather prediction models and to emit storm warnings.
1. A discussion of differences in fault isolation characteristics
is beyond the scope of this paper.
The availability of low-latency communication
among these workstations would enable the use of parallel
programming languages and of more powerful parallel
algorithms, both of which require a closer coupling
among processors than is possible today.
Concentrating on the compute cluster offers the largest
potential for improvement because the latency over the
long-haul links is dominated by speed-of-light and net-work
congestion issues and because the wide area communication
is comparatively better served by today's
distributed computing software. Note that this paper
does not argue that running concurrent applications in a
heterogeneous environment, across large distances, and
on workstations that happen to be sitting idle is not an
interesting design point (it in fact has been used success-
fully), but that the set of communication issues occurring
in such a context cannot be compared to those in a
multiprocessor.
Given that the applications for clusters considered here
exhibit characteristics similar to those on multiproces-
sors, the programming models used would be compara-
ble, if not identical, to those popular for parallel
computing. This includes various forms of message
passing (e.g., send/receive, PVM), of shared memory
(e.g., cache coherent shared memory, remote reads and
writes, explicit global memory), and of parallel object
oriented languages (e.g., numerous C++ extensions).
On parallel machines several proposed communication
architectures have achieved the low overheads, low
latencies, and high bandwidths that are required for high
performance implementations of the above programming
models. In particular, cache coherent shared memory,
remote reads and writes, and Active Messages offer
round-trip communication within a few hundred instruction
times, so that frequent communication on a fine
granularity (such as on an object by object or cache line
basis) remains compatible with high performance. In
these settings, the overhead of communication, that is,
the time spent by the processor initiating communica-
tion, is essentially the cost of pushing message data into
the network interface at the sending end and pulling it
out at the receiving end. Virtually no cycles are spent in
any protocol handling as all reliability and flow control
are handled in hardware. The operating system need not
be involved in every communication operation because
the network interface hardware can enforce protection
boundaries across the network.
The above communication architectures cannot be
moved in a straightforward manner from multiprocessors
to clusters of workstations with ATM networks
because of three major differences between the two:
ATM networks offer neither reliable delivery nor flow
control, ATM network interfaces provide no support for
protected user-level access to the network, and the
workstation operating systems do not coordinate process
scheduling or address translation globally. Coping
with these differences poses major technical challenges
and may eventually require the integration of some mul-
tiprocessor-specific features into the clusters. The following
three subsections present the nature of these
differences in more detail and discuss the resulting
issues.
2.1 Reliability and flow control in the network
In multiprocessor networks, flow control is implemented
in hardware on a link-by-link basis. Whenever
the input buffer of a router fills up, the output of the up-stream
router is disabled to prevent buffer overflow. The
flow control thus has the effect of blocking messages in
the network and eventually, as the back-pressure propa-
gates, the sending nodes are prevented from injecting
further messages. This mechanism guarantees that messages
are never dropped due to buffer space limitations
within the network or at the receiving end. In addition,
the electrical characteristics of the network are designed
to ensure very low error rates, such that the use of a simple
error detection and correction mechanism (imple-
mented in hardware) can offer the same reliability
within the network as is typical of the processing nodes
themselves.
In contrast, an ATM network does not provide any form
of flow control and does not offer reliable delivery.
Instead, higher protocol layers must detect cell loss or
corruption and cause their retransmission. While this
partitioning of responsibilities may be acceptable in the
case of stream-based communication (e.g., TCP/IP,
video, audio) it is questionable in a parallel computing
setting.
The flow control and the error detection and correction
in multiprocessor networks serve to cover four causes of
message loss: buffer overflow in the receiving software,
buffer overflow in the receiving network interface,
buffer overflow within the network, and message corruption
due to hardware errors. In an ATM network,
simple window based end-to-end flow control schemes
and a per-message CRC (as used in AAL-5) can cover
the first and last cases 1 of cell loss. In addition, preventing
buffer overflow in the receiving network interface
can be achieved by ensuring that the rate at which cells
can be moved from the interface into main memory is at
least as large as the maximal cell arrival rate. Preventing
buffer overflow within the network, however, is not
realistically possible using end-to-end flow control. This
is particularly a problem in a parallel computing setting
in which all nodes tend to communicate with all other
nodes in both highly regular and irregular patterns at
1. Although some transmission media may cause burst errors
which cannot be corrected by most CRC codes.
unpredictable intervals. The degree of contention within
the network therefore cannot be measured or predicted
with any accuracy by either the sender or the receiver
and communication patterns which result in high contention
will result in high cell loss rates causing extensive
retransmissions.
Traditional flow control schemes used in stream-based
communication avoid fruitless retransmission storms by
dynamically reducing the transmission rate on connections
which experience high cell loss rates. This works
in these settings because, following the law of large
numbers, contention in a wide area network does not
tend to vary instantaneously and therefore the degree of
contention observed in the recent past is a good predictor
for contention in the near future.
As an illustration of the difficulties in a parallel computing
setting, consider the implementation of a parallel
sort. The most efficient parallel sort algorithms [3] are
based on an alternation of local sorts on the nodes and
permutation phases in which all nodes exchange data
with all other nodes. These permutation phases serve to
move the elements to be sorted "towards" their correct
position. The communication patterns observed are
highly dynamic and their characteristics depend to a
large degree on the input data. If at any point the
attempted data rate into a given node exceeds the link
rate, then the output buffers at up-stream switches will
start filling up. Because the communication patterns
change very rapidly (essentially with every cell), it is
futile to attempt to predict contention, and given the all-
to-all communication pattern, the probability of internal
contention among seemingly unrelated connections is
high.
Beyond the problems caused by contention and the
resulting retransmissions, the lack of reliable delivery
guarantee in ATM networks imposes a certain overhead
on the communication primitives. Specifically, the
sender must keep a copy of each cell sent until a corresponding
acknowledgment is received, in case the cell
must be retransmitted. This means that messages cannot
be transferred directly between processor registers and
the network interface (as is possible on the CM-5 [12]),
rather, a memory copy must be made as well.
2.2 User-level access to the network interface
Recently, multiprocessor communication architectures
have achieved a significant reduction of the communication
overhead by eliminating the operating system from
the critical path. In order not to compromise security,
the network interface must offer some form of protection
mechanism. In shared memory models, the memory
management unit is extended to map remote memory
into the local virtual user address space such that the
operating system can enforce security by managing the
address translation tables. Message-based network interfaces
contain a node address translation table which
maps the user's virtual node numbers onto the physical
node address space. Again, the operating system
enforces security by controlling the address translation,
thereby preventing a process from sending a message to
an arbitrary node. The current generation of message
based network interfaces only control the destination
node address and therefore require that all processes of a
parallel program run at the same time. The next generation
adds the sending process id to each message allowing
the receiving network interface to discriminate
between messages destined for the currently running
process, that can retrieve these message directly, and
messages for dormant processes, which must be queued
(typically by the operating system) for later retrieval.
In contrast, the network interfaces available for workstations
do not yet incorporate any form of protection
mechanism. Instead, the operating system must be
involved in the sending and reception of every message.
The connection based nature of ATM networks would
principally allow the design of a protection mechanism
to limit the virtual circuits a user process has access to
(the operating system would still control virtual circuit
set-up). But because the architecture of the networking
layers in current operating systems does not seem to be
set-up to allow user-level network interface access, it
appears unlikely that network interfaces with these features
will become commonplace soon. The challenge in
any high-performance communication layer for clusters
is, thus, to minimize the path through the kernel by judiciously
coordinating the user-kernel interactions.
2.3 Coordination of system software across all communicating
nodes
In almost all communication architectures the message
reception logic is the critical performance bottleneck. In
order to be able to handle incoming messages at full net-work
bandwidth, the processing required for each arriving
message must be minimized carefully. The trick
used in multiprocessor systems to ensure rapid message
handling is to constrain the sender to only send messages
which are easy to handle.
In shared memory systems this is done by coordinating
the address translation tables among all processing
nodes such that the originating node can translate the
virtual memory address of a remote access and directly
place the corresponding physical memory address into
the message. The set of communication primitives is
small and fixed (e.g., read and write) and by forcing the
sender to perform the complicated part of a remote
memory access (namely the protection checks and the
address translation) the handling of a request is relatively
simple to implement 1 . If the virtual address were
sent, the receiving node could discover that the
requested virtual memory location had been paged out
to disk with the result that the handling of the message
would become rather involved.
In Active Messages on multiprocessors the scheduling
of processes is assumed to be coordinated among all
nodes such that communicating processes execute
simultaneously on their respective nodes. This guarantees
that messages can be handled immediately on
arrival by the destination process itself. In order to
accomplish this, the sender of an Active Message specifies
a user-level handler at the destination whose role it
is to extract the message from the network and integrate
it into the ongoing computation. The handler can also
implement a simple remote service and send a reply
Active Message back. However, in order to prevent
deadlock the communication patterns are limited to
requests and replies, e.g., a handler of a reply message is
not allowed to send any further messages. An implementation
of Active Messages typically reserves the first
word of each message for the handler address, and the
handler at the receiving end is dispatched immediately
on message arrival to dispose of the message. The fact
that the message layer can call upon the handlers to deal
with messages in FIFO order simplifies the buffering
considerably over that required by more traditional message
passing models such as PVM, MPI, or NX. These
models allow processes to consume messages in arbitrary
order and at arbitrary times forcing the communication
architecture to implement very general buffer and
message matching mechanisms at high cost.
In clusters the fact that the operating systems of the individual
nodes are not nearly as coordinated contradicts
the assumption that messages can always be consumed
quickly upon arrival. In the case of Active Messages the
destination process might have been suspended and cannot
run the handler, and in a shared memory model the
memory location requested might not be mapped.
Although exact coordination is not possible without
major changes to the operating system core, an implementation
of either communication model is likely to be
able to perform some coordination among nodes on its
own and to influence the local operating system accord-
ingly. This may allow the communication layer to
assume that in the common case everything works out
fine, but it must be able to handle the difficult cases as
well.
2.4 Summary
Even though superficially a cluster of workstations
appears to be technically comparable to a multiproces-
1. Cache coherent shared memory stretches this characterization
given that the cache in the receiving node essentially performs
another address translation which may miss and
require additional communication with other nodes to complete
the request.
sor, the reality is that key characteristics are different
and cause significant implementation difficulties: the
very comparable raw hardware link bandwidths, bisection
bandwidths, and routing latencies conceal the lack
in clusters of flow control, reliability, user-level network
access, and operating system coordination.
These shortcomings will inevitably result in lower communication
performance; their quantitative effect on
performance is evaluated in the next section which presents
a prototype implementation of Active Messages on
a cluster of Sun workstations. However, the lack of
flow-control in ATM networks poses a fundamental
problem: can catastrophic performance degradation
occur due to significant cell loss in particular communication
patterns?
3 SSAM: a SPARCstation Active Messages
Prototype
The SSAM prototype implements the critical parts of an
Active Messages communication architecture on a cluster
of SPARCstations connected by an ATM network.
The primary goal is to evaluate whether it is possible to
provide a parallel programming environment on the
cluster that is comparable to those found on multipro-
cessors. The prototype is primarily concerned with providing
performance at par with parallel machines, while
addressing the handicaps of ATM networks that have
been identified in the previous section. In particular:
. the prototype provides reliable communication to
evaluate the cost of performing the necessary flow-control
and error checking in software,
. it minimizes the kernel intervention to determine
the cost of providing protection in software, and
. the buffering is designed to tolerate arbitrary context
switching on the nodes.
At this time only a limited experimental set-up
(described below) is available such that the prototype
can provide information neither on how cell losses
due to contention within the network affect perfor-
mance, nor on how the scheduling of processes can be
coordinated to improve the overall performance of parallel
applications.
3.1 Active Messages Communication Architecture
The Active Messages communication architecture [4]
offers simple, general purpose communication primitives
as a thin veneer over the raw hardware. It is
intended to serve as a substrate for building libraries that
provide higher-level communication abstractions and
for generating communication code directly from a par-
allel-language compiler. Unlike most communication
layers, it is not intended for direct use by application
programmers and really provides lower-level services
from which communication libraries and run-time systems
can be built.
The basic communication primitive is a message with
an associated small amount of computation (in the form
of a handler) at the receiving end. Typically the first
word of an Active Message points to the handler for that
message. On message arrival, the computation on the
node is interrupted and the handler is executed. The role
of the handler is to get the message out of the network,
by integrating it into the ongoing computation and/or by
sending a reply message back. The buffering and scheduling
provided by Active Messages are extremely primitive
and thereby fast: the only buffering is that involved
in actual transport and the only scheduling is that
required to activate the handler. This is sufficient to support
many higher-level abstractions and more general
buffering and scheduling can be easily constructed in
layers above Active Messages when needed. This minimalist
approach avoids paying a performance penalty
for unneeded functionality.
In order to prevent deadlock and livelock, Active Message
restricts communication patterns to requests and
replies, i.e., the handler of a request message is only
allowed to send a reply message and a reply handler is
not allowed to send further replies.
3.1.1 SSAM functionality
The current implementation is geared towards the sending
of small messages which fit into the payload of a
single ATM cell. Eight of the 48 available bytes of payload
in an ATM cell are used by SSAM to hold flow-control
information (16 bits), the handler address (32
bits), and an AAL3/4 compatible checksum (16 bits).
The remaining 40 bytes hold the Active Message data.
The C header file for the interface to SSAM is shown in
Figure 1. To send a request Active Message, the user
places the message data into a per-connection buffer
provided by SSAM and calls SSAM_10 with a connection
identifier and the remote handler address.
SSAM_10 adds the flow-control information and traps
to the kernel to have the message injected into the net-
work. It also polls the receiver and processes incoming
messages. At the receiving end, the network is polled by
SSAM_10 or SSAM_poll (the latter only polls the net-
work) and all messages accumulated in the receive
FIFO are moved into a buffer. SSAM then calls the
appropriate handler for each message, passing as arguments
the originating connection identifier, the address
of the buffer holding the message, and the address of a
buffer for a reply message. The handler processes the
message and may send a reply message back by placing
the data in the buffer provided and returning the address
of the reply handler (or NULL if no reply is to be sent).
The current prototype does not use interrupts, instead,
the network is polled every time a message is sent. This
means that as long as a process is sending messages it
will also handle incoming ones. An explicit polling
function is provided for program parts which do not
communicate. Using interrupts is planned but not implemented
yet.
3.1.2 Example: implementing a remote read with SSAM
The sample implementation of a split-phase remote double-word
read is shown in Figure 2. The readDouble
function increments a counter of outstanding reads, formats
a request Active Message with the address to be
read as well as information for the reply, and sends the
message. The readDouble_h handler fetches the
remote location and sends a reply back to the read-
Double_rh reply handler which stores the data into
memory and decrements the counter. The originating
processor waits for the completion of the read by busy-waiting
on the counter at the end of readDouble. A
split-phase read could be constructed easily by exposing
the counter to the caller, who could proceed with computation
after initiating the read and only wait on the
counter when the data is required.
3.2 Experimental set-up
The experimental set-up used to evaluate the performance
of the prototype SSAM implementation consists
of a 60 MHz SPARCstation-20 and a 25 MHz SPARCstation-1+,
running SunOS 4.1. The two machines are connected
via Fore Systems SBA-100 ATM interfaces using
a 140Mb/s TAXI fiber. The interfaces are located on the
Sbus (a 32-bit I/O bus running at 20 or 25Mhz) and pro-
Figure 1: C interface for SPARCstation Active Messages
/* SPARCstation ATM Active Messages */
/* Initialize Active Messages */
extern int SSAM_init(void);
/* Active Message handlers */
typedef void (*SSAM_reply_handler)(int connection, void *in_buf);
typedef SSAM_reply_handler (*SSAM_req_handler)(int connection,
void *in_buf, void *reply_buf);
/* Buffers to send messages */
#define SSAM_MAXCONN (32)
extern void *SSAM_reqbuf[SSAM_MAXCONN];
extern void SSAM_10(int connection,
SSAM_req_handler handler);
/* Poll the network explicitly */
extern void SSAM_poll(void);
vide a 36-cell deep output FIFO as well as a 292-cell
input FIFO. To send a cell the processor stores 56 bytes
into the memory-mapped output FIFO and to receive a
cell it reads 56 bytes from the input FIFO. A register in
the interface indicates the number of cells available in
the input FIFO.
Figure 2: Sample remote read implementation using SSAM

/* Remote read of 32 bytes */
static volatile int read_cnt = 0;

typedef struct {
    double *src, *dest;
    double data[4];
} read32_msg;

static SSAM_reply_handler read32_h(int conn, read32_msg *in, read32_msg *out);
static void read32_rh(int conn, read32_msg *in);

/* Read 32 bytes from remote node */
void read32(int conn, double *src, double *dest)
{
    read32_msg *out = (read32_msg *)SSAM_reqbuf[conn];
    out->src  = src;
    out->dest = dest;
    read_cnt++;
    SSAM_10(conn, (SSAM_req_handler)read32_h);
    while (read_cnt > 0)          /* busy-wait for the reply */
        SSAM_poll();
}

/* Read request handler */
static SSAM_reply_handler
read32_h(int conn, read32_msg *in, read32_msg *out)
{
    double *src = in->src;
    out->dest = in->dest;
    if (((unsigned long)src & 0x7) == 0) {
        out->data[0] = src[0]; out->data[1] = src[1];
        out->data[2] = src[2]; out->data[3] = src[3];
    } else {
        /* non double-word aligned code omitted */
    }
    return (SSAM_reply_handler)read32_rh;
}

/* Read reply handler */
static void
read32_rh(int conn, read32_msg *in)
{
    double *dest = in->dest;
    if (((unsigned long)dest & 0x7) == 0) {
        dest[0] = in->data[0]; dest[1] = in->data[1];
        dest[2] = in->data[2]; dest[3] = in->data[3];
    } else {
        /* non double-word aligned code omitted */
    }
    read_cnt--;
}
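Building on the declarations of Figures 1 and 2, the split-phase variant suggested in the text
could be obtained by separating initiation from completion; this fragment is not part of the
original SSAM interface, and read32_start and read32_wait are illustrative names.

/* Initiate the read without waiting; returns as soon as the request is sent. */
void read32_start(int conn, double *src, double *dest)
{
    read32_msg *out = (read32_msg *)SSAM_reqbuf[conn];
    out->src  = src;
    out->dest = dest;
    read_cnt++;
    SSAM_10(conn, (SSAM_req_handler)read32_h);
}

/* Complete all outstanding reads; called only when the data is needed. */
void read32_wait(void)
{
    while (read_cnt > 0)
        SSAM_poll();
}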
Note that the network interface used is much simpler
and closer to multiprocessor NIs than most second-generation
ATM interfaces available today. The only function
performed in hardware, beyond simply moving
cells onto/off the fiber, is checksum generation and
checking for the ATM header and an AAL3/4 compatible
payload. In particular, no DMA or segmentation and
reassembly of multi-cell packets is provided.
3.3 SSAM implementation
The implementation of the SPARCstation ATM Active
Messages layer consists of two parts: a device driver
which is dynamically loaded into the kernel and a user-level
library to be linked with applications using SSAM.
The driver implements standard functionality to open
and close the ATM device and it provides two paths to
send and receive cells. The fast path described here consists
of three trap instructions which lead directly to
code for sending and receiving individual ATM cells.
The traps are relatively generic and all functionality specific
to Active Messages is in the user-level library
which also performs the flow-control and buffer man-
agement. A conventional read/write system call interface
is provided for comparison purposes and allows to
send and receive cells using a "pure" device driver
approach.
The traps to send and receive cells consist of carefully
crafted assembly language routines. Each routine is
small (28 and 43 instructions for the send and receive
traps, respectively) and uses available registers care-
fully. The register usage is simplified by the Sparc archi-
tecture's use of a circular register file, which provides a
clean 8-register window for the trap. By interfacing
from the program to the traps via a function call, arguments
can be passed and another 8 registers become
available to the trap.
The following paragraphs describe the critical parts of
the SSAM implementation in more detail.
3.3.1 Flow-control
A simple sliding window flow control scheme is used to
prevent overrun of the receive buffers and to detect cell
losses. The window size is dimensioned to allow close
to full bandwidth communication among pairs of processors
In order to implement the flow control for a window of
size w, each process pre-allocates memory to hold 4w
cells per every other process with which it communi-
cates. The algorithm to send a request message polls the
receiver until a free window slot is available and then
injects the cell into the network, saving it in one of the
buffers as well in case it has to be retransmitted. Upon
receipt of a request message, the message layer moves
the cell into a buffer and, as soon as the corresponding
process is running, calls the Active Message handler. If
the handler issues a reply, it is sent and a copy is held in
a buffer. If the handler does not generate a reply, an
explicit acknowledgment is sent. Upon receipt of the
reply or acknowledgment, the buffer holding the original
request message can be reused. Note how the distinction
between requests and replies made in Active
Messages allows acknowledgments to be piggy-backed
onto replies.
The recovery scheme used in case of lost or duplicate
cells is standard, except that the reception of duplicate
request messages may indicate lost replies which have
to be retransmitted. It is important to realize that this
flow control mechanism does not really attempt to minimize
message losses due to congestion within the net-
work: the lack of flow-control in ATM networks
effectively precludes any simple congestion avoidance
scheme. Until larger test-beds become available and the
ATM community agrees on how routers should handle
buffer overflows it seems futile to invest in more sophisticated
flow-control mechanisms. Nevertheless, the
bursty nature of parallel computing communication patterns
are likely to require some solution before the performance
characteristics of an ATM network become as
robust as those of multiprocessor networks.
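To make the bookkeeping concrete, the following is a minimal, self-contained sketch of the
sender-side request window described above. The structure layout, names, and slot sizes are
illustrative assumptions, not the actual SSAM data structures.

#include <string.h>

#define W        32                  /* flow-control window size            */
#define CELL_SZ  56                  /* one cell as stored by the interface */

/* Sender-side state for one connection: a copy of every request that has
   not been acknowledged yet is kept so it can be retransmitted on loss.   */
typedef struct {
    unsigned int  next_seq;           /* sequence number of the next request */
    unsigned int  acked;              /* all requests below this are acked   */
    unsigned char saved[W][CELL_SZ];  /* copies of outstanding requests      */
} conn_state;

/* A request may only be injected if a window slot is free. */
static int window_can_send(const conn_state *c)
{
    return c->next_seq - c->acked < W;
}

/* Save a copy of an outgoing request for possible retransmission. */
static void window_record_send(conn_state *c, const unsigned char *cell)
{
    memcpy(c->saved[c->next_seq % W], cell, CELL_SZ);
    c->next_seq++;
}

/* A reply or explicit acknowledgment frees buffers cumulatively. */
static void window_ack(conn_state *c, unsigned int seq)
{
    if (seq >= c->acked && seq < c->next_seq)
        c->acked = seq + 1;
}

int main(void)
{
    static conn_state c;                     /* zero-initialized            */
    unsigned char cell[CELL_SZ] = { 0 };
    while (window_can_send(&c))
        window_record_send(&c, cell);        /* fill the whole window       */
    window_ack(&c, 0);                       /* first request acknowledged  */
    return window_can_send(&c) ? 0 : 1;      /* one slot is free again      */
}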
3.3.2 User-kernel interface and buffer management
The streamlining of the user-kernel interface is the most
important factor contributing to the performance of
SSAM. In the prototype, the kernel preallocates all buffers
for a process when the device is opened. The pages
are then pinned to prevent page-outs and are mapped
(using mmap) into the processes' address space. After
every message send, the user-level library chooses a
buffer for the next message and places a pointer in an
exported variable. The application program moves the
message data into that buffer and passes the connection
id and the handler address to SSAM which finishes formatting
the cell (adding the flow control and handler)
and traps to the kernel. The trap passes the message off-set
within the buffer area and the connection id in registers
to the kernel. Protection is ensured with simple
masks to limit the connection id and offset ranges. A
lookup maps the current process and connection ids to a
virtual circuit. The kernel finally moves the cell into the
output FIFO.
At the receiving end, the read-ATM kernel trap delivers
a batch of incoming cells into a pre-determined shared
memory buffer. The number of cells received is returned
in a register. For each cell the kernel performs four
tasks: it loads the first half of the cell into registers, uses
the VCI to index into a table to obtain the address of the
appropriate processes' input buffer, moves the full cell
into that buffer, and checks the integrity of the cell using
three flag bits set by the NI in the last byte. Upon return
from the trap, the SSAM library loops through all
received cells checking the flow-control information,
calling the appropriate handlers for request and reply
messages, and sending explicit acknowledgments when
needed.
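The protection checks on the send path can be pictured as follows. This is a sketch only: the
constants (buffer-area size, slot size, process-table bound) are assumptions, and an ordinary C
function stands in for the kernel trap, which really operates on register arguments.

#define SSAM_MAXCONN   32            /* from the SSAM header               */
#define NPROC          64            /* size of the per-process VCI table  */
#define BUF_AREA_SIZE  8192          /* pinned buffer area per process     */
#define CELL_SLOT      64            /* assumed size of one buffer slot    */

static int vci_table[NPROC][SSAM_MAXCONN];   /* set up when the device is opened */

/* Kernel-side handling of the send trap: mask (rather than branch on) the
   user-supplied connection id and buffer offset, then map them to a VCI. */
int ssam_send_trap(int pid, unsigned int conn, unsigned int offset)
{
    conn   &= SSAM_MAXCONN - 1;                        /* limit connection id  */
    offset &= (BUF_AREA_SIZE - 1) & ~(CELL_SLOT - 1);  /* in range and aligned */

    int vci = vci_table[pid % NPROC][conn];            /* (process, conn) -> VCI */
    if (vci <= 0)
        return -1;                                     /* connection not open    */

    /* ... copy the 56-byte cell at 'offset' into the output FIFO for vci ... */
    return 0;
}

int main(void)
{
    vci_table[1][3] = 42;                     /* pretend open() installed a VCI */
    return ssam_send_trap(1, 3, 128) == 0 ? 0 : 1;
}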
3.4 SSAM performance
The following paragraphs describe performance measurements
of SSAM made with a number of synthetic
benchmarks. The following terminology is used: overhead
consists of the processor cycles spent preparing to
send or receive a message, latency is the time from
which a message send routine is called to the time the
message is handled at the remote end, and bandwidth is
the rate at which user data is transferred. The performance
goal for SSAM is the fiber rate of 140Mbit/s
which transmits a cell every 3.14 µs (53+2 bytes) for an
ATM payload bandwidth of 15.2MB/s 1 .
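For reference, these figures follow directly from the cell format: (53+2) bytes take
(55 x 8 bits) / 140 Mbit/s = 3.14 µs to transmit, and 48 payload bytes every 3.14 µs
correspond to 15.2 MB/s.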
3.4.1 ATM traps
A detailed cost breakdown for the operations occurring
in each of the traps to send and receive cells is shown in
Table 1. The two timing columns refer to measurements
taken on the SPARCstation 1+ and on the
SPARCstation 20, respectively. The times have been
obtained by measuring repeated executions of each trap
with gettimeofday which uses a microsecond-accu-
rate clock and takes 9.5-s on the SS-20. The time break-down
for each trap was measured by commenting
appropriate instructions out and is somewhat approximate
due to the pipeline overlap occurring between successive
instructions.
The write trap cost is broken down into 5 parts: the cost
of the trap and return, the protection checks, overhead
for fetching addresses, loading the cell into registers,
and pushing the cell into the network interface. The SS-
show clearly that the fiber can be saturated
by sending a cell at a time from user level. It also indicates
that the majority of the cost (75%) lies in the
access to the network interface across the Sbus. The cost
of the trap itself is surprisingly low, even though it is the
second largest item. In fact, it could be reduced slightly
as the current implementation adds a level of indirection
in the trap dispatch to simplify the dynamic loading of
the device driver. 2
The read trap is itemized similarly: the cost to trap and
return, fetching the device register with the count of
available cells, additional overhead for setting-up
addresses, loading the cell from the network interface,
1. All bandwidths are measured in megabytes per second.
2. The kernel write-protects the trap vectors after boot-up. The
SSAM prototype uses a permanently loaded trap which performs
an indirect jump via a kernel variable to allow simple
dynamic driver loading.
demultiplexing among processes, and storing the cell
away. The total cost shows a trap which receives a single
cell, as well as the per-cell cost for a trap which
receives cells. Here again the access to the device
dominates due to the fact that each double-word load
incurs the full latency of an Sbus access. The total time
of 4.61 µs on the SS-20 falls short of the fiber's cell time
and will limit the achievable bandwidth to at most 68%
of the fiber.
The write-read trap first sends a cell and then receives a
chunk of cells. This amortizes the cost of the trap across
both functions and overlaps checking the cell count
slightly with sending. The last item in the table shows
the cost of a null system call for comparison purposes (a
write to file descriptor -1 was used). It is clear that a system
call approach would yield performance far inferior
to the traps and would achieve only a fraction of the
fiber bandwidth.
3.4.2 ATM read/write system calls
In addition to the direct traps, the device driver allows
cells to be sent and received using traditional read and
write system calls on the device file descriptor. At this
time this conventional path is provided for comparison
purposes only and the read and write entry points into
the device driver are limited to sending and receiving
single cells. Multi-cell reads and writes could be supported
easily. The read and write entry points perform
the following operations:
. check for the appropriateness of the file descriptor,
. transfer data between user space and an internal
buffer using uiomove, and
. transfer data between the internal buffer and the
FIFOs of the network interface.
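A hypothetical application-level use of this conventional path, with one 56-byte cell per call,
might look as follows; the device name is made up for the example.

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    unsigned char cell[56] = { 0 };          /* 53-byte cell padded to 56       */
    int fd = open("/dev/sba0", O_RDWR);      /* illustrative device name        */
    if (fd < 0)
        return 1;
    ssize_t n = write(fd, cell, sizeof cell);   /* inject one cell              */
    if (n == (ssize_t)sizeof cell)
        n = read(fd, cell, sizeof cell);        /* receive (or poll for) a cell */
    close(fd);
    return n < 0;
}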
The internal buffer is used because the data cannot be
transferred directly between user space and the device
using uiomove due to the fact that the device FIFOs
are only word addressable. The use of an internal buffer
also allows double-word accesses to the device FIFOs,
which improves the access times considerably.
Table 2 shows the costs for the various parts of the read
and write system calls. The "syscall overhead" entries
reflect the time taken for a read (respectively write) system
call with an empty read (write) device driver rou-
tine. This measures the kernel overhead associated with
these system calls. The "check fd, do uiomove" entry
reflects the time spent in checking the validity of the file
descriptor and performing the uiomove. In the case of
a read, it also includes the time to check the device register
holding the number of cells available in the input
FIFO. The "push/pull cell" entries reflect the time spent
to transfer the contents of one cell between the internal
buffer and the device FIFOs. The "write" and
"read 1 cell" totals reflect the cost of the full system call,
while the "read 0 cells" entry is the time taken for an
unsuccessful poll which includes the system call over-
Table 2: Cost of sending and receiving cells using read and write
system calls.

Operation                   SS-20     SS-1+
write system call
  syscall overhead          22.6 µs   100 µs
  check fd, do uiomove       3.4 µs    16 µs
  push cell into NI          2.2 µs     8 µs
  write total               28.2 µs   124 µs
read system call
  syscall overhead          22.1 µs    99 µs
  pull cell from NI          5.0 µs    13 µs
  check fd and recv ready,
    do uiomove               7.0 µs    25 µs
  read total for 1 cell     34.1 µs   137 µs
  read total for 0 cells    28.8 µs   113 µs
Table 1: Cost breakdown for traps to send and receive cells.

Operation                      SS-20      SS-1+
write trap
  trap+rett                    0.44 µs    2.03 µs
  check pid and connection id
  addt'l kernel ovhd           0.05 µs    0.50 µs
  load cell to push            0.13 µs    3.87 µs
  push cell to NI              2.05 µs    3.17 µs
  total                        2.72 µs   10.11 µs
read trap
  trap+rett                    0.44 µs    2.03 µs
  check cell count             0.81 µs    1.08 µs
  addt'l kernel ovhd           0.18 µs    0.80 µs
  per cell pull from NI        4.27 µs    3.68 µs
  per cell demux               0.09 µs    0.23 µs
  per cell store away          0.17 µs    3.50 µs
  total for 1 cell             5.87 µs   11.32 µs
  per cell total for 16 cells  4.61 µs
write-read trap
  total, 0 cells read          3.7 µs    11.2 µs
  total, 1 cell read           8.2 µs    21.4 µs
null system call               6.9 µs    40 µs
head, the file descriptor checks, and the reading of the
receive-ready register.
The timings show clearly that the overhead of the read/
write system call interface is prohibitive for small mes-
sages. For larger messages, however, it may well be a
viable choice and it is more portable than the traps.
3.4.3 SSAM
Measurements of the Active Messages layer built on the
cell send and receive traps are shown in Table 3. In all
cases one word of the Active Message payload carries
data and the handlers simply return. The send request
uses a write-read-trap and adds a little over 1-s of overhead
(on the SS-20) for cell formatting and flow-control.
The handling times are all roughly the cost of a read-
trap (reading 16 cells per trap) plus again a little
over 1-s for the flow control and handler dispatch. If a
reply is sent that adds the time of a write-trap.
The measurements show that supporting only single-cell
Active Messages is not optimal. Longer messages are
required to achieve peak bulk transfer rates: the one-
cell-at-a-time prototype can yield up to 5.6MB/s. A simpler
interface for shorter messages (e.g., with only
bytes of payload) might well be useful as well to accelerate
the small requests and acknowledgments that are
often found in higher-level protocols. Unfortunately,
given that the trap cost is dominated by the network
interface access time and that the SBA-100 requires
all 56 bytes of a cell to be transferred by the processor, it
is unlikely that a significant benefit can be realized.
3.4.4 Split-C
While a full implementation of Split-C [2] is still in
progress, timings of the remote memory access primitives
show that the round-trip time for a remote read
of aligned bytes takes 32-s on the SS-
20 and a one-way remote store takes 22-s for the same
payload. 1 Remote accesses with smaller payloads are
not noticeably cheaper. A bulk write implemented with
the current SSAM layer transfers 5.5Mbytes/s, but
1. Note that in a more realistic setting a Fore ASX-100 switch
will add roughly 10-s of latency to the write time and 20-s
to the round-trip read time [7].
Table 3: Cost breakdown for SPARCstation Active Messages.

Operation                        SS-20    SS-1+
send request                     5.0 µs   15 µs
handle request, no reply sent    5.6 µs   15 µs
handle request and send reply    7.7 µs   25 µs
handle ack                       5.0 µs   11 µs
handle reply                     5.2 µs   12 µs
experiments show that, using long messages, this could
be improved to 9Mbytes/s by using the full ATM payload
and simplifying the handling slightly.
3.5 Unresolved issues
The current SSAM prototype has no influence on the
process scheduling. Given the current buffering
scheme the SSAM layer operation is not influenced by
which process is running. The performance of applica-
tions, however, is likely to be highly influenced by the
scheduling. How to best influence the scheduler in a
semi-portable fashion requires further investigation. The
most promising approach appears to be to use real-time
thread scheduling priorities, such as are available in
Solaris 2.
The amount of memory allocated by the SSAM prototype
is somewhat excessive and, in fact, for simplicity,
the current prototype uses twice as many buffers as
strictly necessary. For example, assuming that a flow-control
window of 32 cells is used, the kernel allocates
and pins 8Kbytes of memory per process per connec-
tion. On a 64-node cluster with 10 parallel applications
running, this represents 5Mb of memory per processor.
The number of preallocated buffers could be reduced
without affecting peak bulk transfer rates by adjusting
the flow control window size dynamically. The idea is
that the first cell of a long message contain a flag which
requests a larger window size from the receiver; a few
extra buffers would be allocated for this purpose. The
receiver grants the larger window to one sender at a time
using the first acknowledgment cell of the bulk transfer.
The larger window size remains in effect until the end of
the long message. This scheme has two benefits: the
request for a larger window is overlapped with the first
few cells of the long message, and the receiver can prevent
too many senders from transferring large data
blocks simultaneously, which would be sub-optimal for
the cache. However, fundamentally, it appears that
memory (or, alternatively, low performance) is the price
to pay for having neither flow-control in the network nor
coordinated process scheduling.
A more subtle problem having to do with the ATM payload
alignment used by the SBA-100 interface will surface
in the future: the 53 bytes of an ATM cell are
padded by the SBA-100 to 56 bytes and the 48-byte
payload starts with the 6th byte, i.e., it is only half-word
aligned. The effect is that bulk transfer payload formats
designed with the SBA-100 in mind (and supporting
double-word moves of data between memory and the interface) will clash with other network interfaces which double-word align the ATM payload.
3.6 Summary
The prototype Active Messages implementation on a
SPARCstation ATM cluster provides a preliminary demonstration
that this communication architecture developed
for multiprocessors can be adapted to the
peculiarities of the workstation cluster. The performance
achieved is roughly comparable to that of a multiprocessor
such as the CM-5 (where the one-way latency is
roughly 6 µs), but it is clear that without a network interface
closer to the processor the performance gap cannot
be closed.
The time taken by the flow-control and protection in
software is surprisingly low (at least in comparison with
the network interface access times). The cost, in effect,
has been shifted to large pre-allocated and pinned buff-
ers. While the prototype's memory usage is somewhat
excessive, other schemes with comparable performance
will also require large buffers.
The achieved speed comes from a careful integration of all layers, from the language level to the kernel traps. The key issues are avoiding copies, by having the application place the data directly where the kernel picks it up to move it into the device, and passing only easy-to-check information to the kernel (in particular, not passing an arbitrary virtual address).
4 Comparison to other approaches
The ATM network communication layer most directly
comparable to SSAM is the remote memory access
model proposed by Thekkath et al. [10,11]. The implementation
is very similar to SSAM in that it uses traps
for reserved opcodes in the MIPS instruction set to
implement remote read and write instructions. 1
The major difference between the two models is that the
remote memory operations separate data and control
transfer while Active Messages unifies them. With
remote memory accesses data can be transferred to user
memory by the kernel without the corresponding process
having to run. But the model used does not allow
remote reads and writes to the full address space of a
process. Rather, each communicating process must allocate
special communication memory segments which
are pinned by the operating system just as the buffers
used by SSAM are. The communication segments are
more flexible than SSAM's buffers in that they can
directly hold data structures (limited by the fact that the
segments are pinned).
The advantage of SSAM over the remote memory
accesses is the coupling of data and control: each message
causes a small amount of user code to be executed,
1. One could easily describe the traps employed by SSAM as
additional emulated communication instructions.
which allows data to be scattered into complex data
structures and the scheduling of computation to be
directly influenced by the arrival of data. In the remote
memory access model a limited control transfer is
offered through per-segment notification flags in order
to cause a file descriptor to become ready.
Finally, SSAM provides a reliable transport mechanism
while the remote memory access primitives are unreliable
and do not provide flow-control.
Table
4 compares the performance of the two
approaches: Thekkath's implementation uses two DECstation
5000s interconnected by a Turbochannel version
of the same Fore-100 ATM interface used for SSAM
and performs a little worse than SSAM for data transfer
and significantly worse for control transfer. The remote
reads and writes are directly comparable in that they
transfer the same payload per cell.
The performance of more traditional communication
layers over an ATM network has been evaluated by Lin
et al. [7] and shows over two orders of magnitude
higher communication latencies than SSAM offers.
Table
5 summarizes the best round-trip latencies and
one-way bandwidths attained on Sun 4/690's and
SPARCstation 2's connected by Fore SBA-100 interfaces
without switch. The millisecond scale reflects the
costs of the traditional networking architecture used by
these layers, although it is not clear why Fore's AAL/5
API is slower than the read/write system call interface
described in Section 3.4.2. Note that a TCP/IP implementation
with a well-optimized fast-path should yield sub-millisecond
latencies.
Table 4: Comparison of SSAM to Remote Memory Accesses between 2 DECstation 5000s over ATM [11].

Operation                      SSAM       Remote mem access
read latency                   32 µs      45 µs
write latency                  22 µs      30 µs
addt'l control transfer ovhd   none       260 µs
block write                    5.5 MB/s   4.4 MB/s
Table 5: Performance of traditional communication layers on Sun 4/690s and SPARCstation 2s over ATM [7].

Communication layer     Round-trip latency   Peak bandwidth
Fore AAL/5 API          1.7 ms               4 MB/s
BSD TCP/IP Sockets      3.9 ms               2 MB/s
PVM over TCP/IP         5.4 ms               1.5 MB/s
Sun RPC                 3.9 ms               1.6 MB/s
5 Conclusions
The emergence of high-bandwidth low-latency networks
is making the use of clusters of workstations
attractive for parallel computing style applications.
From a technical point of view a continuous spectrum of
systems can be conceived, ranging from collections of
Ethernet-based workstations to tightly integrated custom
multiprocessors. However, this paper argues that
clusters will be characterized by the use of off-the-shelf
components, which will handicap them with respect to
multiprocessors in which hardware and software are
customized to allow a tighter integration of the network
into the overall architecture.
The use of standard components, and in particular, of
ATM networking technology, results in three major disadvantages
of clusters with respect to multiprocessors:
(i) ATM networks do not offer reliable delivery or flow
control, (ii) the current network interfaces are not well
integrated into the workstation architecture, and (iii) the
operating systems on the nodes of a cluster do not coordinate
process scheduling or address translations.
The prototype implementation of the Active Messages
communication model described in this paper achieves
two orders of magnitude better performance than traditional
networking layers. Table 6 shows that the resulting
communication latencies and bandwidths are in the
same ball-park as on state-of-the-art multiprocessors.
Key to the success are the use of large memory buffers
and the careful design of a lean user-kernel interface.
The major obstacle towards closing the remaining performance
gap is the slow access to the network interface
across the I/O bus, and reducing the buffer memory
usage requires coordination of process scheduling
across nodes. While taking care of flow control in software
does not dominate performance in this study, the
behavior of ATM networks under parallel computing
communication loads remains an open question.
Table 6: Comparison of SSAM's performance with that of recent parallel machines.

Machine            Peak bandwidth   Round-trip latency
Paragon
Active Mesg [4]    10 MB/s          12 µs
cluster
--R
Fast Parallel Sorting: from LogP to Split-C
Active Messages: A Mechanism for Integrated Communication and Computation.
PVM 3.0 User's Guide and Reference Manual.
Memory Coherence in Shared Virtual Memory Systems.
Distributed Network Computing over Local ATM Networks.
The Paragon Implementation of the NX Message Passing Interface.
The SP1 High-Performance Switch
Efficient Support for Multicomputing on ATM Networks.
Separating Data and Control Transfer in Distributed Operating Systems.
--TR
--CTR
Jarek Nieplocha , Robert Harrison, Shared Memory Programming in Metacomputing Environments: The Global Array Approach, The Journal of Supercomputing, v.11 n.2, p.119-136, Oct. 1997
David E. Culler , Lok Tin Liu , Richard P. Martin , Chad O. Yoshikawa, Assessing Fast Network Interfaces, IEEE Micro, v.16 n.1, p.35-43, February 1996
Thomas E. Anderson , David E. Culler , David A. Patterson , and the NOW team, A Case for NOW (Networks of Workstations), IEEE Micro, v.15 n.1, p.54-64, February 1995
Boris Roussev , Jie Wu, Distributed computing using Java: a comparison of two server designs, Journal of Systems Architecture: the EUROMICRO Journal, v.52 n.7, p.432-440, July 2006
Takashi Matsumoto , Kei Hiraki, MBCF: a protected and virtualized high-speed user-level memory-based communication facility, Proceedings of the 12th international conference on Supercomputing, p.259-266, July 1998, Melbourne, Australia
Y. Huang , C. C. Huang , P. K. McKinley, Multicast virtual topologies for collective communication in MPCs and ATM clusters, Proceedings of the 1995 ACM/IEEE conference on Supercomputing (CDROM), p.9-es, December 04-08, 1995, San Diego, California, United States
Dawson R. Engler , M. Frans Kaashoek, DPF: fast, flexible message demultiplexing using dynamic code generation, ACM SIGCOMM Computer Communication Review, v.26 n.4, p.53-59, Oct. 1996
T. von Eicken , A. Basu , V. Buch , W. Vogels, U-Net: a user-level network interface for parallel and distributed computing (includes URL), ACM SIGOPS Operating Systems Review, v.29 n.5, p.40-53, Dec. 3, 1995
Scott Pakin , Mario Lauria , Andrew Chien, High performance messaging on workstations: Illinois fast messages (FM) for Myrinet, Proceedings of the 1995 ACM/IEEE conference on Supercomputing (CDROM), p.55-es, December 04-08, 1995, San Diego, California, United States
Thomas Sterling , Daniel Savaresse , Peter MacNeice , Kevin Olson , Clark Mobarry , Bruce Fryxell , Phillip Merkey, A Performance Evaluation of the Convex SPP-1000 Scalable Shared Memory Parallel Computer, Proceedings of the 1995 ACM/IEEE conference on Supercomputing (CDROM), p.55, December 04-08, 1995, San Diego, California, United States
WooYoung Kim , Gul Agha, Efficient support of location transparency in concurrent object-oriented programming languages, Proceedings of the 1995 ACM/IEEE conference on Supercomputing (CDROM), p.39-es, December 04-08, 1995, San Diego, California, United States
K. L. Johnson , M. F. Kaashoek , D. A. Wallach, CRL: high-performance all-software distributed shared memory, ACM SIGOPS Operating Systems Review, v.29 n.5, p.213-226, Dec. 3, 1995 | message passing;communication performance;communication layer;network of workstations;parallel computing |
623929 | A Constant-Time Parallel Sorting Algorithm and Its Optical Implementation. | Sorting is a fundamental operation that has important implications in a vast number of areas. For instance, sorting is heavily utilized in applications such as database machines, where hashing techniques are used to accelerate data processing algorithms. It is also the basis for interprocessor message routing and has strong implications in video telecommunications. However, high- speed electronic sorting networks are difficult to implement with VLSI technology because of the dense, global connectivity required. Optics eliminates this bottleneck by offering global interconnects, massive parallelism, and noninterfering communications. We present a parallel sorting algorithm and its efficient optical implementation using currently available optical hardware. The algorithm sorts n data elements in few steps, independent of the number of elements to be sorted. Thus, it is a constant-time sorting algorithm (i.e. O(1) time). This is a significant performance improvement over state-of-the-art electronic sorting systems where the fastest sorting algorithm for n elements takes O(log n) but requires O(n2) processors. We provide the detailed description of an optical system for generating the rank of a data set and physically reordering it. This is evidence that problems considered "solved" using conventional approaches need to be reconsidered so that the benefits of optics can be properly utilized to obtain new, faster solutions to old problems. | Introduction
Sorting is a basic, fundamental operation used for many symbolic, numeric, and artificial intelligence
(AI) tasks. Some of the applications of sorting include the "togetherness" problem,
i.e., the problem of bringing all identical items in a list or file together. Sorting can also be
used for matching problems, for example, if one is trying to find all matching entries of two
files. If both files are first sorted, then all matching entries can be found in one pass through
the data. A sort can also be used in database and knowledgebase processing. Sorting algorithms
can serve as a basis for performing many common and highly useful operations such
as selection, projection, division, and join in the context of relational databases, and inter-
section, union, difference, and cartesian product in the context of sets. Sorting can be used
to simplify searching. In addition to its widespread use in information processing, sorting is
also important in communications where it serves as the basis for packet routing in networks.
Because of its importance, there has been a great deal of work on developing and analyzing
sorting algorithms and architectures. In general, a sort on a string of n data elements can be
done with O(n log_2 n) comparisons using the best serial algorithms [1]. Using a conventional sorting network and taking advantage of parallelism as much as possible permits sorting to be done in O(log_2^2 n) steps. Optical architectures for sorting are worth developing for
two reasons: (1) less conventional architectures may permit a sorting operation to be done
in far fewer steps, and (2) the fastest conventional sorting networks seem to require fairly
dense, globally connected networks that are difficult to implement with conventional electronic
technology alone. An example of the latter is a sorting network based on Batcher's
bitonic sort [2], which for a string of 2^k data elements requires k(k+1)/2 stages. Each stage consists
of Fast Fourier Transform (FFT)-like butterfly interconnections of varying sizes.
Optical technology with its inherent parallelism, high spatial and temporal bandwidths,
and non-interfering communications is well suited for implementing the sorting operation.
In particular, optics is very attractive for sorting because of its ability to process two-dimensional
data arrays in parallel. A two-dimensional optical system has an extremely
high number of individual communication channels, all operating in parallel. Besides this
parallelism, optical interconnects permit the efficient and high-speed implementation of the
global connection patterns required for sorting algorithms. Most importantly, optical systems
permit the implementation of sorting with an execution time independent of the number of
data elements to be sorted (i.e., O(1) time); this is in contrast to electronic sorting systems
where the execution time is some function of the number of data elements. In addition, optical
sorting systems have minimum time-skew and may communicate information at optical
media bandwidths. Thus the sorting throughput can be quite large and is limited in practice
by the response time of the optical active devices used. Finally, due to the widespread use
of optical storage technology in modern high-performance computers and communication
systems (i.e., Asynchronous Transfer Mode switches), the data to be sorted is already in optical
form, which may make optical sorting systems commonplace in future high-performance
computers and communication systems.
This paper explores the problem of sorting, discussing the optical implementation of a
highly parallel sorting algorithm that takes into account the unique properties of optics.
Several optical sorting algorithms have been proposed in the past [3]. Stirk and Athale [4]
proposed a parallel-pipelined sorting algorithm using optical compare-and-exchange modules
that has a time complexity of O(log 2 n) steps. Researchers at IBM also proposed an optical
enumeration sort [5] using an optical system based on phase addition and subtraction (inter-
ference) to perform analog algebraic operations. However, these coherent systems, as they
are referred to, are difficult to construct since the alignment is very critical. Furthermore,
the authors of the above-mentioned systems failed to suggest a way of physically reordering
the data elements after their positions in the sorted output are determined. This restricts
the system's usefulness to pointer-based computing systems. In this paper, we propose an
optical system capable of both determining the positions of the sorted data elements and
physically reordering them in O(1) time steps. It uses photonics for highly parallel interconnects
and optoelectronics, in the form of "smart pixels" [6, 7] and deformable mirror
devices (DMDs) [8, 9, 10] for processing. Thus, it exploits the advantages of both the optical
and electrical domains. The paper is organized as follows. Section 2 introduces the parallel
sorting algorithm. Section 3 discusses the detailed optical implementation of the algorithm,
the devices used, and related issues. Section 5 concludes the paper.
2 A Constant-time Parallel Sorting Algorithm
Given the ability of optics to process 2-D arrays of data in one step, parallel algorithms of
the "divide and conquer" classification [11], that are infeasible on electronic computers due
to the requirement of a large number of processors, become relevant. On occasion these are
also the most natural way of describing a solution to the problem at hand. For example,
to sort an array of n numbers, we need to compare every number to every other number,
and thus compute its rank (position in the sorted output). In what follows, we describe a
sorting algorithm that implements exactly the above strategy in constant time ( i.e. O(1),
independent of the number of words being sorted). In Section 3, we describe an optical
implementation of each of the steps.
Before formalizing the algorithm, let us first discuss the conventions that will be used
in this example and throughout the rest of the paper. Row vectors will be indicated by
lowercase underlined letters, such as x, for example, whereas an uppercase letter indicates a
matrix. A subscript indicates the index of the vector or matrix. Thus, x j indicates the j th
element of the row vector x, A i;j indicates the element in the i th row and the j th column of
matrix A. On occasion, the notation x i will be used to indicate the i th element of column
vector x T , where the i illustrates a correspondence to a matrix row. Finally, two-dimensional
(2-D) data will be referred to as matrices in the context of algorithms and data arrays in
the context of optics. This is done to adhere to both mathematical and optical conventions
for representing 2-D arrays.
Let us consider, as an example, sorting a data vector a. Recall that
sorting an n element vector requires comparing each element of the vector against every
other element of the vector, implicitly or explicitly. This problem can also be formulated
as broadcasting the vector a, and performing one comparison operation for every resulting
pair of elements. As optics allows the processing of two-dimensional data arrays in parallel,
we vertically "spread" the vector a such that we can take advantage of this property. This
creates n copies of the original vector a and places them in a matrix, say A, whose entries are A_{i,j} = a_j. Hence, each element of the original vector occupies one column of the new matrix. Now,
if we take the transpose of this spread matrix, each element of the original vector occupies
one row of the transposed matrix.
(A^T)_{i,j} = a_i.   (2)
The comparison operation between every pair of elements of a is then performed by
taking the difference of the two matrices A and A^T, where the subtraction is represented as the addition of a negative quantity: D = A + (-A^T), so that D_{i,j} = a_j - a_i.
Note that in the difference matrix D, each column j represents the comparison information
for element a j with every other element of a. Recall that if we are to sort the elements, the
rank of an element indicates its position in the sorted output. It is required that even though
the elements a j of the original vector might not be unique, i.e. multiple occurrences of the
same value, each element is assigned a unique rank from 1 to n. The rest of the sorting
problem then involves using the information in the difference matrix to arrive at a unique
rank for each element in the original vector. Each negative number in column j of D implies
the existence of an element a_i > a_j. Then, if we are sorting the numbers in an ascending
order, by the definition of rank given above, a negative number in column j of D contributes
0 to a j 's rank. By a similar argument, each positive number in column j of D contributes 1
to the rank of a j . Finally, all zeroes in column j represent non-unique elements, i.e., numbers
with value equal to a j . Each column necessarily contains one zero, the result of comparing
a_j against itself. If we allow zeroes to also contribute one to the rank of a_j, we arrive at the rank matrix R, in which R_{i,j} = 0 means that a_i appears after a_j in the sorted output and should not contribute to a_j's rank, and R_{i,j} = 1 indicates that a_i will appear in the sorted output before a_j and so contributes to its rank. Summing each column j of R, we obtain the non-unique rank for the corresponding element a_j; in the case of our example, these column sums give the ranks of the sorted elements.
Notice that the multiple instances of the number 8 are assigned the same rank. To resolve
the non-unique ranks, we break ties by comparing the locations of the two elements being
compared. So for every two non-unique numbers in the vector a, if one occurs before the
other, then we consider it to be the larger of the two. Hence, for every pair {i, j} with a_i = a_j and i < j, we treat the comparison as though a_i > a_j. To translate this into an operation
on the difference matrix D, we focus our attention on the zeros of D. As mentioned, the
existence of zeros on the main diagonal is guaranteed since the condition i = j represents the comparison of an element to itself. Thus, they are of no concern to us. However, zeros of D for the condition i ≠ j mean that the two different elements of a being compared are equal. Hence, we have identified the existence and location of non-unique numbers. Each pair of non-unique numbers appears twice, first at position {i, j} and again at position {j, i}. Since we
only need to modify the rank of one occurrence, we choose to modify the upper-triangle entry, which leaves the later of the two occurrences with the smaller rank. By our revised rule, if D_{i,j} = 0 and i < j, then this entry should not contribute to the rank of a_j, and hence should be made negative. Notice that we need to consider modifying only the upper triangle portion of the difference matrix D. To do so, we create a new matrix U, where U_{i,j} = 1 if i < j and 0 otherwise. Next, we subtract U from D, again by the addition of a negative quantity, to form D' = D - U.
In this new difference matrix D 0 , the negative elements of D remain negative, and hence
contribute nothing to the rank of an element. The positive elements remain positive, or
become 0, and hence, if we consider that all zeroes contribute one to the rank of an element,
the positive elements still contribute the same amount to the rank of an element. The zeros
in the upper triangular portion of the matrix D become negative in D 0 . This resolves the
ties between non-unique ranks in the manner described above. The new rank matrix R' is obtained by thresholding D': R'_{i,j} = 0 indicates that a_i appears in the sorted output after a_j, and R'_{i,j} = 1 the opposite. Summing each column of R', we get the fully resolved rank of each element in a.
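The rank computation described so far is easy to check numerically. The following sketch uses numpy and a hypothetical example vector (the paper's own example values are not reproduced in this text); it assumes integer data, so that subtracting 1 in the upper triangle cannot flip a genuinely positive difference:

    import numpy as np

    # Hypothetical example vector with a repeated value.
    a = np.array([8, 2, 5, 8, 7])
    n = a.size

    A  = np.tile(a, (n, 1))          # A[i, j] = a[j]
    D  = A - A.T                     # D[i, j] = a[j] - a[i]
    U  = np.triu(np.ones((n, n), dtype=int), k=1)   # ones strictly above the diagonal
    Dp = D - U                       # resolves ties between equal elements
    Rp = (Dp >= 0).astype(int)       # thresholding (Step 5 below)
    r  = Rp.sum(axis=0)              # column sums give unique ranks 1..n
    print(r)                         # [5 1 2 4 3] for the vector above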
The result from the algorithm is the generation of the rank vector, r, which contains the positions of each of the data elements in the sorted output. This algorithm and
an accompanying optical system are complete when they are capable of rearranging the input
data to the order reported in r. The problem of physically reordering the data reduces to
the task of determining, in parallel, which column of r contains the number 1, the number 2,
etc. so that we know which element is 1st, 2nd, etc. in the sorted output. We accomplish this
by comparing each element of r to the numbers [1, . . . , n]. Mathematically, this is illustrated as spreading the r vector vertically n times and subtracting the vector [1, . . . , n]^T spread horizontally n times, which yields the matrix S with S_{i,j} = r_j - i.   (8)
In the resulting matrix S of Eqn. 8, a zero entry S_{i,j} = 0 indicates in which column j of r the value i exists. It tells us that a_j is to be relocated to row i of the sorted output, where we take the sorted output to be a column vector. Note that S_{i,j} = 0 for only one element per row of S. If we use S to select/discard elements from a copy of the A matrix, such that elements with S_{i,j} = 0 select elements A_{i,j} and all other elements are discarded, we have effectively rearranged the inputs to their positions in the sorted output (Eqn. 9).
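In software, the same selection can be expressed as a masking operation. The following self-contained numpy sketch (hypothetical data, illustrative names) builds S from the rank vector and scatters the selected entries of A into the output:

    import numpy as np

    a = np.array([8, 2, 5, 8, 7]); n = a.size
    A = np.tile(a, (n, 1))
    r = ((A - A.T - np.triu(np.ones((n, n), dtype=int), 1)) >= 0).sum(axis=0)

    S = np.tile(r, (n, 1)) - np.arange(1, n + 1).reshape(n, 1)   # S[i, j] = r[j] - i
    rows, cols = np.nonzero(S == 0)          # exactly one zero per row of S
    out = np.empty(n, dtype=a.dtype)
    out[rows] = A[rows, cols]                # A[i, j] = a[j] lands in output row i
    print(out)                               # [2 5 7 8 8]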
Thus, the problem of reordering the data reduces to selecting the appropriate element
from each row of A, since each row of A has a copy of each data element, and discarding the
rest. The discrete steps for constant time sorting are summarized below.
Step 1: Process the input:
(a) Generate matrix A by vertically spreading a n times.
(b) Generate matrix A^T by horizontally spreading a^T n times.
Step 2: Compare every element of a with every element of a^T by computing the difference matrix D = A - A^T.
Step 3: Generate the U matrix, where U_{i,j} = 1 if i < j and 0 otherwise.
Step 4: Resolve non-unique ranks by computing the matrix D' = D - U.
Step 5: Generate R' by thresholding D', where R'_{i,j} = 1 if D'_{i,j} >= 0 and 0 otherwise.
Step 6: Generate the rank vector, r, by summing each column of the matrix R'.
Step 7: Reorder the sorted data:
(a) Compare every element of r to every element of [1, . . . , n]^T by expanding both by n and subtracting the latter from the former to form the S matrix.
(b) Use S to select/discard elements of A, where S_{i,j} = 0 indicates that data element A_{i,j} should be transferred to row i in the sorted output.
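Although every step above corresponds to a parallel optical operation, the whole recipe can also be written down as a compact serial reference implementation, which may be useful for checking an optical prototype. The following numpy function is such a sketch (ours, not the authors'); all names are illustrative:

    import numpy as np

    def rank_sort(a):
        """Software rendering of Steps 1-7: returns the rank vector and
        the reordered data (a hypothetical helper, not the optical system)."""
        a = np.asarray(a)
        n = a.size
        A = np.tile(a, (n, 1))                         # Step 1: A[i, j] = a[j]
        D = A - A.T                                    # Step 2
        U = np.triu(np.ones((n, n), dtype=int), 1)     # Step 3
        Dp = D - U                                     # Step 4
        Rp = (Dp >= 0).astype(int)                     # Step 5
        r = Rp.sum(axis=0)                             # Step 6: unique ranks 1..n
        S = np.tile(r, (n, 1)) - np.arange(1, n + 1)[:, None]   # Step 7a
        out = np.empty(n, dtype=a.dtype)
        i, j = np.nonzero(S == 0)                      # Step 7b: one zero per row
        out[i] = A[i, j]
        return r, out

    # Example: rank_sort([8, 2, 5, 8, 7]) -> ([5, 1, 2, 4, 3], [2, 5, 7, 8, 8])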
In Section 3, we present an efficient and economical optical implementation of the proposed
algorithm.
3 Optical Implementation of the Constant-time Parallel Sorting Algorithm
We will now consider an optical system that implements the above steps and uses the rank
vector to physically reorder the input data in constant time. We use currently available
components to demonstrate its feasibility. The system contains a mixture of optics and
electronics. Photonics are used for highly parallel, non-interfering interconnects while elec-
tronics, integrated into arrays of "smart pixels", are used for processing. The development
status of these devices is discussed in Refs. [12] and [13]. Smart pixels are optoelectronic
devices which combine optical detectors, simple processing electronics, and optical modulators
or sources on a single substrate for high-speed processing of 2-D data arrays [6, 14].
These devices take advantage of the strengths of both the optical and electrical domains.
Photons do not readily interact with each other in the optical domain. Thus, it is an ideal
communications environment. However, this makes photonics difficult for switching and
control. Electrons, on the other hand, interact very easily which makes them convenient for
controlling each other. Smart pixels capitalize on the benefits of electronics for switching and
control, and photonics for highly parallel communication. The integration of optical sources
provide signal regeneration to the following stage. Furthermore, each of their components
can be readily fabricated with conventional Si and GaAs processing techniques. Some examples
of these are SEED-based [13], Si/PLZT-based [15], and VSTEP-based [16] smart
pixels.
A promising class of lasers that may be used as optical sources both within smart pixels
and as system input is vertical-cavity surface-emitting microlasers [17]. They are capable of
being integrated in densities of over one million microlasers with dimensions of a few µm on
a single chip. Their development is an important milestone in the realization of high-density
optically-interconnected systems.
3.1 Optical Algebraic Operations
In order to implement steps 2, 4, and 7 of the algorithm, we must have a way of subtracting
two numbers. Since the intensity of light cannot be less than zero, we use a dual-wavelength
scheme for representing both positive and negative results [18]. For ease of explanation,
we assume that we are only sorting positive values. Negative values will then occur only
as the subtraction result of two positive numbers. The system can be modified slightly to
accommodate negative data elements, but it is more cumbersome to explain and is avoided
in this paper.
Positive data numbers are represented by the positive light intensity level of one wavelength, λ1; negated values are represented by an equal positive light intensity level of a second wavelength, λ2. Thus, a number is considered negative merely by the fact that it is encoded on λ2. Subtraction is performed by first superimposing (summing), via a beamsplitter, the two modulated wavelengths. Since photons do not interact in free space, the actual subtraction of the absolute values written to λ1 and λ2 is performed electronically. The light of wavelength λ1 impinges upon a photodetector preceded by a λ1 color filter, while the light of λ2 impinges upon a photodetector preceded by a λ2 color filter. The color filters are included to eliminate any crosstalk from the opposing wavelength. In general, the two wavelengths should be chosen so that their separation is larger than the pass bandwidth of the filters. The positive photodetector outputs, V(λ1) and V(λ2), are then fed into the positive and negative terminals, respectively, of an op-amp, where the subtraction of the absolute values, V(λ1) - V(λ2), occurs.
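The dual-wavelength encoding can be modeled in a few lines of software. In the sketch below the encoding follows the description above, while the function names and the assumption of ideal (crosstalk-free) filters are ours:

    def encode(x):
        """Return (I_lambda1, I_lambda2): positive values ride on lambda1,
        negated values ride on lambda2, both as non-negative intensities."""
        return (x, 0.0) if x >= 0 else (0.0, -x)

    def superimpose(beam_a, beam_b):
        """Beamsplitter: intensities of like wavelengths simply add."""
        return (beam_a[0] + beam_b[0], beam_a[1] + beam_b[1])

    def detect_and_subtract(beam):
        """Filtered photodetectors feed an op-amp: V(lambda1) - V(lambda2)."""
        v1, v2 = beam
        return v1 - v2

    # e.g. 3 + (-5): encode, superimpose, detect -> -2
    print(detect_and_subtract(superimpose(encode(3), encode(-5))))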
3.2 Generating the Rank Vector r
3.2.1 Implementation of Step 1 of the Algorithm
Fig. 1 illustrates an optical implementation for the first two steps of the algorithm. In
Step 1, the one-dimensional (1-D) input, a, modulates the columns of a 2-D laser array of
wavelength λ1 to form the A array. Meanwhile, a^T modulates the rows of a 2-D laser array of wavelength λ2 to form the -A^T array (where the '-' is inherent in the use of λ2). This
differs slightly from our algorithm which suggested that the two vectors be vertically and
horizontally spread n times with optics. We avoided spreading because the beam intensity
and "distance" between analog levels are reduced to 1
n times the levels of the original beam.
The latter reduction makes it more difficult to distinguish the analog levels at the detectors.
Figure 1: The figure presents the optical hardware for implementing Steps 1 and 2 of the algorithm. A and A^T are formed by modulating 2-D laser arrays, with wavelengths λ1 and λ2, by a and a^T, respectively. These are "summed" to form the difference matrix D.
Furthermore, the beam spreading approach results in more crosstalk (assuming non-ideal
filters) for this particular system since multiple wavelengths would overlap on the same
detector.
3.2.2 Implementation of Step 2 of the Algorithm
Figure 2: The figure illustrates the optical implementation of Step 3 and part of Step 4 of the proposed algorithm. To resolve non-unique ranks, U is "added" to D to form D', where the actual subtraction is performed in the next stage, shown in Fig. 3.
The difference array D of Step 2 is formed by "summing" arrays A and -A^T. This is performed optically by merging their optical data planes with a beamsplitter. Recall that at this stage the beams are merely superimposed and the actual subtraction will be performed later. In Fig. 1, each element of the D array contains two numbers that represent the superposition of the two colors. The number in the upper right corner represents the intensity level of the λ2 light component while the number in the lower left corner represents the λ1 light component.
3.2.3 Implementation of Steps 3, 4, 5, and 6 of the Algorithm
Figure 3: Illustrating the implementation of the actual subtraction from Step 4 and all of Step 5 for a single pixel. The D' pixel passes through a prism which separates the two wavelengths. These light components impinge upon their corresponding photodetectors in the smart pixel to generate V(λ1) and V(λ2). The op-amp performs the subtraction V(λ1) - V(λ2). The output is thresholded, and the result modulates the laser driver.
Fig. 2 illustrates the implementation of Step 3 and part of Step 4. The -U array of Step 3 is formed by modulating a 2-D laser array (not shown) of wavelength λ2. The summation of D and -U in Step 4 is performed by the second beamsplitter in Fig. 2. Fig. 3 illustrates, for a single pixel, the subtraction by the smart pixel of the absolute values in D'. Notice the integration of the photodetectors, modulation electronics, and the surface-emitting laser in this close-up view of a single smart pixel. The wavelengths of the D' array are separated by a prism and imaged onto the photodetectors residing within the smart pixel array. The op-amp subtracts the detected value of λ2, V(λ2), from the detected value of λ1, V(λ1). The output is then thresholded by a CMOS gate (not shown). The digital output from the thresholding operation of Step 5 then modulates the surface-emitting laser for communication to the next stage.
Figure 4: The optical system for implementing the actual subtraction phase of Step 4 along with Steps 5 and 6 on a full scale. The prism separates the wavelengths so that they may impinge upon their corresponding photodetectors on the smart pixel array. The subtraction and thresholding of Steps 4 and 5 are performed by the integrated electronics in the smart pixel array. The surface-emitting laser writes the result to the R' array, which then reflects off the beamsplitter and is vertically summed by the cylindrical lens to form the rank vector, r, in Step 6. Notice that the labeling conventions on the D' array in this figure have been changed to mirror the fact that we are now viewing it from behind.
Fig. 4 illustrates Steps 4, 5, and 6 on a full scale, where the labeling conventions of the D' array have been altered since it is being viewed from the backside in this figure. The output of the electronic subtraction and thresholding of D' by the smart pixel array modulates the surface-emitting lasers to generate the R' array. Since the lasers are integrated on the same side of the substrate as the photodetectors, the R' array propagates back into the system and reflects off the beamsplitter. The cylindrical lens, which focuses a beam only in one dimension, vertically sums the ones in the R' matrix to form the rank vector, r, in accordance with Step 6. At this point, the rank vector r has been generated.
reorder the input data according to the rank vector. We should note that this additional
step may not be required for all applications. For example, if the data elements are sorted
in a content-addressable memory, then any data element can be recalled directly by its rank,
and if this is what is needed, then physical reordering is not necessary.
3.3 Physical reordering of the input data
3.3.1 Step 7:
Figure 5: The architecture of a single torsion beam DMD pixel [8].
In addition to the generation of the rank vector, an equally important aspect of this paper
is the physical reordering of the input. This has received little attention previously since
prior systems generally proceeded as far as generating the rank vector and recommending
its use as a pointer. Our system uses a novel technique involving torsion-beam DMDs for
accomplishing this final step. DMDs are micromechanical arrays of mirror elements that
are capable of mechanical movement in response to an electrical input. Incident light is
modulated in its direction by reflection from rotated pixels. DMDs are addressed optically
or by integrated circuits. In an optically-addressed device, integrated photodetectors convert
incident beams of light into electrical control signals for the mirror elements. DMDs are also
highly efficient light modulation devices since they modulate under reflection. The devices
exhibit a reflectivity of ~90% [10] with a contrast ratio of 2500:1 [9] for the torsion-beam
configuration. Its pixel switching time is comparable with other optical light modulators.
Furthermore, DMDs are fabricated with conventional Si and GaAs processing techniques and have been reported in sizes of 1920 × 1080 elements [19].
Figure 6: Cross sectional view of a torsion beam DMD pixel illustrating the effects of the three mirror positions on an incident light beam [8].
The architecture of a single mirror is illustrated in Fig. 5. In the torsion-beam configura-
tion, a square pixel is suspended over an air gap by two thin torsion hinges connected to two
rigid supports. As voltages are applied to the address electrodes, the pixel deflects about the
hinge in response to the electric field. This results in a change in the direction of the incident
light. A cross sectional view of a single mirror is shown in Fig. 6. The differential voltage
on the address electrodes causes the torsion beam to be attracted toward the more positive
electrode. At increasing voltage levels, the angular deflection of the beam increases until it
contacts the landing electrode, which prevents any further deflection. This corresponds to
an angular deflection, θ, of approximately ±10°. The torsion beam and landing electrodes
are held at the same potential in order to prevent current flow upon contact. By applying
a differential bias to the beam and landing electrodes, the voltage required for maximum
angular deflection can be reduced to TTL levels. This differential bias also determines the
number of digital deflection states. In the bistable condition, the beam is in a state of equilibrium only at an angular deflection of ±10°, meaning that it must be deflected to either the right or the left. In the tristable condition, the beam is in a state of equilibrium at angular deflections of 0° and ±10°. We use the tristable condition in our design since the subtraction result of Step 7a can be negative, zero, or positive.
Figure 7: A top view of the optical system for reordering sorted data. Here, we demonstrate
the use of a DMD to separate data elements. Incident light enters from the side where it
reflects off the beamsplitter and impinges on the DMD mirror. It is then reflected depending
on the mirror's position. Light reflected from position 1 is focused by the cylindrical lens to
point F along the optical axis while the light reflected from position 2 is focused to F'. Thus,
data can be spatially separated by controlling the mirror's position.
Fig. 6 also illustrates the operation of a DMD pixel for the three addressing conditions, i.e., zero, positive, and negative potential differences across the electrodes. A zero potential difference results in no angular deflection (position 1) of the torsion beam, whereas a positive potential difference results in a counterclockwise deflection (position 2) of the torsion beam by 10°. In this position, the incident light is reflected to the left. Similarly, a negative potential difference deflects the torsion beam clockwise by 10° (position 3). Fig. 7 shows a top view of the actual system. Incident light enters from the side
where it is reflected by the beamsplitter and impinges on the DMD pixel. It is then reflected
by the mirror in the direction determined by its position. The reflected beam passes through
the beamsplitter and is focused by the cylindrical lens to a point a distance f from the lens,
where f is the focal length of the lens. Light reflected by the mirror in position 1 is focused
to point F along the optical axis while light reflected by mirror position 2 is focused to point
F'. An opaque screen, with an on-axis opening, blocks the off-axis beam. Thus, data can be
disabled by applying a nonzero potential difference to the DMD's address electrodes so that
the data beam is deflected and focused off-axis, where it is then blocked by the screen.
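A toy software model of this select/block behavior may make the mapping from S to mirror positions concrete. The ±10° values come from the text, while the particular sign convention for left versus right deflection is an assumption:

    def mirror_angle(v_col, v_row):
        diff = v_col - v_row          # this difference is exactly S[i, j] = r[j] - i
        if diff == 0:
            return 0                  # position 1: reflect along the optical axis
        return +10 if diff > 0 else -10   # positions 2 and 3: deflected off-axis

    def passes_screen(angle):
        """Only the on-axis beam passes the opening in the opaque screen."""
        return angle == 0

    # A cell whose rank matches its target row (S[i, j] = 0) is passed through;
    # every other cell is deflected and blocked.
    print(passes_screen(mirror_angle(4, 4)), passes_screen(mirror_angle(4, 2)))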
The optical system in Fig. 8 builds upon these principles to perform the final step of the
algorithm. The first task is the subtraction of the horizontally spread version of [1, . . . , n]^T
from the vertically spread version of r as outlined in Eqn. 8. To facilitate this, we propose
that the DMD be fabricated such that the column addressing electrodes, V c , of each cell in
a column be connected. Similarly, the row addressing electrodes, V r , of each row should be
connected. This allows us to address the entire array in parallel while also performing the
vector spreading operation required in Eqn. 8. We load r onto the column electrodes and [1, . . . , n]^T onto the row electrodes. The [1, . . . , n] array can be easily implemented by a resistive
network integrated onto the device substrate since its values remain the same for each sort
unless the order of the sort (ascending or descending) is changed. The column addressing lines
may be either electrically or optically-addressed. In the optically-addressed configuration,
an array of photodetectors is integrated onto the DMD substrate which connect internally
to the column addressing lines. To implement this, the r array formed in Fig. 4 is imaged
directly onto the integrated photodetectors. In the electrically-addressed configuration, the
r vector is imaged onto an external photodetector array which is then connected to the
column addressing lines through DMD chip I/O. The optically-addressed configuration is
preferred since it is directly scalable with array size as opposed to the electrically-addressed
configuration which is limited by the device pinout.
Figure 8: Optical setup for physically reordering the sorted input. The A array reflects off
the beamsplitter and is imaged onto the DMD. Selected elements of A reflect off the mirrors
and are focused to a column that lies within the opening of the opaque screen by the lenslet
array and the cylindrical lens. Elements of A to be discarded reflect off the mirror and are
focused to off-axis columns where they are permanently blocked by the screen.
umn electrode voltage, we see that the DMD addressing condition implements the
subtraction in Eqn. 8. Thus, S represents the potential difference across the addressing elec-
trodes. Since the elements selected elements, one per row, of A, the selected
elements are focused to a column at the opening of the opaque screen by the lenslet array
and the cylindrical lens. Elements of A such that S i;j 6= 0 are deflected by the mirror array
and are focused to off-axis columns. The opaque screen discards these, in accordance with
Eqn. 9 by blocking them from the rest of the system. Thus, we have effectively demonstrated
the implementation of Eqn. 9, the physical reordering of the sorted data.
In Fig. 9, we present the physical layout of the proposed optical sorting system. The two
vertical-cavity surface-emitting lasers (VCSELs), V1 and V2, provide the 2-D optical input
data as mentioned in Section 3. V1 generates the A array with wavelength λ1. In order to reduce the amount of hardware required (a VCSEL and beamsplitter BS2) while also reducing the power loss due to beamsplitting, the -A^T and -U arrays are generated by a single VCSEL. Since U is an array of constants, it can easily be added to the A^T array as one of the inputs to the laser drivers for the VCSELs in V2. Since V2 is of wavelength λ2, it provides the array -(A^T + U) to be "added" to A to form the D' array. Two copies of the D' array are generated by beamsplitter BS1. One copy is imaged onto SP by polarizing beamsplitter PBS1 for the generation of the R' array. The rank vector is formed by CL1 and imaged onto the column addressing photodetectors of DMD. In order to preserve the parity of the array, this 90 degree deflection can be implemented with a pentaprism (not shown) instead of a mirror. The other copy of D' is imaged onto the pixels of DMD by PBS2 for the reordering step. Since this step requires the A array, a color filter that passes only light of wavelength λ1 removes the λ2 elements from the D' array, leaving behind only the A array. After DMD selects the appropriate elements of A, the sorted data is imaged onto
the 1-D detector array, DET, by the lenslet array, LA, and the cylindrical lens, CL2.
The layout in Fig. 9 suggests that the system volume should be very small. Using 1in.
aperture optics (lenses, beamsplitter and half-wave plates) with a modest 1.5in. component
separation, we see that the entire system can fit in an area of about 20 cm × 20 cm. Although
Figure 9: Layout of the proposed optical sorting system.
the above measurement doesn't include the interface electronics or the heat removal compo-
nents, the complete optical system should be small enough to be manufactured as an add-on
unit for mainframe computers.
Next, we estimate the space bandwidth product (SBWP) of the proposed optical sorting
system. For simplicity, we assume that the pixels of the active components (VCSELs, SP array, and DMD) all have the same length, a. For the given optical setup
of Fig. 10, the diffraction-limited spot diameter, D d , at the detector plane is given by [20]
D_d = 2.44 λ f_cl / a,   (10)
where λ denotes the wavelength of the light and f_cl represents the focal length of the cylindrical lens CL2. Then, the maximum height of the image at the detector plane, h_M, becomes
h_M = (f_cl / f_la) × a,   (11)
where f_la represents the focal length of each lens in the lenslet array. Therefore, the maximum SBWP is given by
SBWP = h_M / D_d = a^2 / (2.44 λ f_la).   (12)

Figure 10: A top view of a part of the optical setup of Fig. 9. The figure explains how a DMD pixel of size a is imaged onto a detector pixel of size h_M by the lenslet array and the cylindrical lens.
As discussed in Section 3.3, a 1920 × 1080 element DMD with a pixel size of 17 µm is reported in Ref. [19]. In a typical optoelectronic smart pixel, the power-dissipation-limited SPD is known to be about 200 pixels/cm^2 [21, 22]. Thus, the length of a square pixel of the SP array becomes (1/200)^(1/2) cm, or roughly 707 µm. Comparing these two pixel sizes, we should use the length of the SP array pixel as the value of a since it represents the worst case pixel size in the system. Assuming λ is 0.85 µm and f_la is 0.1 mm, the SBWP of the proposed optical sorting system becomes 2410.
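This value is easy to verify numerically; note that f_cl cancels out of Eq. (12), so only λ, f_la, and a matter. The following check uses the quoted parameter values:

    import math

    wavelength = 0.85e-6          # lambda = 0.85 um
    f_la = 0.1e-3                 # lenslet focal length = 0.1 mm
    pixel_density = 200 * 1e4     # 200 pixels/cm^2 expressed per m^2
    a = math.sqrt(1.0 / pixel_density)   # ~7.07e-4 m, i.e. ~707 um (SP array pixel)

    sbwp = a**2 / (2.44 * wavelength * f_la)
    print(round(a * 1e6), round(sbwp))   # 707, 2411 -> about 2410, as stated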
In addition to the SBWP, an equally important parameter is the bit-capacity of the
system, i.e. the word size. Since the proposed optical sorting system is analog, the dynamic
range of the detectors is mostly responsible for determining this value. Currently, light
intensity may be produced, controlled, and detected in about 500 discrete levels [23, 24, 25].
This implies that the sorter has the capability of an 8-bit microprocessor. Obviously, this
limits the word-length of the data that can be sorted in constant time. Note that the number
of data elements to be sorted at once is not affected by this limitation. However, this same
restriction is also imposed upon electronic sorting systems. For a string sort, a typical string
might have between 10 and 20 characters in it. Since both optical and electronic sorters
can only operate on a subset of this string at any time, a single character for instance, the
sorting time is inevitably related linearly to the number of characters. In addition to the
word-length restriction, it is also prohibitive to build a sorting system that is capable of
sorting a sequence length of 10,000 data items without iteration, where a sequence length
is the number of data items to be sorted. In general, the sequence length, m, will be much
larger than the number of items, n, that the sorter can hold at any time. Thus, the data set
will have to be divided into m/n smaller data subsets of sequence length n, which will then be
sorted separately and compared to each other during multiple passes through the data set.
At first glance, it would seem as though one needs to sort each data subset against itself and
then against each other subset. This approach would extend the execution time to O(m^2/n^2) cycles.
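The paper does not spell out a better multi-pass strategy here. One conventional host-side alternative (not from the paper) is to sort each n-element chunk on the O(1) sorter and then merge the pre-sorted chunks in software, as sketched below with an assumed sort_n primitive:

    import heapq

    def sort_long_sequence(data, n, sort_n):
        """Sort each n-element chunk with the constant-time sorter `sort_n`,
        then k-way merge the pre-sorted chunks in O(m log(m/n)) time."""
        chunks = [sort_n(data[i:i + n]) for i in range(0, len(data), n)]
        return list(heapq.merge(*chunks))

    # usage sketch: sort_long_sequence(big_list, n=2410, sort_n=sorted)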
5 Conclusion
Advances in optical computing have opened up new possibilities in several fields related
to high-performance computing, high-speed communications, and parallel algorithm design.
It is necessary to take into consideration the specific properties of optics, such as massive
parallelism and global interconnects, to design algorithms that execute faster.
Sorting is a fundamental operation that has important implications in many areas. In
this paper, we presented a parallel sorting algorithm and its efficient optical implementation
using currently available hardware. The algorithm sorts n data elements in constant time,
i.e. independent of the number of words being sorted. The proposed algorithm can provide
a quantum leap over its electronic counterparts in execution time. The proposed optical
system is capable of both generating the rank vector and physically reordering the sorted
data. Previously proposed optical sorting systems proceeded only as far as generating the
rank vector, which was used as a pointer.
Our system used state-of-the-art optoelectronic devices such as smart pixel arrays and
deformable mirror devices for spatial light modulation. The algorithm and its optical implementation
presented in this paper are an excellent example of what optics can achieve. As
these optoelectronic and smart pixel devices mature, it is expected that such an algorithm
will have a major impact on sorting applications in the future.
--R
Introduction to Parallel Algorithms and Architectures: Arrays
"Sorting Networks and Their Applications,"
"Fast sorting algorithm based on the massive parallelism of optical computing,"
"Sorting with optical compare-and-exchange modules,"
"Optical facility for parallel enumeration sort,"
"Architectural Considerations for Photonic Switching Networks,"
"A Complexity Analysis of Smart Pixel Switching Nodes for Photonic Extended Generalized Shuffle Switching Networks,"
"Deformable-mirror spatial light modulators,"
"Deformable mirror light modulators for image processing,"
"Mirrors on a chip,"
Introduction to Algorithms - A Creative Approach
"Self-Electro-Optic Effect Devices for Optical Information Processing,"
"Field-Effect Transistor Self-Electrooptic Effect Device: Integrated Photodiode, Quantum Well Modulator and Transistor,"
"Amorphous Silicon Carbide Multilayer Modulators for Silicon Smart Pixels,"
"Two-dimensional silicon/PLZT spatial light modulators: design considerations and technology,"
"VSTEP-Based Smart Pixels,"
"Surface-emitting microlasers for photonic switching and interchip connections,"
"Deformable mirror device spatial light modulators and their applicability to optical neural networks,"
"A 1920 \Theta 1080 Element Deformable Mirror Device for High-Definition Displays,"
Fundamentals of Photonics.
"Implementation of Smart Pixels for Optoelectronic Processors and Interconnection Systems I : Optoelectronic Gate Technology,"
"Implementation of Smart Pixels for Optoelectronic Processors and Interconnection Systems II : SEED-Based Technology and Comparison with Optoelectronic Gates,"
"Linear Acousto-Optic Heterodyning Processors for Complex-Valued Data Processing,"
"Input/Output Devices,"
"High accuracy computation with linear analog optical systems: a critical study,"
--TR | optical computing;parallel processing;parallel sorting;sorting |
624159 | Virtual Network Transport Protocols for Myrinet. | This article describes a protocol for a general-purpose cluster communication system that supports multiprogramming with virtual networks, direct and protected network access, reliable message delivery using message time-outs and retransmissions, a powerful return-to-send error model for applications, and automatic network mapping. The protocols use simple, low-cost mechanisms that exploit properties of our interconnect without limiting flexibility, usability, or robustness. We have implemented the protocols in an active message communication system that runs a network of 100+ Sun UltraSPARC workstations interconnected with 40 Myrinet switches. A progression of microbenchmarks demonstrate good performance - 42 microsecond round-trip times and 31 MB/s node-to-node bandwidth - as well as scalability under heavy load and graceful performance degradation in the presence of high contention. | Introduction
With microsecond switch latencies, gigabytes per
second of scalable bandwidth, and low transmission
error rates, cluster interconnection networks
such as Myrinet [BCF+95] can provide substantially
more performance than conventional local area networks. (This research is supported in part by ARPA grant F30602-95-C-0014, the California State Micro Program, NSF Infrastructure Grant CDA-8722788, and an NSF Graduate Research Fellowship.) These properties stand in
marked contrast to the network environments for
which traditional network and internetwork protocols
were designed. By exploiting these fea-
tures, previous efforts in fast communication systems
produced a number of portable communication
interfaces and implementations. For exam-
ple, Generic Active Messages (GAM) [CLM+95],
Illinois Fast Messages (FM) [PKC97, PLC95], the
Real World Computing Partnership's PM [THI96],
and BIP [PT97] provide fast communication lay-
ers. By constraining and specializing communication
layers for an environment, for example by
only supporting single-program multiple-data parallel
programs or by assuming a perfect, reliable
network, these systems achieved high performance, oftentimes on par with massively parallel processors.
Bringing this body of work into the mainstream
requires more general-purpose and robust communication
protocols than those used to date. The
communication interfaces should support client-
server, parallel and distributed applications in
a multi-threaded and multi-programmed environ-
ment. Implementations should use process scheduling
as an optimization technique rather than as
a requirement for correctness. In a timeshared
system, implementations should provide protection
and the direct application access to network resources
that is critical for high-performance. Fi-
nally, the protocols that enable these systems
should provide reliable message delivery, automatically
handle infrequent but non-catastrophic net-work
errors, and support automatic network management
tasks such as topology acquisition and
route distribution.
Section 2 presents a core set of requirements for
our cluster protocol and states our specific assump-
tions. Section 3 presents an overview of our system
architecture and briefly describes the four layers of
our communication system. Then in section 4, we
examine the issues and design decisions for our pro-
tocols, realized in our system in network interface
card (NIC) firmware. Section 5 analyses performance
results for several microbenchmarks. We
finish with related work and conclusions.
Requirements
Our cluster protocol must support multiprogram-
ming, direct access to the network for all applica-
tions, protection from errant programs in the sys-
tem, reliable message delivery with respect to buffer
overruns as well as dropped or corrupted packets,
and mechanisms for automatically discovering the
network's topology and distributing valid routes.
Multiprogramming is essential for clusters to become
more than personal supercomputers. The
communication system must provide protection between
applications and isolate their respective traf-
fic. Performance requires direct network access
and bypassing the operating system for all common
case operations. The system should be resilient to
transient network errors and faults - programmers
ought not be bothered with transient problems that
retransmission or other mechanisms can solve -
but catastrophic problems require handling at the
higher layers. Finally, the system should support
automatic network management, including the periodic
discovery of the network's topology and distribution
of mutually deadlock-free routes between
all pairs of functioning network interfaces.
Our protocol architecture makes a number of assumptions
about the interconnect and the system.
First, it assumes that the interconnect has network
latencies on the order of a microsecond, link bandwidths
of a gigabit or more, and relatively low error
rates. Second, it assumes that the interconnect and host interfaces
are homogeneous and that the problem of interest
is communication within a single cluster network,
not a cluster internet. System homogeneity eliminates
a number of issues such as the handling of
different network maximum transmission units and
packet formats, probing for network operating parameters
(e.g., as by TCP slow-start), and guarantees
that the network fabric and the protocols used
between its network interfaces are identical. This
doesn't preclude use of heterogeneous hosts at the
endpoints, such as hosts with different endianness.
Lastly, the maximum number of nodes attached to
the cluster interconnect is limited. This enables
trading memory resources proportional to the number
of network interfaces (NICs) in exchange for reduced
computational costs on critical code paths.
(In our system, we limit the maximum number of
NICs to 256, though it would be straightforward to
change the compile-time constants and to scale to
a few thousand.)
3 Architecture
Our system has four layers: (1) an active message
applications programming interface, (2) a virtual
network system that abstracts network interfaces
and communication resources, (3) firmware executing
on an embedded processor on the network
interface, and (4) processor and interconnection
hardware. This section presents a brief overview
of each layer and highlights important properties
relevant for the NIC-to-NIC transport protocols described
thoroughly in Section 4.
3.1 AM-II API
The Active Messages 2.0 (AM-II) [MC96] provides
applications with the interface to the communications
system. It allows an arbitrary number of applications
to create multiple communications end-points
used to send and to receive messages using a
procedural interface to active messages primitives.
Three message types are supported: short messages
containing 4 to 8 word payloads, medium messages
carrying a minimum of 256 bytes, and bulk messages
providing large memory-to-memory transfers.
Medium and bulk message data can be sent from
anywhere in a sender's address space. The communication
layer provides pageable storage for receiving
medium messages. Upon receiving a medium
message, its active message handler is passed a
pointer to the storage and can operate directly on
the data. Bulk message data are deposited into
per-endpoint virtual memory regions. These regions
can be located anywhere in a receiver's address
space. Receivers identify these regions with
a base address and length. Applications can set
and clear event masks to control whether or not
semaphores associated with endpoints are posted
whenever a message arrives into an empty receive
queue in an endpoint. By setting the mask and
waiting on the semaphore, multi-threaded applications
have the option of processing messages in an
event-driven way.
Isolating message traffic for unrelated applications
is done using per-endpoint message tags specified
by the application. Each outgoing message
contains a message tag for its destination endpoint.
Messages are delivered if the tag in the message
Figure
1: Data paths for sending and receiving
short, medium, and bulk active mes-
sages. Short messages are transferred using programmed
I/O directly on endpoints in NIC mem-
ory. Medium messages are sent and received using
per-endpoint medium message staging areas in the
pageable kernel heap that are mapped into a pro-
cess's address space. A medium message is a single-copy
operation at the sending host and a zero-copy
operation at the receiving host. Bulk memory
transfers, currently built using medium messages,
are single-copy operations on the sender and single-copy
operations on the receiver.
matches the tag of the destination endpoint. The
AM-II API provides an integrated return-to-sender
error model for both application-level errors, such
as non-matching tags, and for catastrophic network
failures, such as losing connectivity with a remote
endpoint. Any message that cannot be delivered
to its destination is returned to its sender. Applications
can register per-endpoint error handlers to
process undeliverable messages and to implement
recovery procedures if so desired. If the system returns
a message to an application, simply retransmitting
the message is highly unlikely to succeed.
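As a concrete illustration of the tag check and the return-to-sender path described above, the following C++ sketch shows the delivery decision; all type and member names here are hypothetical and are not the actual AM-II or firmware identifiers.
   // Hypothetical sketch of the per-message delivery decision.
   struct Message {
     unsigned dest_endpoint;          // index of the destination endpoint
     unsigned long long tag;          // tag supplied by the sending endpoint
     // handler index and payload omitted
   };
   struct Endpoint {
     unsigned long long tag;          // tag set by the owning application
     bool enqueue(const Message&) {   // placeholder: append to the receive queue
       return true;                   // false would mean the queue was full
     }
   };
   // Returns true if the message is delivered; otherwise the transport
   // returns it to the sender, where the per-endpoint error handler runs.
   bool try_deliver(Endpoint& ep, const Message& m) {
     if (m.tag != ep.tag)             // tags must match for delivery
       return false;
     return ep.enqueue(m);
   }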
3.2 Virtual Networks
Virtual networks are collections of endpoints with
mutual addressability and the requisite tags necessary
for communication. While AM-II provides
an abstract view of endpoints as virtualized net-work
interfaces, virtual networks view collections
of endpoints as virtualized interconnects. There is
a one-to-one correspondence between AM-II end-points
and virtual network endpoints.
Figure
2: Processor/NIC node.
The virtual networks layer provides direct net-work
access via endpoints, protection between unrelated
applications, and on-demand binding of
endpoints to physical communication resources.
Figure
3.2 illustrates this idea. Applications create
one or more communications endpoints using
API functions that call the virtual network segment
driver to create endpoint address space seg-
ments. Pages of network interface memory provide
the backing store for active endpoints, whereas host
memory acts as the backing store for less active
endpoints evicted from the on-NIC endpoint "cache". End-points
are mapped into a process's address space
where they are directly accessed by both the application
and the network interface, thus bypassing
the operating system. Because endpoint management
uses standard virtual memory mechanisms,
they leverage the inter-process protection enforced
between all processes running on a system.
Applications may create more endpoints than the
NIC can accommodate in its local memory. Providing
that applications exhibit bursty communication
behavior, a small fraction of these endpoints
may be active at any time. Our virtual network
system takes advantage of this when virtualizing
the physical interface resources. Specifically on our
Myrinet system, it uses NIC memory as a cache
of active endpoints, and pages endpoints on and
off the NIC on-demand, much like virtual memory
systems do with memory pages and frames.
Analogous to pagefaults, endpoint faults can occur
when either an application writes a message into a
non-resident endpoint, or a message arrives for a
non-resident endpoint. Endpoint faults also occur
whenever messages (sent or received) reference host
memory resources - medium message staging area,
arbitrary user-specified virtual memory regions for
sending messages, or endpoint virtual memory segment
for receiving messages - that are not pinned,
or for which there are no current DMA mappings.
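The endpoint cache can be pictured with the small sketch below; the frame count of seven comes from the measurements in Section 5, while the structure and function names are illustrative assumptions rather than the driver's real interface.
   struct EndpointFrame {
     int endpoint_id;               // -1 if the frame is free
     // on-NIC queues and tag state omitted
   };
   const int NUM_FRAMES = 7;        // endpoint frames resident in NIC memory
   EndpointFrame frames[NUM_FRAMES];
   EndpointFrame* lookup_resident(int endpoint_id) {
     for (int i = 0; i < NUM_FRAMES; ++i)
       if (frames[i].endpoint_id == endpoint_id)
         return &frames[i];
     return nullptr;                // not resident: raise an endpoint fault
   }
   // On an endpoint fault the host driver evicts a less active frame, copies
   // its state to host memory, and loads the faulting endpoint, much as a
   // virtual memory system services a page fault.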
3.3 NIC Firmware
The firmware implements a protocol that provides
reliable and unduplicated message delivery between
NICs. The protocols must address four core issues:
the scheduling of outgoing traffic from a set of resident
endpoints, NIC to NIC flow control mechanisms
and policies, timer management to schedule
and perform packet retransmissions, and detecting
and recovering from errors. Details on the NIC protocols
are given in Section 4.
The protocols implemented in firmware determine
the structure of an endpoint. Each endpoint
has four message queues: request send, reply send,
request receive, and reply receive. Each queue entry
holds an active message. Short messages are transferred
directly into resident endpoint memory using
programmed I/O. Medium and bulk messages
use programmed I/O for the active message and
DMA for the associated bulk data transfer. Figure
1 illustrates the data flows for short, medium,
and bulk messages through the interface. Medium
messages require one copy on the sender and zero
copies on the receiver. (Bulk messages, currently
implemented using medium messages, require one
copy on the sender and one copy on the receiver.
The code for zero-copy bulk transfers exists but has
not been sufficiently tested.)
Figure
3: Berkeley NOW network topology as
discovered by the mapper. The network mapping
daemons periodically explore and discover the
network's current topology, in this case a fat tree-like
network with 40 Myrinet switches. The three
sub-clusters are currently connected through
two switches using only 11 cables.
3.4 Hardware
The system hardware consists of 100
Sun UltraSPARC workstations interconnected with
Myrinet, a high-speed local
area network with wormhole routing and link-level
back-pressure. The network uses 40 8-port crossbar
switches with 160 MB/s full-duplex links. Each
host contains a LANai 4.1 network interface card
on the SBUS. Each NIC contains a 37.5 MHz embedded
processor, 256 KB of SRAM, a single host
SBUS DMA engine but independent network send
and receive DMA engines.
We now show how the requirements of Section 2,
in the context of AM-II, virtual networks, and
Myrinet, influence the design and implementation
of our NIC-to-NIC protocol. Each of the key issues
- endpoint scheduling, flow control, timer management,
reliable message delivery, and error handling -
makes its own contribution to the protocol.
4.1 Endpoint Scheduling
Because our system supports both direct network
access and multiprogramming, the NIC has a new
task of endpoint scheduling, i.e., sending messages
from the current set of cached endpoints. This situation
is different from that of traditional protocol
stacks, such as TCP/IP, where messages from
applications pass through layers of protocol processing
and multiplexing before ever reaching the
network interface. With message streams from different
applications aggregated, the NIC services
shared outbound (and inbound) message queues.
Endpoint scheduling policies choose how long to
service any one endpoint and which endpoint to
service next. A simple round-robin algorithm that
gives each endpoint equal but minimal service time
is fair and is starvation free. If all endpoints always
have messages waiting to send, this algorithm
might be satisfactory. However, if application communication
is bursty [LTW+93], spending equal
time on each resident endpoint is not optimal. Better
strategies exist which minimize the use of critical
NIC resources examining empty queues.
The endpoint scheduling policy must balance optimizing
the throughput and responsiveness of a
particular endpoint against aggregate throughput
and response time. Our current algorithm uses a
weighted round-robin policy that focuses resources
on active endpoints. Empty endpoints are skipped.
For an endpoint with pending messages, the NIC
makes up to k attempts to send, for some parameter k.
This holds even after the NIC empties a particular
endpoint - it loiters in case the host enqueues additional
messages. Loitering also allows firmware to
cache state, such as packet headers and constants
while sending messages from an endpoint, lowering
per-packet overheads. While larger k's result in
better performance during bursts, too large a k degrades
system responsiveness with multiple active
endpoints. Empirically, we have chosen a k of 8.
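The weighted round-robin policy with loitering can be sketched as follows; this is an illustrative reconstruction with an assumed endpoint structure and placeholder helpers, not the actual firmware loop.
   struct EndpointFrame { int pending; /* queues, cached headers, ... */ };
   const int K_LOITER = 8;                    // the empirically chosen parameter k
   bool has_pending(const EndpointFrame& ep) { return ep.pending > 0; }
   void send_next_packet(EndpointFrame& ep)  { --ep.pending; /* inject one packet */ }
   void service_endpoints(EndpointFrame frames[], int nframes) {
     for (int i = 0; i < nframes; ++i) {      // round robin over resident frames
       if (!has_pending(frames[i]))           // empty endpoints are skipped
         continue;
       // Up to k send attempts per visit; loitering lets the firmware reuse
       // cached headers and pick up messages the host enqueues in the meantime.
       for (int attempt = 0; attempt < K_LOITER; ++attempt)
         if (has_pending(frames[i]))
           send_next_packet(frames[i]);
     }
   }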
4.2 Flow Control
In our system, a flow control mechanism has two
requirements. On one hand, it should allow an adequate
number of unacknowledged messages to be
in flight in order to fill the communication pipe between
a sender and a receiver. On the other, it
should limit the number of outstanding messages
and manage receiver buffering to make buffer overruns
infrequent. In steady state, a sender should
never wait for an acknowledgment in order to send
more data. Assuming the destination process is
scheduled and attentive to the network, given a
bandwidth B and a round trip time RTT , this requires
allowing at least B \Delta RTT bytes of outstanding
data.
Our system addresses flow control at three lev-
els: (1) user-level active message credits for each
endpoints, (2) NIC-level stop-and-wait flow control
over multiple, independent logical channels, and (3)
network back-pressure. The user-level credits rely
upon on the request-reply nature of AM-II, allowing
each endpoint to have at most K user outstanding
requests waiting for responses. By choosing a K user
large enough, endpoint-to-endpoint communication
proceeds at the maximum rate. To prevent receive
buffer overflow, endpoint request receive queues are
large enough to accommodate several senders transmitting
at full speed. Because senders have at most
a small number, K user , of outstanding requests,
setting the request receive queue to a small multiple
of K user is feasible. Additional mechanisms,
discussed shortly, engage when overruns do occur.
In our protocol, with 8 KB packets the bandwidth-delay
product at 31 MB/s amounts to less than two 8 KB messages. For
short packets the bandwidth-delay product is a small number of
messages. To provide
slack at the receiver and to optimize arithmetic
computations, K_user is rounded up to the
next power of 2, namely 4. The NIC must provide at
least this number of logical channels to accommodate
this number of outstanding messages, as discussed
next.
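For a rough illustration using the measurements reported in Section 5 (the 31 MB/s peak bandwidth and the 42 microsecond round-trip time are taken from there, not from this paragraph), the bandwidth-delay product for bulk traffic is approximately
   B x RTT = 31 MB/s x 42 microseconds, roughly 1.3 KB,
comfortably below two 8 KB packets, which is why a small K_user, rounded up to 4, is enough to keep the communication pipe full.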
4.2.1 Channel Management Tables
Two simple data structures manage NIC-to-NIC
flow control information. These data structures
also record timer management and error detection
information. Each physical route between a
source and destination NIC is overlayed with multiple
independent logical channels. Each row of
the send channel control table in Figure 4 holds
the states of all channels to a particular destination
interface. Each intersecting column holds the
state for a particular logical channel. This implicit
bound on the number of outstanding messages
enables implementations to trade storage for
reduced arithmetic and address computation. Two
simple and easily-addressable data structures with
(#NICs x #channels) entries are sufficient.
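The two tables can be declared directly from this description; the sketch below uses the 256-interface limit from Section 2 and four logical channels (matching K_user), but the field and constant names are assumptions, not the firmware's own.
   const int MAX_NICS     = 256;   // compile-time limit on interfaces (Section 2)
   const int NUM_CHANNELS = 4;     // at least K_user logical channels per NIC pair
   struct SendEntry {              // state of one (destination NIC, channel) pair
     unsigned      timestamp;      // when the outstanding packet was (re)sent
     void*         unacked_packet; // pointer kept for retransmission
     unsigned char retries;        // retries with no receiver feedback
     unsigned char seqno;          // next sequence number to use
     bool          in_use;         // channel currently holds an unACKed packet
   };
   struct RecvEntry {              // state of one (source NIC, channel) pair
     unsigned char expected_seqno; // sequencing info for incoming packets
   };
   SendEntry send_table[MAX_NICS][NUM_CHANNELS];
   RecvEntry recv_table[MAX_NICS][NUM_CHANNELS];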
Link-level back-pressure ensures that under
heavy load, for example, with all to one communi-
cation, the network does not drop packets. Credit-based
flow control in the AM-II library throttles
individual senders but cannot prevent high con-
Figure
4: NIC channel tables. The NIC channel
tables provide easy-access to NIC flow control,
timer management, and error detection informa-
tion. The NIC uses stop-and-wait flow control on
each channel and manages communication state information
in channel table entries. In the send table
(left), each entry includes timer management
information (packet timestamp, pointer to an unacknowledged
packet, number of retries with no receiver
feedback), sequencing information (next sequence
number to use), and whether the entry is
in use or not. In the receive table (right), each entry
contains sequencing information for incoming
packets (expected sequence number).
tention for a common receiver. By also relying
on link-level back-pressure, end-to-end flow control
remains effective and its overheads remain small.
This trades network utilization under load - allowing
packets to block and to consume link and switch
resources - for simplicity. Section 5 shows that this
hybrid scheme performs very well.
4.2.2 Receiver Buffering
Some fast communication layers prevent buffer
overruns by dedicating enough receiver buffer space
to accommodate all messages potentially in flight.
With P processors, credits for K outstanding messages,
and a single endpoint per host, this requires
K x P message buffers at each receiver. Small systems with
one endpoint made allocating K x P storage practical.
However, large scale systems with a large
number of communication endpoints require K x E
storage, where E is the number of endpoints in a virtual
network. This has serious scaling and storage
utilization problems that makes pre-allocation
approaches impractical, as the storage grows proportionally
to virtualized resources and not physical
ones. Furthermore, with negligible packet re-transmission
costs, alternative approaches involving
modest pre-allocated buffers and packet re-transmission
become practical.
We provide request and response receive queues,
each with 16 entries (4 x K_user), for each endpoint.
These are sufficient to absorb load from up to
four senders transmitting at their maximum rates.
When buffer overflow occurs, the protocol drops
packets and NACKs senders. The system automatically
retransmits such messages. An important
consequence of this design decision is that our virtual
network segment driver can use a single virtual
memory page per endpoint, simplifying its memory
management activities.
4.3 Timer Management
To guarantee reliable message delivery, a communication
system must perform timeout and re-transmission
of packets. The timer management
algorithm determines how packet retransmissions
events are scheduled, how they are deleted and
how retransmission is performed. Sending a packet
schedules a timer event, receiving an acknowledgment
deletes the event, and all send table entries
are periodically scanned for packets to retransmit.
The per-packet timer management costs must be
small. This requires that the costs of scheduling
a retransmission event on each send operation and
deleting a retransmission event on an acknowledgment
reception to be negligible. Depending on the
granularity of the timeout quantum and the frequency
of time-out events, different trade-offs exist
that shift costs between per-packet operations and
retransmissions. For example, we use a larger timer
quantum and low per-packet costs at the price of
more expensive retransmissions. Section 5 shows
that this hybrid scheme has zero amortized cost for
workloads where packets are not retransmitted.
Our transport protocol implements timeout and
retry with positive acknowledgments in the interface
firmware. This provides efficient acknowledgements
and minimizes expensive SBUS transactions.
(We currently do not perform the obvious piggy-backing
of ACKs and NACKs on active message
reply messages). Channel management tables store
timeout and transmission state. Sending a packet
involves reading a sequence number from the appropriate
entry in the send table indexed by the destination
NIC and a free channel, saving a pointer
to the packet for potential retransmissions, and
recording the time the packet was sent. The receiving
NIC then looks up sequencing information
for the incoming packet in the appropriate receive
table entry indexed with the sending NIC's id and
the channel on which the message was sent. If the
sequencing information matches, the receiver sends
an acknowledgment to the sender. Upon its receipt,
the sender updates its sequencing information
and frees the channel for use by a new packet.
By using simple and easily-addressable data
structures, each with (#NICs x #channels) entries,
scheduling and deleting packet retransmission
events take constant time. For retransmissions,
though, the NIC performs (#NICs x #channels)
work. Maintaining unacknowledged packet counts
for each destination NIC reduces this cost signifi-
cantly. Sending a packet increments a counter to
the packet's destination NIC and receiving the associated
acknowledgement decrements the counter.
These counts reduce retransmission overheads to be
proportional to the total number of network interfaces.
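Reusing the table declarations sketched in Section 4.2.1 above, the periodic scan might look as follows; the helper resend() and the exact field names are assumptions for illustration only.
   unsigned unacked_count[MAX_NICS];          // ++ on send, -- on ACK, per destination
   void resend(int nic, int ch, void* pkt);   // retransmit one packet (assumed helper)
   void retransmit_scan(unsigned now, unsigned timeout) {
     for (int nic = 0; nic < MAX_NICS; ++nic) {
       if (unacked_count[nic] == 0)           // nothing outstanding to this NIC:
         continue;                            // skip its channels entirely
       for (int ch = 0; ch < NUM_CHANNELS; ++ch) {
         SendEntry& e = send_table[nic][ch];
         if (e.in_use && now - e.timestamp > timeout) {
           resend(nic, ch, e.unacked_packet); // retransmission carries a fresh timestamp
           e.timestamp = now;
           ++e.retries;                       // 255 retries marks the message undeliverable
         }
       }
     }
   }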
4.4 Error Handling
Our system addresses data transmission errors and
resource availability problems at three levels: NIC-
to-NIC transport protocols, the AM-II API return-
to-sender error model, and the user-level network
management daemons. The transport protocols are
the building blocks on which the higher-level API
error models and the network management daemons
depend. The transport protocols handle transient
network errors by detecting and dropping each
erroneous packet and relying upon timeouts and
retransmissions for recovery. After 255 retransmissions
for which no ACKs or NACKs were received,
the protocol declares a message as undeliverable
and returns it to the AM-II layer. (Reliable message
delivery and timeout/retransmission mechanisms
require that sending interfaces have a copy of
each unacknowledged message anyway.) The AM-II
library invokes a per-endpoint error handler function
so that applications may take appropriate recovery
actions.
4.4.1 Transient Errors
Positive acknowledgement with timeout and re-transmission
ensures delivery of packets with valid
routes. Not only can data packets be dropped or
corrupted but protocol control messages as well. To
ensure that data and control packets are never delivered
more than once to a destination despite re-
transmissions, they are tagged with sequence numbers
and timestamps. With a maximum of 2^k
outstanding messages per channel, detecting duplicates requires
(k+1)-bit sequence numbers; for our alternating-bit
protocol on independent logical channels, a single sequence bit per channel suffices.
4.4.2 Unreachable Endpoints
The NIC determines that destination endpoints
are unreachable by relying upon its timeout and
retransmission mechanisms. If after 255 retries
(i.e., several seconds) the NIC receives no ACKs or
NACKs from the receiver, the protocol deems the
destination endpoint as unreachable. When this
happens, the protocol marks the sequence number
of the channel as uninitialized and returns the original
message back to user-level via the endpoint's
reply receive queue. The application handles undeliverable
message as it would any other active
message, with a user-specifiable handler function.
Should no route to a destination NIC exist, all of
its endpoints are trivially unreachable.
4.4.3 Network Management
The system uses privileged mapper daemons, one
for each interface on each node of the system, to
probe and to discover the current network topol-
ogy. Given the current topology, the daemons
elect a leader that derives and distributes a set of
mutually deadlock-free routes to all NICs in the
system [MCS+97]. Discovering the topology of
a source-routed, cut-through network with anonymous
switches like Myrinet requires use of network
probe packets that may potentially deadlock on
themselves or on other messages in the network.
Hence mapping Myrinets can induce deadlock and
produce truncated and corrupted packets to be received
by interfaces (as a result of switch hardware
detecting and breaking deadlocks), even when the
hardware is working perfectly. From the transport
protocol's perspective, mapper daemons perform
two specialized functions: (1) sending and receiving
probe packets with application-specified source-based
routes to discover links, switches, and hosts
and (2) reading and writing entries in NIC routing
tables. These special functions can be performed
using privileged endpoints available to privileged
processes.
4.4.4 Virtual Networks Issues
Virtual networks introduce new issues for reliable,
unduplicated message delivery. Because endpoints
may be non-resident or not have DMA resources set
up, such as medium message staging areas, a packet
may need to be retried because of unavailable re-
sources. Because endpoints can be unloaded into
host memory, the NIC must cope with late or duplicate
acknowledgments arriving for non-resident
endpoints. And because transport protocol acknowledgments
operate upon send and receive table
entries, not endpoints, the protocols must ensure
that channel table state remains consistent while
loading and unloading endpoints.
Packets that are successfully written into their
destination endpoints return positive acknowledgments
(ACKs) to their senders. Receiving an ACK
frees the corresponding send channel resources.
NACKs notify senders of transmission errors or unavailable
receiver resources. Receiving a NACK
causes the sender to note that it has received feed-back
from the receiver, and then the timeout and
retransmission mechanisms resend the packet. For
simplicity, our design uses a single retransmission
mechanism for all packets.
Because we have chosen to use sequencing and
timeout/retry based on multiple independent logical
channels, ACKs and NACKs manage send and
receive channel table state and physical resources.
Consequently, each time an endpoint is unloaded
from the NIC, care must be taken to flush ACKs
and NACKs potentially lingering in the network
or in the send queue on a remote NIC. Requiring
that all outstanding packets be positively (and af-
firmatively) acknowledged before unloading is the
starting point: endpoints with no outstanding messages
are immediately unloaded while endpoints
with outstanding messages wait until all outstanding
messages are ACKed. Because all packets are
positively acknowledged, when an endpoint is un-
loaded, we are guaranteed that all of its transmitted
packets were successfully written into their destination
endpoints. With FIFO message delivery,
we know that all duplicate ACKs that may have
been retransmitted immediately follow and use the
same channel sequence number as the first ACK.
A new packet sent on the same channel will use
a new sequence number and will not be acknowledged
until an ACK with the new sequence number
is seen. Requiring that all messages receive ACKs
thus "flushes" all previous duplicate ACKs that
may exist in the network or in a receiving NIC's
send queue.
However, if a destination node experiences load
and the endpoint being unloaded has outstanding
messages to it, it may be a while before packets
receive ACKs. The goal of delaying an endpoint
unload operation was to ensure that old ACKs and
NACKs do not corrupt channel state. Towards this
goal, receiving the latest NACK, which reflects the
result of the latest retransmission, should be just
as good as receiving an ACK. Requiring that the
NACK be the most recent one is necessary to avoid
cases where a NACK is received, an endpoint is
unloaded, and the ACK for the original message,
which was successfully written, arrives. Determining
that a sender has received the latest NACK is
done by using a 32-bit timestamp. Each packet
retransmissions carries the sender's timestamp and
all NACKs echo this timestamp back to the sender.
An endpoint in the improved scheme requires that
a packet either receive an ACK or the latest NACK
before being unloaded.
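The unload condition for one channel can be captured by a small predicate; the structure below is an illustrative rendering with assumed field names, not the actual implementation.
   struct ChannelState {
     bool     in_use;            // an unacknowledged packet is outstanding
     unsigned last_sent_stamp;   // 32-bit timestamp carried by the last (re)send
     bool     got_nack;          // a NACK has been received for this packet
     unsigned nack_echo_stamp;   // timestamp echoed back by that NACK
   };
   // A channel no longer blocks unloading its endpoint if the packet was
   // ACKed (channel freed) or if the most recent retransmission was NACKed,
   // i.e. the echoed timestamp matches the last timestamp sent.
   bool channel_quiescent(const ChannelState& c) {
     if (!c.in_use) return true;
     return c.got_nack && c.nack_echo_stamp == c.last_sent_stamp;
   }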
5 Measured Performance
This section presents a series of benchmarks and
analyzes our system. The first microbenchmarks
characterize the system in terms of the LogP communication
model and lead to a comparison with
a previous generation of an active message system
and to an understanding of the costs of the
added functionality. The next benchmarks examine
performance between hosts under varying degrees
of destination endpoint contention. It concludes
with an examination of system performance
as the number of active virtual networks increases.
All programs were run on the Berkeley Network
of Workstations system in a stand-alone environ-
ment. Topology acquisition and routing daemons
were disabled, eliminating background communication
activity normally present.
5.1 LogP Characterization
The LogP communications model uses four parameters
to characterize the performance of communication
layers. This parameterization enables the
comparison of different communication layers in a
consistent framework. The microbenchmark facilities
of [CLM+95] derive the model parameters of
L (latency), overhead (o) and gap (g). The number
of processors (P ) is given. The overhead has two
components, the sending overhead (O s ) and the receiving
overhead (O r ). The send and receive overheads
measure the time spent by the processor issuing
and handling messages. The gap measures the
per-message time through the rate-limiting stage in
the communication system, and the latency accumulates
all time not accounted for in the overheads.
LogP parameter comparison (microseconds):
           Os     Or     g      L      RTT
GAM        1.90   4.00   5.80   5.50   21.00
AM-II      4.09   4.28   15.98  12.60  41.94
Components of the AM-II gap, latency, and RTT (microseconds):
             Gap    Latency  RTT
baseline     9.60   10.36    37.45
reliability  5.33    1.12     2.28
protection   1.05    1.12     2.21
Figure
5: Performance characterization using
the LogP model. The top graph shows the LogP
parameters as measured for an older and the current
active message systems on the same hardware
platform.
Figure
5 shows the LogP characterization of AM-
II, our new general-purpose, active message system
with virtual networks and an error model. For com-
parison, its also shows the parameters for GAM,
an earlier active message system for SPMD parallel
programs that lacks virtual networks, an error
model, and other features found in the new system.
For AM-II, the figure shows the contributions that the
protection checks and the mechanisms for reliable message
delivery and retransmission add to the fundamental
cost of communication.
The AM-II round-trip time is 42 microseconds as
compared with the GAM round-trip time of 21 mi-
croseconds. Of the 21 microseconds spent in each
direction in AM-II, 4.1 is spent finding and writing
a message descriptor in an endpoint, 4.1 is spent
reading the messages from the endpoint at the re-
ceiver, and the two network interfaces spend a total
of 12.6 microseconds injecting and ejecting the message
from the network. Careful conditional compilation
and inclusion of individual protocol components
in the network interface firmware allows us to
measure their performance impact. The additional
costs appear in the gap, and are attributable to the
NIC-to-NIC transport protocols.
Beyond a baseline of 9.60 microseconds for the
gap, reliability, including all costs of positively acknowledging
each message, contributes 5.3 additional
microseconds. The protection checks required
in a general-purpose, multiprogramming environment
add another 1 microsecond to the gap.
The timer and retransmission mechanisms add an
unmeasurable small cost, because of their coarse
granularity and because the network is reliable on
the time scales necessary for taking these measure-
ments. Beyond a baseline of 10.4 microseconds for
the latency, reliability contributes 1.1 additional
microseconds. The protection checks add another
1.1 microseconds, and the timer and retransmission
protocol mechanisms add no measurable latency.
Further comparison shows that the gap, specifically
the network interface firmware, limits the
AM-II short message rates whereas the sending and
receiving overhead limits the GAM short message
rates. Although in both cases, the microbenchmark
used small active messages with 4-word payloads,
the AM-II send overhead is larger because additional
information such as a capability is stored to
the network interface across the SBUS. The AM-II
gap is also larger because the firmware constructs
a private header for each message, untouchable by
any application, that is sent using a separate DMA
operation. This requires additional firmware instructions
and memory accesses.
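As a consistency check on these parameters, the measured AM-II values satisfy the usual LogP round-trip accounting,
   RTT = 2 (Os + Or + L) = 2 (4.09 + 4.28 + 12.60) microseconds = 41.94 microseconds,
and the gap bounds the peak short-message rate at 1/g = 1/(15.98 microseconds), about 62,578 messages per second, the theoretical peak used later in Section 5.4.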
5.2 Contention-Free Performance
Figure
6 shows the endpoint-to-endpoint band-width
between two machines using both cache-coherent
and streaming SBUS DMA transfers. Because
the NIC can only DMA messages between the
network and its local memory, a store-and-forward
delay is introduced for large messages moving data
between host memory and the interface. Although
the current network interface firmware does not
pipeline bulk data transfers to eliminate this de-
lay, streaming transfers nevertheless reach 31 MB/s
with 4KB messages and consistent DMA transfers
reach 23 MB/s with 8KB messages. With GAM,
pipelining increased bulk transfer performance to
a maximum of 38 MB/s. The difference between
the consistent and streaming DMA transfers rests
on whether or not a hardware buffer in the SBUS
Figure
6: Sending bandwidth as a function of
message size in bytes. Consistent host-to-NIC
DMA operations across the SBUS have higher performance
for small transfers. Streaming transfers
obtain higher performance once the data transfer
times swamp the cost of flushing a hardware stream
buffer in the SBUS bridge chip.
adaptor is kept consistent automatically with memory
through the transaction, or, manually via a system
call upon completion of a transfer.
Perm        Avg BW        Agg BW       Avg RTT
Cshift
Neighbor    30.97 MB/s    2.85 GB/s    47.5 us
Bisection    5.65 MB/s    0.52 GB/s    50.8 us
Table
1: This table shows aggregate bandwidth and
average round trip times for 92 nodes with different
message permutations. In the cshift permuta-
tion, each node sends requests to its right neighbor
and replies to requests received from its left neigh-
bor. With neighbor, adjacent nodes perform pair-wise
exchanges. In bisection, pairs of nodes separated
by the network bisection perform pairwise
exchanges. Bandwidth measurements used medium
messages, whereas RTT measurements used 4-word
active messages.
Table
1 presents three permutations and their resulting
aggregate sending bandwidths, average per-host
sending bandwidths, and per-message round-trip
times when run on 92 machines of the NOW.
Each column shows that the bandwidth scales as
the system reaches a non-trivial size. The first two
permutations, circular shift and neighbor exchange,
are communication patterns with substantial net-work
locality. As expected, these cases perform
well, with bandwidths near their peaks and per-message
round-trip times within a factor of 2 of op-
timal. The bisection exchange pattern shows that a
large number of machines can saturate the bisection
bandwidth. Refer to figure 3 to see the network
topology and the small number of bisection cables.
(Additional switches and network cables have been
ordered to increase the bisection!)
5.3 Single Virtual Network
The next three figures show the performance of the
communication subsystem in the presence of con-
tention, specifically when all hosts send to a common
destination host. All traffic destined for the
common host is also destined for the same end-
point. For reasons that will become clear, the host
with the common destination is referred to as "the
server" and all other hosts are referred to as "the
clients".100003000050000700001 7 13 19 25 31 37 43
Figure
7: Active Message rates with destination
endpoint contention within a single virtual network.
Figure
7 shows the aggregate message rate of
the server (top line) as the number of clients sending
4-word requests to it and receiving 4-word response
messages increases. Additionally it shows
the average per-client message rate (bottom line)
as the number of clients increases to 92. Figure
8 presents similar results, showing the sustained
bandwidth with bulk transfers to the server as the
number of clients sending 1KB messages to it and
receiving 4-word replies increases. The average per-client
bandwidth gracefully and fairly degrades. We conjecture
that the fluctuation in the server's aggregate
message rates and bandwidths arises from acknowledgements
for reply messages encountering
congestion (namely other requests also destined for
the server). The variation in per-sender rates and
Figure
8: Delivered bandwidths with destination
endpoint contention within a single virtual network.
Figure
9: Round-trip times with destination end-point
contention within a single virtual network.
bandwidths are too small to be observable on the
printed page. Figure 9 shows the average per-client
round-trip time as the number of clients grows. The slope of the line is exactly the gap
measured in the LogP microbenchmarks.
5.4 Multiple Virtual Networks
We can extend the previous benchmark to stress
virtual networks. First, by increasing the number
of server endpoints up to the maximum of 7 that
can be cached in the interface memory, and then
continuing to incrementally add endpoints to increasingly
overcommit the resources. Thus, rather
than clients sharing a common destination end-
point, each client endpoint now has its own dedicated
server endpoint. With N clients, the server
process has N different endpoints where each one is
paired with a different client, resulting in N different
virtual networks. This contains client messages
within their virtual network and guarantees that
messages in other virtual networks make forward progress.
Figure
10: Aggregate server and per-client message
rates with small numbers of virtual networks.
Figure
10 shows the average server message rate
and per-client message rates (with error bars) over
a five minute interval. The number of clients continuously
making requests of the server varies from
one to seven. In this range, the network interface's
seven endpoint frames can accommodate all server
endpoints. This scenario stresses both the scheduling
of outgoing replies and the multiplexing of in-coming
requests on the server. The results show
server message rates within 11% of their theoretical
peak of 62,578 messages per second given the
measured LogP gap of 15.98 microseconds. The
per-client message rates are within 16% of their
ideal fair share of 1/Nth of the server's through-
put. Steady server performance and the graceful
response of the system to increasing load demonstrate
the effective operation of the flow-control,
endpoint scheduling, and multiplexing mechanisms
throughout the system.
Figure
11 extends the scenario shown in Figure
with one important difference. The server
host in Figure 10 is a single-threaded process
that polls its endpoints in a round-robin fashion.
In this extension, when the number of busy end-points
exceeds the network interface capacity, the
virtual network system actively loads and unloads
endpoints into and out of interface memory in an
on-demand fashion. When the server attempts to
write a reply message into a non-resident endpoint
(or when a request arrives for a non-resident endpoint),
a pagefault occurs and the virtual network
Figure
11: Aggregate server and per-client message
rates with large numbers of virtual networks.
driver moves the backing storage and re-maps
the endpoint pages as necessary. However, during
this time the server process is suspended and thus
it neither sends nor receives additional messages.
Messages arriving for non-resident endpoints and
for endpoints being relocated are NACKed. This
would result in a significant performance drop when
interface endpoint frames become overcommitted.
To extend this scenario and to avoid the pitfalls
of blocking, the server spawns a separate thread
(and Solaris LWP) per client endpoint. Each server
thread waits on a binary semaphore posted by the
communication subsystem upon a message arrival
that causes an endpoint receive queue to become
non-empty. Additional messages may be delivered
to the endpoint while the server thread is sched-
uled. Once running, the server thread disables further
message arrival events and processes a batch
of requests before re-enabling arrival events and
again waiting on the semaphore. Apart from being
a natural way to write the server, this approach
allows a large number of server threads to be suspended
pending resolution of their endpoint page-
faults while server threads with resident endpoints
remain runnable and actively send and receive messages
The results show that event mechanisms and
thread overheads degrade peak server message rates
by 15% to 53,488 messages per second. While variation
in average per-client message rates across the
five minute sampling interval remains small, the
variation in message rates between clients increases
with load, with some clients rates 40% higher than
average while others are 36% lower than average.
A finer-grain time series analysis (not shown) of
client communication rates reveals the expected be-
havior: clients with resident server endpoints burst
messages at rates as shown in Figure 10 while others
send no messages until both their endpoints become
resident and the appropriate server thread is
scheduled. Some clients miss their turns to send
an appreciable number of messages because their
server thread is not scheduled.
6 Related Work
Recent communication systems can be categorized
by their support for virtualization of network interfaces
and communication resources and their positions
on multiprogramming and error handling.
GAM, PM, and FM use message-based APIs
with little to no support for multiprogramming.
GAM is the canonical fast active message layer.
PM and FM add support for gang-scheduling of
parallel programs. These systems are driven primarily
by the needs of SPMD parallel comput-
ing, such as support for MPI and portability to
MPPs. FM handles receive buffer overruns but ignores
other types of network error. None of these
systems have explicit error models which hinders
the implementation of highly-available and non-scientific
applications.
SHRIMP, U-Net and Hamlyn are closer to our
system. These systems provide direct, protected
access to network interfaces using techniques similar
to those found in application device channels
[DPD94]. The SHRIMP project, which uses
virtual memory mapped communication model, has
run multiple applications and has preliminary multiprogramming
results. U-Net and U-Net/MM can
support multiprogramming. Hamlyn presented a
vision of sender-based communication that should
have been able to support multiprogramming, but
demonstrated results using only ping-pong style
benchmarks.
The most important distinction between previous
work and our own lies in the virtualization of
network interfaces and communication resources.
In SHRIMP, the level of indirection used to couple
virtual memory to communication effectively virtualizes
the network. U-Net provides virtualized
interfaces, but leaves routing, buffer management,
reliable message delivery and other protocol issues
to higher-layers. Hamlyn allows a process to map
contiguous regions of NIC-addressable host memory
into its address space. These "messages areas"
afford a level of indirection that allows the system
to virtualize the network interface. The position
taken on virtualization has direct impact on the error
model. In the event of an error, SHRIMP and
Hamlyn deliver signals to processes. U-Net delegates
responsibility for providing adequate buffer
resources and conditioning traffic to higher-level
protocols, and drops packets when resources are unavailable
Conclusions
Bringing direct, protected communication into
mainstream computing requires a general-purpose
and robust communication protocol. This paper introduces
the AM-II API and virtual networks abstraction
which extends traditional active messages
with reliable message delivery, a simple yet powerful
error model and supports use in arbitrary sequential
and parallel programs. In this paper we
have presented the design of the NIC-to-NIC transport
protocols required by this more general sys-
tem. For our Myrinet implementation, we have
measured the costs of the generality relative to
GAM, a minimal active message layer, on the same
hardware. In particular, we have explored the costs
associated with endpoint scheduling, flow control,
timer management, reliable message delivery and
error handling.
Using the LogP communication model, we measured
the basic parameters of the system. The
implementation achieves end-to-end latencies of
microseconds for short active messages with a
peak bandwidth of 31 MB/s. These numbers represent
twice the end to end latency and 77% of
the bandwidth provided by GAM. The cost of reliable
message delivery makes the most significant
contribution above basic communication costs. Using
additional benchmarks, we have demonstrated
that the protocols provide robust performance and
graceful degradation for the virtual networks ab-
straction, even when physical network interface resources
are overcommitted by factors of 12 or more.
These benchmarks demonstrate the feasibility of
truly virtualizing network interfaces and their resources
and show the importance of supporting
multi-threaded applications.
The NIC-to-NIC protocols discussed in this paper
perform well, and, enable a diverse set of timely
research efforts. Other researchers at Berkeley
are actively using this system to investigate explicit
and implicit techniques for the co-scheduling
of communicating processes [DAC96], an essential
part of high-performance communication in multiprogrammed
clusters of uni and multiprocessor
servers. Related work on clusters of SMPs [LMC97]
investigates the use of multiple network interfaces
and multiprotocol active message layers. The impact
of packet switched networks, such as gigabit
ethernet, on cluster interconnect protocols is an
open question. We are eager to examine the extent
to which our existing protocol mechanisms and
policies apply in this new regime.
Acknowledgments
This research is supported in part by ARPA grant
F30602-95-C-0014, the California State Micro Pro-
gram, Professor David E. Culler's NSF Presidential
Faculty Fellowship CCR-9253705, NSF Infrastructure
Grant CDA-8722788, an NSF Graduate Research
Fellowship, and a National Semiconductor
Corporation Graduate Research Fellowship. We
would like to thank Rich Martin for providing valuable
feedback on earlier versions of this paper. We
would also like to thank Eric Anderson for discussions
on specialization, and especially Andrea
Arpaci-Dusseau for comments and suggestions for
improving this paper.
--R
A Case for Networks of Workstations: NOW.
Two Virtual-Memory Mapped Virtual Network Interface Designs
A Gigabit per Second Local Area Network.
An Implementation of the Hamlyn Sender Managed Interface Archi- tecture
LogP Performance Assessment of Fast Network Interfaces.
Effective Distributed Scheduling of Parallel Work- loads
Experiences with a High-speed Network Adap- tor: A Software Perspective
The Interface Message Processor for the ARPA Computer Net- work
On the Self-Similar Nature of Ethernet Traffic
Active Message Application Programming Interface and Communication Subsystem Organization.
HPAM: An Active Message Layer for a Network of HP Workstations.
System Area Network Map- ping
High Performance Messaging on Workstations: Illinois Fast Messages (FM) for Myrinet.
Protocol Design for High Performance Network- ing: a Myrinet Experience
PM: A High-Performance Communication Library for Multi-user Parallel Environments
Active Messages: A Mechanism for Integrated Communication and Com- putation
--TR
--CTR
Hans Eberle , Nils Gura, Separated high-bandwidth and low-latency communication in the cluster interconnect Clint, Proceedings of the 2002 ACM/IEEE conference on Supercomputing, p.1-12, November 16, 2002, Baltimore, Maryland
Gang Qu , Miodrag Potkonjak, Techniques for energy minimization of communication pipelines, Proceedings of the 1998 IEEE/ACM international conference on Computer-aided design, p.597-600, November 08-12, 1998, San Jose, California, United States
Jin-Soo Kim , Kangho Kim , Sung-In Jung, Building a high-performance communication layer over virtual interface architecture on Linux clusters, Proceedings of the 15th international conference on Supercomputing, p.335-347, June 2001, Sorrento, Italy
Matt Welsh , Anindya Basu , Xun Wilson Huang , Thorsten von Eicken, Memory Management for User-Level Network Interfaces, IEEE Micro, v.18 n.2, p.77-82, March 1998
Thorsten von Eicken , Werner Vogels, Evolution of the Virtual Interface Architecture, Computer, v.31 n.11, p.61-68, November 1998
Evan Speight , Hazim Abdel-Shafi , John K. Bennett, Realizing the performance potential of the virtual interface architecture, Proceedings of the 13th international conference on Supercomputing, p.184-192, June 20-25, 1999, Rhodes, Greece
Alan M. Mainwaring , David E. Culler, Design challenges of virtual networks: fast, general-purpose communication, ACM SIGPLAN Notices, v.34 n.8, p.119-130, Aug. 1999
Stephan Brauss , Martin Frey , Martin Heimlicher , Andreas Huber , Martin Lienhard , Patrick Mller , Martin Nf , Josef Nemecek , Roland Paul , Anton Gunzinger, An efficient communication architecture for commodity supercomputers, Proceedings of the 1999 ACM/IEEE conference on Supercomputing (CDROM), p.19-es, November 14-19, 1999, Portland, Oregon, United States
Philip Buonadonna , Andrew Geweke , David Culler, An implementation and analysis of the virtual interface architecture, Proceedings of the 1998 ACM/IEEE conference on Supercomputing (CDROM), p.1-15, November 07-13, 1998, San Jose, CA | multiprogramming;cluster communications;network transport protocols;Myrinet LAN |
624870 | Assuring Good Style for Object-Oriented Programs. | The language-independent Law of Demeter, which encodes the ideas of encapsulation and modularity in an easy-to-follow form for object-oriented programmers, is presented. The law was developed during the design and implementation of the Demeter system, which provides a high-level interface to class-based, object-oriented systems. Two forms of the law, the class and object forms, are described. Its motivation is to ensure that the software is as modular as possible. Principles covered by the law include coupling control, information hiding, information restriction, information localization, and structured induction. An example is given to show how the law is applied, and valid violations are identified. It is shown how to transform a method that does not satisfy the law into one that does. | Introduction
This paper describes the object-oriented programming style rule called The Law of Demeter.
Along with the 'goto-rule' and other programming style rules inherited from the procedural
programming paradigm, many of which still apply, the Law should be part of the programming
knowledge that is considered when implementing object-oriented software. It is a partial response
to the questions: "When is an object-oriented program written in good style?", "Is there
some formula or rule which one can follow in order to write good object-oriented programs?",
"What metrics can we apply to an object-oriented program to determine if it is 'good' ?'', and
"What are the characteristics of good object-oriented programs?". In addition, it helps to
formalize the existing ideas on these issues that can be found in the literature [KP86] [Sny87].
There are two kinds of style rules for object-oriented programming: rules that constrain the
structure of classes and rules that constrain the implementation of methods. Style rules that
influence the structure of classes have been published elsewhere [Lie88]. The focus of this
paper is on a style rule that restricts how methods are written for a set of class definitions. In
particular, the Law restricts the message-sending statements in method implementations.
Informally, the Law says that any object receiving a message in a given method must be one
of a restricted set of objects. This set of preferred objects includes the method arguments, the
self pseudo-variable, and the immediate subparts of self. The self object in Smalltalk and
Flavors is called this in C++ and current in Eiffel.
The Law of Demeter is named after the Demeter System TM , which provides a high-level interface
to class-based object-oriented systems, and the Demeter Research Group at Northeastern
University, which develops the system. The Group has applied the Law in the development of
the system itself (formerly about fourteen thousand lines of Lisp/Flavors and now about ninety
thousand lines of C++ code) and in the implementation of numerous applications developed
with the system.
Our experience has been that the Law promotes maintainability and comprehensibility of the
software. This is a result of the small method size and the predictable message-passing patterns,
both of which are caused by the application of the Law. In other words, following the Law in
concert with rules such as, minimizing code duplication, minimizing the number of arguments,
and minimizing the number of methods, produces code with a characteristic and manageable
form.
We have also seen that adherence to the Law prevents programmers from encoding details of
the class hierarchy structure in the methods. This is critical to the goal of making the code
robust with respect to changes in the hierarchy structure. These changes occur very frequently
in the early stages of development.
The goal of the Law of Demeter is to organize and reduce the behavioral dependencies between
classes. Informally, one class behaviorally depends on another class when it calls a method
(through a message sent to an object) defined in the other class. The behavioral dependencies
encoded in the methods of an object-oriented program determine the complexity of the pro-
gram's control flow and the level of coupling between the classes. This paper examines these
relationships and illustrates how the Law impacts their existence.
Some other work describing the Law includes [LHR88] where we presented a proof which states
that any object-oriented program written in bad style can be transformed systematically into
a program obeying the Law of Demeter. The implication of this proof is that the Law of
Demeter does not restrict what a programmer can solve, it only restricts how he or she solves
it. We have also formulated interpretations of the Law for multiple programming languages
[LH89b]. Third party commentary on the Law includes [Boo91, Sak88, Bud91, Gra91]. The
thesis of Casais [Cas90] examines the Law in depth and assesses its favorable impact on the
problem of providing automatic support for rewriting code in response to changes in the class
hierarchy. A slight dissenting voice was raised by Wirfs-Brock et. al [WBW89] who prefer a
function centered approach to object-oriented design rather than the data centered approach
of Demeter.
The examples in this paper are written in the extended notation of the Demeter system.
Section 2 describes Demeter and its notation. The sections which follow will define the Law of
Demeter both formally and through examples, examining both practical and theoretical issues.
Demeter
The key contribution of the Demeter system is to improve programmer productivity by several
factors. This is achieved in a number of ways. First, Demeter provides a comprehensive
standard library of utilities. Second, a significant amount of code is generated from the programmer's
object-oriented design. Third, Demeter includes a number of tools that automate
common programming practices.
The key ideas behind the Demeter system are to use a more expressive class notation than in
existing object-oriented languages and to take advantage of the added information by providing
many custom-made utilities. These utilities are provided for a specific object-oriented language
like C++ or Flavors and greatly simplify the programming task.
Examples of utilities Demeter generates or provides generically are: class definitions in a programming
language, application skeletons, parsers, pretty printers, type checkers, object edi-
tors, re-compilation minimizers, pattern matchers and unifiers. The Demeter system helps the
programmer define the classes (both their structure and functionality) with several support
tools, including a consistency checker (semantic rules and type checking at the design level), a
learning tool which learns class definitions from example object descriptions, an LL(1) corrector
and an application-development plan generator [Lie88] [LR88]. The explanations and examples
presented in this paper are written in the extended Demeter notation which is described below.
One of the primary goals of the Demeter system is to develop an environment that eases the
evolution of a class hierarchy. Such an environment must provide tools for the easy updating of
existing software (the methods or operations defined on the class hierarchy). We are striving to
produce an environment that will let software be 'grown' in a continuous fashion. We believe a
continuous-growth environment will lead to a rapid prototyping/system-updating development
cycle.
The primary input to the system is a collection of class definitions. This collection is called
a class dictionary. Classes are described in Demeter using three kinds of class definitions:
construction, alternation, and repetition. The class dictionary shown in Figure 1 partially
defines the structure of a lending library. 1
1. A construction class definition is used to build a class from a number of other classes and
is of the form
class C has parts
    name_1 : SC_1
    ...
    name_n : SC_n
end class C
Here C is defined as being made up of n parts (called its instance variable values);
each part has a name (called an instance variable name) followed by a type (called an
instance variable type). This means that for any instance (or member) of class C the name
name_i refers to a member of class SC_i. The example shown in Figure 1 describes a
library class as consisting of a reference section, a loan section, and a journal section.
We use the following naming convention: instance variable names begin with a lower case
letter and class names begin with an upper case letter.
2. An alternation class definition allows us to express a union type. A class definition of the form
Footnote 1: We use two notations in the Demeter system. This introductory paper uses the extended notation. A
concise notation based on EBNF is used in later papers of the thesis. The abstract syntax of the concise notation and
that of the extended notation are identical: only the "syntactic sugar" is changed.
class Library has parts
    reference
    loan
    journal
end class Library

class BookIdentifier is either
    ISBN or LibraryOfCongress
end class BookIdentifier

class ReferenceSec has parts
    archive : Archive
end class ReferenceSec

class Archive has parts
    ...
end class Archive

class BooksSec has parts
    refBooks
end class BooksSec

class ListofBooks is list
    repeat { Book }
end class ListofBooks

class Catalog is list
    repeat { CatalogEntry }
end class Catalog

class Book has parts
    ...
end class Book

Figure 1: Library class dictionary
class C is either
    A or B
end class C
states that a member of C is a member of class A or class B (exclusively). For example, the
definition of BookIdentifier in Figure 1, expresses the notion that when somebody refers
to the identifier of a book they are actually referring to its ISBN code or its Library of
Congress code.
3. A repetition class definition is simply a variation of the construction class definition where
all the instance variables have the same type and the programmer does not specify the
number of instance variables involved. The class definition
class C is list
    repeat { A }
defines members of C to be lists of zero or more instances of A.
3 Forms of the Law
The Law of Demeter has two basic forms: the object form and the class form. The object form
is the primary form. However, it is not possible to statically check code with respect to the
object form. The two versions of the class form are compile-time checkable approximations.
The two versions of the class form are called the strict form and the minimization form. The
strict version rigorously restricts the dependencies between classes. However, in practice, it is
difficult to completely adhere to the strict version. These potential 'law-breaking' situations
are discussed below. The minimization version is the weakest expression of the Law and is
phrased as a guideline rather than a strict rule. It allows additional dependencies between
classes but asks the object-oriented programmer to minimize them and to document them by
declaring special acquaintance classes.
3.1 Object form
The object version of the Law is based on the concept of preferred supplier objects. These are
defined as follows:
A supplier object to a method M is an object to which a
message is sent in M. The preferred supplier objects to method M are:
- the immediate parts of self, or
- the argument objects of M, or
- the objects which are either created directly in M or held in global variables.
The programmer determines the granularity of the phrase "immediate subparts" of self for the
application at hand. For example, the immediate parts of a list class are the elements of the
list. The immediate parts of a "regular" class object are the objects stored in its instance
variables.
In theory, every object is a potential supplier to any particular method. When a supplier
object is sent a message in a method, the flow of control passes from the method to a method
implemented for the message receiver. However, the presence of dynamic binding and method
overriding in object-oriented programming languages can make it difficult to statically determine
how control flows from one method to the next. By restricting the set of supplier objects
we can contain the level of difficulty per method.
Object version of the Law of Demeter: Every supplier object
to a method must be a preferred supplier.
The object form expresses the spirit of the basic law and serves as a conceptual guideline
for the programmer to approximate. While the object version of the Law expresses what is
really wanted, it is hard to enforce at compile-time [LHR88]. The object version serves as an
additional guide whenever the strict class version of the Law accepts a program which appears
to be in bad style or when the strict class version of the Law rejects a program which appears
to be in good style.
Client. Method M is a client of method f attached to class C if inside M message f
is sent to an object of class C or to C. If f is specialized in one or more subclasses
then M is only a client of f attached to the highest class in the hierarchy. Method M
is a client of class C if it is a client of some method attached to C.
Supplier. If M is a client of class C then C is a supplier to M. Informally, a supplier
class to a method is a class whose methods are called in the method.
Acquaintance class. A class C1 is an acquaintance class of method M attached to
class C2, if C1 is a supplier to M and C1 is not
- the same as C2,
- a class used in the declaration of an argument of M, or
- a class used in the declaration of an instance variable of C2.
Preferred-acquaintance class. A preferred-acquaintance class of method M is
either a class of objects created directly in M or a class used in the declaration of a
global variable used in M.
Preferred-supplier class. Class B is called a preferred supplier to method M
(attached to class C) if B is a supplier to M and one of the following conditions
holds:
- B is used in the declaration of an instance variable of C, or
- B is used in the declaration of an argument of M, including C and its superclasses, or
- B is a preferred acquaintance class of M.
Table 1: Definitions of client, supplier, acquaintance, and preferred-supplier classes.
3.2 Class form
The class form's versions are expressed in terms of classes and can be supported by a compile-time
law-enforcement tool. Paralleling the object form, the strict version is based on the notion
of preferred supplier which is defined in table 1.
Figure 2 shows five examples of messages being sent to objects and of the resulting preferred-supplier
classes. To send a message f to object s, we use the C++ notation s->f() ("send s the
message f"). In Figure 2, class B is a preferred supplier to method M, and M is a preferred client
of B.
Case 1: Instance variable class. B is the class of an instance variable of C, and the
method M (attached to C) sends f() to that instance variable.

Case 2: Argument class. C has no parts; M sends f() to one of its arguments, which
is of class B.

Case 3: Argument class (self). M is attached to class B itself and sends f() to self
(in C++, self is called this).

Case 4: Newly created object class. newObject is a new B instance created directly
in M, and M calls newObject->f().

Case 5: Global class. s is a global of type B, and M calls s->f().

In each case, class B is a preferred supplier to M.

Figure 2: Examples of preferred suppliers.
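The five cases can also be restated as a single illustrative C++ sketch; the classes B and C,
the method M, and all member names below are hypothetical and not taken from the Demeter system.

class B {
public:
    void f() {}
};

B globalB;                         // Case 5: a global variable of class B

class C {
public:
    B* b;                          // an instance variable of class B

    void M(B* arg) {
        b->f();                    // Case 1: instance variable class
        arg->f();                  // Case 2: argument class
        this->helper();            // Case 3: argument class (self, i.e. this)
        B* newObject = new B();    // Case 4: newly created object class
        newObject->f();
        delete newObject;
        globalB.f();               // Case 5: global class
    }

    void helper() {}
};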
As before, every class in an object-oriented program is a potential supplier of any method.
However, it is best to limit the suppliers to a method to a small set of preferred classes. To
define these preferred suppliers we introduce the concept of an acquaintance class of a method
([Sak88], [HB77]). A precise definition of an acquaintance relies on a class version of the
supplier concept. Informally, a method's supplier class is a class whose methods are called in
the method.
The definitions make a distinction between the classes associated with the declaration of the
method and the classes used in the body of the method. The former includes the class where
the method is attached, its superclasses, the classes used in the declarations of the instance
variables and the classes used to declare the arguments of the method. In some sense, these are
an 'automatic' consequence of the method declaration. They can be easily derived from the
code and shown by a browser. All other supplier classes to the method are introduced in the
body of the method. They can only be determined by a careful reading of the implementation.
This second set of classes are the acquaintance classes. To show these classes within a code
browser would require a complete symbol table of the program.
The set of acquaintance classes is further partitioned into a preferred-acquaintance subset and
its complement. A method's preferred acquaintance class is either a class of objects created
directly in the method (by calling the acquaintance class's constructor) or a class used to
declare a global variable used in the method.
Given these definitions, the strict version of the Law of Demeter's class form says:
Strict class form of the Law of Demeter: Every supplier class to a
method must be a preferred supplier.
There are several benefits which result from applying the strict version of the Law's class form.
For example, if the interface of class A changes, then only the preferred-client methods of class
A and its subclasses require modification (provided that the changes required in the preferred
client methods do not change the interfaces of those classes). A class's interface can change in
many ways. For example, the programmer might modify an interface by changing an argument
or return type, by changing the name of a method, or by adding or deleting a method. A
class's preferred-client methods are usually a small subset of all the methods in a program; this
reduces the set of methods that need to be modified. This benefit clearly shows that the Law
of Demeter limits the repercussions of change.
Using the Law can also control the complexity of programming. For example, a programmer
reading a method needs to be aware of only the functionality of the method's preferred supplier
classes. These preferred suppliers are usually a small subset of all the classes in the
application and furthermore, they are "closely related" to the class to which the method is
attached. This relationship makes it easier to remember those classes and their functionality.
The second class version is more lenient than the strict form because it allows some non-preferred
supplier classes. In practice, it makes sense to allow some of these other acquaintance
classes. However, we suggest that the programmer clearly document the violations in order to
recover the Law's benefits. Acquaintance classes are typically used for three reasons:
- Stability: If a class is stable and/or if its interface will be kept upwardly compatible, it
makes sense to use it as an acquaintance class in all methods. The programmer specifies
such "global" acquaintance classes separately and they are included in the acquaintance
classes of all methods.
- Efficiency: The programmer might need to directly access instance variables of certain
other classes to increase run-time efficiency. In C++ terminology, these are classes of
which the method is a friend function.
- Object construction.
The permissive minimization version of the Law of Demeter is stated as follows:
Minimization form of the Law of Demeter: Minimize the number
of acquaintance classes of each method.
We can count the number of acquaintance classes for all methods to assess the level of conformance
of a program to the Law. If a class appears as an acquaintance class of several methods,
it is counted as many times as it appears.
If a statically typed language like C++ or Eiffel is extended with a facility to declare acquaintance
classes, the compiler can be modified in a straightforward way to check adherence to
the minimization version in the following sense: Each supplier that is an acquaintance class
must be explicitly declared in the list of the method's acquaintance classes. To easily check
the Law at compile time or even at design time, the programmer must provide the following
documentation for each method:
1. the types of each of the arguments and the result
2. the acquaintance classes.
The documentation gives programmers reading the method a list of the types they must know
about to understand the method. The compiler can check the completeness of each method's
documentation by examining the messages sent in the method and the classes of the objects
created directly by the method.
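For illustration, such documentation might be written as a comment block, since standard C++
has no facility for declaring acquaintance classes; the classes A, B, and C and the methods in
this sketch are hypothetical.

class B { public: void g() {} };

class A {
public:
    B* partB() { return &b; }
private:
    B b;
};

class C {
public:
    A* a;                        // instance variable of class A
    // Acquaintance classes of M: B
    //   (B is used in the body of M but is neither an argument class of M
    //    nor an instance variable class of C, so it must be declared.)
    void M() {
        a->partB()->g();         // sending g() to a B object makes B an acquaintance of M
    }
};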
The motivation behind the Law of Demeter is to ensure that the software is as modular as
possible. The Law effectively reduces the occurrences of certain nested message sends (function
calls) and simplifies the methods.
The Law of Demeter has many implications for widely known software engineering principles.
Our contribution is to condense many of the proven principles of software design into a single
statement that can be easily followed by object-oriented programmers and easily checked at
compile-time.
Principles covered by the Law include:
Coupling control. It is a well-known principle of software design to have minimal coupling
between abstractions (like procedures, modules, methods) [EW88]. The coupling can be
along several links. An important link for methods is the "uses" link (or call/return
link) that is established when one method calls another method. The Law of Demeter
effectively reduces the methods the programmer can call inside a given method and
therefore limits the coupling of methods with respect to the "uses" relation. The Law
therefore facilitates reusability of methods and raises the software's level of abstraction.
Information hiding. The Law of Demeter enforces one kind of information hiding: structure
hiding. In general, the Law prevents a method from directly retrieving a subpart of
an object which lies deep in that object's ``part-of'' hierarchy. Instead, the programmer
must use intermediate methods to traverse the "part-of" hierarchy in controlled small
steps [LG86].
In some object-oriented systems, the programmer can protect some of the instance variables
or methods of a class from outside access by making them private. This important
feature complements the Law to increase modularity but is orthogonal to it. The Law
promotes the idea that the instance variables and methods which are public should be
used in a restricted way.
Information restriction. Our work is related to the work by Parnas et al. [PCW85]
[PCW86] on the modular structure of complex systems. To reduce the cost of software
changes in their operational flight program for the A-7E aircraft they restricted the
use of modules that provide information that is subject to change. We take this point
of view seriously in our object-oriented programming and assume that any class could
change. Therefore, we restrict the use of message sends (function calls) by the Law
of Demeter. Information restriction complements information hiding: Instead of hiding
certain methods, they are made public but their use is restricted. Information restriction
does not offer the same level of protection as information hiding. However, when hiding
is not feasible, restriction offers a level of protection.
Localization of information. 2 Many software engineering textbooks stress the importance
of localizing information. The Law of Demeter focuses on localizing class information.
When programmers study a method they only have to be aware of types which are very
closely related to the class to which the method is attached. They can effectively be
ignorant (and independent) of the rest of the system. As the saying goes, 'ignorance
is bliss'. This important aspect of the Law helps reduce programming complexity. In
addition, the Law also controls the visibility of message names. Programmers can only
use message names in the interfaces of the preferred-supplier classes to a given method.
Structural induction. The Law of Demeter is related to the fundamental thesis of Denotational
Semantics. That is, "The meaning of a phrase is a function of the meanings of
its immediate constituents". This goes back to Frege's work on the principle of compositionality
in his Begriffsschrift [Hei67]. The main motivation behind the compositionality
principle is that it facilitates structural induction proofs.
5 Example
This section shows how to apply the Law of Demeter to a program that violates both the strict
and the minimization versions of the Law's class form. For this example, we use the classes
defined by the class dictionary fragment for a library shown in Figure 3.
The methods of the example are written in C++. However, the text should be comprehensible
for users of other object-oriented programming languages. In C++ terminology, a method is
called a 'function member' and an instance variable is called a `data member'. In the following
C++ code, the types of data members and function member arguments are pointer types to
classes.
The fragment of a C++ program in Figure 4 searches the reference section for a particular
book. (To keep the example small, we use direct access to instance variables instead of using
access methods.) The searchBadStyle function attached to ReferenceSec passes the message on to
its book (BooksSec), microfiche (MicroficheFiles) and document sections (Documents).
This function breaks the Law of Demeter. The first message marked /*/ sends the message
archMicrofiche to archive which returns an object of type MicroficheFiles. The method next sends
Footnote 2: Peter Wegner pointed out this aspect of the Law.
class Library has parts
    reference
    loan
    journal
end class Library

class BookIdentifier is either
    ISBN or LibraryOfCongress
end class BookIdentifier

class ReferenceSec has parts
    archive : Archive
end class ReferenceSec

class Archive has parts
    archMicrofiche : MicroficheFiles
    archDocs : Documents
end class Archive

class MicroficheFiles has parts
    ...
end class MicroficheFiles

class Documents has parts
    ...
end class Documents

class BooksSec has parts
    ...
end class BooksSec

Figure 3: Library revisited
this returned object the search message. However, MicroficheFiles is not an instance variable or
argument type of class ReferenceSec.
Because the structure of each class is clearly defined by the class dictionary, the programmer
might be tempted to accept the method searchBadStyle in Figure 4 as a reasonable solution.
But consider a change to the class dictionary. Assume the library installs new technology and
replaces the microfiche and document sections of the archive with CD-ROMs or Video-Discs:
class Archive has parts
    ...
end class Archive

class CDRomFile has parts
    ...
end class CDRomFile
The programmer now has to search all of the methods, including the searchBadStyle method,
for references to an archive with microfiche files. It would be easier to limit the modifications
only to those methods which are attached to class Archive. This is accomplished by rewriting
the methods in good style resulting in searchGoodStyle functions attached to ReferenceSec and
Archive.
Using good style also reduces the coupling with respect to the "uses" relation: In the original
class ReferenceSec {
public:
    Archive* archive;
    boolean searchBadStyle(Book* book) {
        return archive->archMicrofiche->search(book) ||    /*/
               archive->archDocs->search(book);            /*/
    }
    boolean searchGoodStyle(Book* book) {
        return archive->searchGoodStyle(book);
    }
};

class Archive {
public:
    MicroficheFiles* archMicrofiche;
    Documents* archDocs;
    boolean searchGoodStyle(Book* book) {
        return archMicrofiche->search(book) || archDocs->search(book);
    }
};

class MicroficheFiles {
public:
    boolean search(Book* book) { ... }
};

class Documents {
public:
    boolean search(Book* book) { ... }
};

class Book { ... };

Figure 4: C++ fragment to search the reference section.
version, ReferenceSec was coupled with BooksSec, Archive, MicroficheFiles and Documents, but now
it is coupled only with BooksSec and Archive.
Another way to examine the effects of using the Law is to translate a program, in both good
and bad style, into a dependency graph. In the graphs, the nodes of the graph are classes. An
edge from class A to class B has an integer label which indicates how many calls are written in
the text of the functions of A to the functions of B. If a label is omitted from an edge, it means
that its value is 1. Access to an instance variable is interpreted as a call to read the instance
variable. Figure 5a shows the graph for the program which violates the Law of Demeter; Figure
5b shows the graph for the one that follows the Law.
Figure 5: Dependency graph representation.
6 Valid violations
The Law of Demeter is intended to act as a guideline, not as an absolute restriction. The
minimization version of the Law's class form gives programmers a choice of how strongly they
want to follow the Law: The more non-preferred acquaintance classes used, the weaker the
adherence to the strict version. In some situations, the cost of obeying the strict version of the
Law may be greater than the benefits. However, when programmers willingly violate the Law,
they take on the responsibility of declaring the required acquaintance classes. This is critical
documentation for future maintenance of the software.
As an example of where the cost of applying the Law is higher than its benefits, consider the
following prototypical method which is in bad style, coded in both Flavors and C++:
Flavors:
(defmethod (C :M) (p)
  ... (send (send p ':F1) ':F2) ...)   ; F2 stands for some further message sent to the subpart returned by F1

C++:
void C::M(D* p)
{ ... p->F1()->F2(); ... }   // F2 stands for some further message sent to the subpart returned by F1

where p is an instance of class A and F1 returns a subpart of p. If the immediate composition
of A changes, the method M may have to change also because of F1.
There are two situations when it is reasonable to leave the above as it is:
- F1 is intended to serve as a "black box" and the programmer knows only about the
types of its arguments and the return type. In this case, the maintainer of F1 has the
responsibility to ensure that any updates to F1 are upwardly compatible so programmers
of the function are not penalized for using it.
- If run-time efficiency is important to the application, the use of mechanisms such as the
C++ friend function feature may be necessary. Friend functions should be used carefully,
since whenever the private members of a class change, the friend functions of the class
may also require change.
Consider another example that shows where the costs of using the Law might outweigh its
benefits. For an application which solves differential equations the class dictionary may have
the following definitions:
class ComplexNumber has parts
    realPart
    ...
end class ComplexNumber

Flavors:
(defmethod (Vector :R) (c :ComplexNumber)
  ( ... (send (send c :realPart) :project self) ... ))

C++:
void Vector::R(ComplexNumber* c)
{ ... c->realPart()->project(this); ... }
The method R is in the same form as M in the previous example and is in bad style for the same
reason. The question here is whether it is important to hide the structure of complex numbers
and to rewrite the method. In this application, where the concept of a complex number is well
defined and well understood, it is unnecessary to rewrite the method so that the Law is obeyed.
In general, if the application concepts are well defined and the classes which implement those
concepts are stable, in the sense that they are very unlikely to change, then such violations as
the above are acceptable.
Our experience has been that writing programs which follow the Law of Demeter decreases
the occurrences of nested message sending and decreases the complexity of the methods, but it
increases the number of methods. The increase in methods is related to the problem outlined
in [LG86] which is that there can be too many operations in a type. In this case the abstraction
may be less comprehensible, and implementation and maintenance are more difficult. There
might also be an increase in the number of arguments passed to some methods.
One way of correcting this problem is to organize all the methods associated with a particular
functional (or algorithmic) task into "Modula-2 like" module structures as outlined in [LR88].
The functional abstraction is no longer a method but a module which will hide the lower-level
methods.
7 Conforming to the Law
Given a method which does not satisfy the Law, how can a programmer transform it so that it
conforms to the Law? In [LHR88] we described an algorithm to transform any object-oriented
program into an equivalent program which satisfies the Law. In other words, we showed that
we can translate any object-oriented program into a "normal form" which satisfies the Law's
strict version.
There are other, less automatic, ways to achieve this goal which may help to derive more
readable or intuitive code. These also may help to minimize the number of arguments passed
to methods and the amount of code duplication. Two such techniques are called lifting and
pushing.
To explain these techniques, we need a preliminary definition. We say that class B is a part-
class of class A, if B is the class of one of A's instance variables or B is a part-class of a class
of one of A's instance variables.
Consider the method:
Flavors:
(defmethod (C :M) ()
(send (send self ':m1) ':m2))
C++:
void C::M()
{ ... this->m1()->m2(); ... }
where T is the class of the object returned by m1. T is not a preferred supplier class of M. We
distinguish two cases:
1. T is a part-class of C.
2. C is a part-class of T.
Lifting. This technique is applicable in the first case (T is a part-class of C). The idea is
to make m1 return an object of an instance variable or argument class of C and adjust m2
accordingly. Method m2 is lifted up in the class hierarchy, from being attached to class T to
being attached to an instance variable class of C.
For example, suppose a program is needed to parse an input using a grammar. A grammar
is made up of a list of rules (productions) indexed by rule name. A fragment of the parse
application is shown in Figure 6. This program fragment uses one acquaintance class (class
Body in the method parse for Grammar).
The problem with the fragment is that method lookUp of Grammar returns an object of class Body
which is not an instance variable class of Grammar. To transform the first method into good
style, we must make the lookUp method return an instance of Rule and then adjust parseDetails.
Figure 7 shows the modified version. The improved program fragment uses no acquaintance
class.
But this lifting approach does not always work; consider Figure 8. This program fragment uses
one acquaintance class (class Rule in method parse of Grammar). Here, we cannot transform the
first method into good style by lifting the return type of the lookUp method.
Pushing. This technique is applicable in cases 1 and 2 (i.e. T is a part class of C and C
is a part class of T respectively). The second case is slightly more complicated as it involves
class Grammar is list
    repeat { Rule }
end class Grammar

class Rule has parts
    ...
end class Rule

Flavors:
(defmethod (Grammar :parse) (ruleName :type Symbol)
  (send (send self ':lookUp ruleName) ':parseDetails))

(defmethod (Grammar :lookUp) (ruleName :type Symbol)
  ... (send (send rule ':lookUp ruleName) ':getBody))

(defmethod (Body :parseDetails) ()
  ...)

C++:
void Grammar::parse(Symbol* ruleName)
{ ... this->lookUp(ruleName)->parseDetails(); ... }

Body* Grammar::lookUp(Symbol* ruleName)
{ ...
  return rule->lookUp(ruleName)->getBody(); }

void Body::parseDetails()
{ ... }

Figure 6: Example code that violates the Law of Demeter.
Flavors:
(defmethod (Grammar :parse) (ruleName :type Symbol)
  (send (send self ':lookUp ruleName) ':parseDetails))

(defmethod (Grammar :lookUp) (ruleName :type Symbol)
  ... (send rule ':lookUp ruleName))

(defmethod (Rule :parseDetails) ()
  ... (send self ':getBody) ...)

C++:
void Grammar::parse(Symbol* ruleName)
{ ... this->lookUp(ruleName)->parseDetails(); ... }

Rule* Grammar::lookUp(Symbol* ruleName)
{ ...
  return rule->lookUp(ruleName); }

void Rule::parseDetails()
{ ... this->getBody(); ... }

Figure 7: New parse implementation.
class Grammar has parts
    ruleList : RuleList
end class Grammar

class RuleList is list
    repeat { Rule }
end class RuleList

class Rule has parts
    ...
end class Rule

Flavors:
(defmethod (Grammar :parse) (ruleName :type Symbol)
  (send (send self ':lookUp ruleName) ':parseDetails))

(defmethod (Grammar :lookUp) (ruleName :type Symbol)   ; returns object of type Rule
  (send ruleList ':lookUp ruleName))

(defmethod (RuleList :lookUp) (ruleName :type Symbol)
  ... )

(defmethod (Rule :parseDetails) ()
  ... )

C++:
void Grammar::parse(Symbol* ruleName)
{ ... this->lookUp(ruleName)->parseDetails(); ... }

Rule* Grammar::lookUp(Symbol* ruleName)
{ ...
  return ruleList->lookUp(ruleName); }

Figure 8: Law violation that cannot be fixed with the lifting technique.
traveling up the object hierarchy but the general technique is the same as for the first case.
The pushing technique is just a variation of the top-down programming technique of pushing
the responsibility for doing the work to a lower level procedure.
In the lifting example, a problem arose because the Grammar class has the task of sending the
parseDetails message. This task is really the responsibility of class RuleList which knows more
about Rule details than Grammar. Figure 9 shows an improved design that does not use any
acquaintance classes. This is also the technique used in Figure 4 to write searchGoodStyle.
Flavors:
(defmethod (Grammar :parse) (ruleName)
  (send self ':lookUpParse ruleName))

(defmethod (Grammar :lookUpParse) (ruleName)
  (send ruleList ':lookUpParse ruleName))

(defmethod (RuleList :lookUpParse) (ruleName)
  (send (send self ':lookUp ruleName) ':parseDetails))

C++:
void Grammar::parse(Symbol* ruleName)
{ this->lookUpParse(ruleName); }

void Grammar::lookUpParse(Symbol* ruleName)
{ ruleList->lookUpParse(ruleName); }

void RuleList::lookUpParse(Symbol* ruleName)
{ this->lookUp(ruleName)->parseDetails(); }

Figure 9: Example transformed with the pushing technique.
The redesign has introduced an additional method. If list classes are viewed as stable (for
example, as is the case in Smalltalk), there is no need for the redesign and it is justified to keep
the acquaintance class.
8 Conclusion
This paper introduced a simple rule which, when followed, results in the production of structured
and maintainable object-oriented software. The rule, called the "Law of Demeter",
encodes the ideas of data hiding and encapsulation in an easy to follow form for the object-oriented
programmer. The resulting code is more robust, allowing individual classes to be
redesigned while leaving most of the remaining software intact. Furthermore, by effectively
reducing the effects of local changes to a software system, adherence to the Law can reduce
many of the headaches of software maintenance.
But following the Law exacts a price. The greater the level of interface restriction (a refinement
of hiding), the greater the penalties are in terms of the number of methods, execution speed,
number of arguments to methods and sometimes code readability.
But in the long term these are not fatal penalties. We have found that packaging the related
methods and definitions together helps significantly in organizing the increased number
of smaller methods [Lie92]. This facility along with the support of an interactive CASE environment
can erase some of the penalties of following the Law. The Demeter System includes a
formalism, and a code generation mechanism, called Propagation Patterns [LXSL91, LHSLX92]
which removes most of the programming burden of following the Law. This utility generates
major parts of the required code. The execution-speed problem can be countered by using
preprocessor or compiler technologies like in-line code expansion or code optimization similar
to the way tail recursion optimization is done.
In the application of the Law throughout the development of the Demeter System the Law
never prevented us from achieving our algorithmic goals, although some of the methods needed to
be rewritten. This task was not difficult and the results were generally more satisfying.
Acknowledgements
We would like to thank Gar-Lin Lee for her feedback and contributions
during the development of the ideas in this paper. Thanks also to Jing Na who, along with
Gar-Lin, tested the practicality of using the Law during the production of some of the Demeter
system software. Mitch Wand was instrumental in initiating the investigation into the weak
and strong interpretations. Carl Wolf suggested that the object version of the Law is the one
to be followed conceptually. Special thanks are due to Arthur Riel who was a principal author
on earlier versions of this paper.
Members of the CLOS community (Daniel Bobrow, Richard Gabriel, Jim Kempf, Gregor Kiczales,
Alan Snyder, and others) have participated in the debate and/or formulation of the CLOS version
of the Law.
We would like to thank Markku Sakkinen for his interesting paper [Sak88] and his helpful mail
messages about the Law of Demeter. Cindy Brown and Mitch Wand convinced us that we
should use a more readable notation than EBNF and they helped us in designing it. Paul
Steckler and Ignacio Silva-Lepe made several contributions to the extended Demeter notation.
Bibliographic Note
Earlier reports on the work described in this paper have appeared as [LHR88, LH89b, LH89a].
--R
An Introduction to Object-Oriented Programming
Managing class evolution in object-oriented systems
Assessing the quality of abstract data types written in Ada.
Laws for communicating parallel processes.
From Frege to Gödel
A Taste of Smalltalk.
Abstraction and Specification in Program Development
Assuring good style for object-oriented programs
Formulations and Benefits of the Law of Demeter.
Experience with a graph-based propagation pattern programming tool
Component Enhancement: An Adaptive Reusability Mechanism for Groups of Collaborating Classes.
Demeter: A CASE study of software growth through parameterized classes.
Propagation patterns: Graph-based specifications of cooperative behavior
The modular structure of complex systems.
Enhancing reusability with information hiding.
"the Law of Demeter"
Inheritance and the development of encapsulated software systems.
--TR
Abstraction and specification in program development
Inheritance and the development of encapsulated software systems
Demeter: a CASE study of software growth through parameterized classes
Comments on "the law of demeter" and C++
Object-oriented programming: an objective sense of style
| modularity;coupling control;object forms;object-oriented programs;law of demeter;structured induction;encapsulation;information restriction;user interfaces;information hiding;object-oriented programming;demeter system;class forms;good style;high-level interface;programming environments;information localization
624907 | Recognizing Design Decisions in Programs. | The authors present a characterization of design decisions that is based on the analysis of programming constructs. The characterization underlies a framework for documenting and manipulating design information to facilitate maintenance and reuse activities. They identify and describe the following categories of design decisions: composition and decomposition; encapsulation and interleaving; generalization and specialization; representation; data and procedures; and function and relation. The authors discuss how to recognize and represent design decisions. | to express solutions to problems, other concerns such as target machine characteristics intrude. The middle
ground between specifications and code is more nebulous. Webster 1 surveys the variety of notations and
graphical representations that have been used. The design process as a whole can be described as repeatedly
taking a description of intended behavior (whether specification, intermediate representation, or code) and
refining it. Each refinement reflects an explicit design decision. Each limits the solution to a class of
implementations within the universe of possibilities.
Design involves making choices among alternatives. Too often, however, the alternatives that are considered
and the rationale for the final choice are lost. One reason design information is lost is that the
design representations currently in use are not expressive enough. While they are adequate for describing
the cumulative results of a set of decisions, particularly in regard to the structure of components and how
they interact, they do not attempt to represent the incremental changes that come with individual design
decisions. Also, they fail to describe the process by which decisions are reached, including the relevant
problem requirements and the relative merits of the alternative choices. The well known tendency for system
structure to deteriorate over time is accelerated when the original structure and intent of the design are not
retained with the code.
Design decisions are not made in isolation. Often a solution idea is best expressed through several
interrelated decisions. Unless the interdependencies are explicitly documented, the unwary maintenance
programmer will fail to notice all of the implications of a proposed change. Design ideas that are expressed
via interrelated decisions are called delocalized information by Balzer 2 and delocalized plans by Soloway. 3
If design decisions and their rationale were captured during initial program development, and if a suitable
notational mechanism existed to describe their interdependencies, then several aspects of software engineering
would profit. First, initial development would benefit from the increased discipline and facilitated communication
provided by the notation. Opportunities for software reuse would be multiplied by the availability
of design information that could be reused as is or transformed to meet new requirements. Finally, software
maintenance would be vastly improved by the explicit recognition of dependencies and the availability of
rationale.
Studying various areas of Computer Science reveals several categories of design decisions. Abstraction mechanisms
in programming languages provide evidence of the need to express design ideas in code. Semantic
relationships from data base theory support the modeling of information structures from a variety of fields.
Finally, examination of tools used for reverse engineering and software maintenance indicates decisions that
have been found useful in understanding existing programs.
2.1 Composition and Decomposition
Probably the most common design decision made when developing a program is to split it into pieces. This
can be done, for example, by breaking a computation into steps or by defining a data structure in terms of
its fields. Introducing a construct and then later decomposing it supports abstraction by allowing decisions
to be deferred and details hidden. Complexity is managed by using an appropriate name to stand for a
collection of lower level details.
If a "top-down" approach is taken to design, then a program is decomposed into pieces. If a "bottom-up"
approach is used, then a program is composed from available sub-components. Regardless of the approach,
the result is that a relationship has been established between an abstract element and several more detailed
components.
Data and control structures are programming language features that support these decisions. For exam-
ple, a loop is a mechanism for breaking a complex operation into a series of simpler steps. Likewise, arrays
and record structures are ways of collecting related data elements into a single item. Of course, building
an expression from variables, constants, and operators is an example of composition. So too is building a
system from a library of components.
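As an illustrative C++ sketch (the Account and Date types are hypothetical), composition builds a
record from simpler parts while a loop decomposes a computation into steps:

#include <cstddef>

// Composition: an Account is built up from simpler parts.
struct Date { int year, month, day; };
struct Account {
    double balance;
    Date   opened;
};

// Decomposition: computing the total balance is broken into one step
// per account by a loop.
double totalBalance(const Account* accounts, std::size_t n) {
    double total = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        total += accounts[i].balance;
    return total;
}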
2.2 Encapsulation and Interleaving
Structuring a program involves drawing boundaries around related constructs. Well-defined boundaries or
interfaces serve to limit access to implementation details while providing controlled access to functionality by
clients. The terms encapsulation, abstract data types, and information hiding, are all related to this concept.
Encapsulation is the decision to gather selected parts of a program into a component, variously called a
package, cluster, or module. The component's behavior is restricted by a protocol or interface so that other
parts of the system can only interact with the component in limited ways. Parnas 4 introduced the term
information hiding to describe this approach to structuring a system.
Encapsulation is a useful aid to both program comprehension and maintenance. A decision to encapsulate
the implementation of a program component reflects the belief that the encapsulated construct can be thought
of as a whole with a behavior that can be described by a specification that is much smaller than the total
amount of code contained within the component. If the component hides the details of a major design
decision, then when that decision is altered during later maintenance, side-effects of the change are limited.
The alternative to encapsulation is interleaving. It is sometimes useful, usually for reasons of efficiency,
to intertwine two computations. For example, it is often useful to compute the maximum element of a vector
as well as its position in the vector. These could be computed separately, but it is natural to save effort by
doing them in a single loop. Interleaving in this way makes the resulting code harder to understand and
modify. A number of useful interleaving transformations have been collected by Feather. 5
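An illustrative C++ sketch of the vector example (not taken from Feather's catalog); an
encapsulated alternative would compute the maximum and its position in two separate passes:

#include <vector>
#include <cstddef>

// Interleaved: one loop computes both the maximum element and its position.
// (Assumes v is non-empty.)
void maxAndPosition(const std::vector<double>& v,
                    double& maxValue, std::size_t& maxIndex) {
    maxValue = v[0];
    maxIndex = 0;
    for (std::size_t i = 1; i < v.size(); ++i) {
        if (v[i] > maxValue) {     // the two computations share this comparison
            maxValue = v[i];
            maxIndex = i;
        }
    }
}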
2.3 Generalization and Specialization
One of the most powerful features of programming languages is their ability to describe a whole class of
computations using a subprogram parameterized by arguments. Although procedures and functions are
usually thought of as abstractions of expressions, the ability to pass arguments to them is really an example
of generalization. The decision concerning which aspects of the computation to parameterize is one of the
architectural decisions made during software design.
Generalization is a design decision in which a program specification is satisfied by relaxing some of its
constraints. For example, a program might be required to compute the logarithm of a limited set of numbers.
The requirement could be satisfied by providing access to a general purpose library function for computing
logarithms. The library function would be capable of computing logarithms of all of the set of required
numbers as well as many others. The decision to use the library function is a generalization decision.
Abstractions other than numerical computations may also be parameterized. The Ada programming
language provides a generic facility that allows data types and functions to parameterize packages and
subprograms. Many languages provide macro capabilities that parameterize textual substitutions. Variant
records in Pascal and Ada and type unions in C are examples of the use of a single general construct to
express a set of special cases, depending on the value of a discriminant field.
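In C++, the analogous parameterization decision can be expressed with a template; the function
and names below are purely illustrative:

#include <string>

// Generalization: one parameterized definition serves many argument types,
// much as a general purpose library routine serves many callers.
template <typename T>
T maxOf(T a, T b) {
    return (a < b) ? b : a;
}

int main() {
    int         i = maxOf(2, 3);                                   // instantiated for int
    double      d = maxOf(2.5, 1.0);                               // instantiated for double
    std::string s = maxOf(std::string("a"), std::string("b"));     // and for strings
    return (i == 3 && d == 2.5 && s == "b") ? 0 : 1;
}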
Another example of generalization concerns interpreters for virtual machines. It is often useful for a
designer to introduce a layer of functionality that is controlled by a well-defined protocol. The protocol can
be thought of as the programming language for the virtual machine implemented by the layer. The decision
to introduce the protocol reflects the desire to provide more generality than a set of disparate procedures
would offer.
Specialization is a design decision related to generalization. Specialization involves replacing a program
specification by a more restricted one. Often an algorithm can be optimized based on restrictions in the
problem domain or facilities of the programming language. Although these optimizations can dramatically
improve performance, they have a cost in lengthening the program text and making it harder to understand.
Another manifestation of this can be seen in the early stages of the design process. Often specifications are
expressed in terms of idealized objects such as infinite sets and real numbers. Actual programs have space
and precision limitations. Thus a program is necessarily a special case of a more general computational
entity.
In object-oriented programming languages such as Smalltalk and C++, the designer is provided with
a collection of existing class definitions. A class provides an implementation for objects that belong to it.
Knowledgeable developers can quickly implement new classes by specializing existing classes. A new class is
said to inherit the common functionality from its more general predecessor.
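A minimal C++ sketch of specialization by inheritance; the Window classes below are hypothetical
and not drawn from any particular class library:

#include <iostream>

// A general class assumed to come from an existing library of components.
class Window {
public:
    virtual ~Window() {}
    virtual void draw() const { std::cout << "plain window\n"; }
    void move(int dx, int dy) { x += dx; y += dy; }    // inherited unchanged
protected:
    int x = 0, y = 0;
};

// A new class obtained by specializing Window: it inherits move() and
// overrides only the behavior that differs.
class TitledWindow : public Window {
public:
    explicit TitledWindow(const char* t) : title(t) {}
    void draw() const override { std::cout << "window titled " << title << "\n"; }
private:
    const char* title;
};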
Generalization and specialization decisions have long-term implications on the program being developed.
It is easier to reuse or adapt a generalized component than a restricted one. Generality has a cost, however.
Generalized components may be less efficient than specially tuned versions. Moreover, there is often more
effort required to test a component intended for wide application than its more specific counterpart.
2.4 Representation
Representation is a powerful and comprehensive design decision. Representation is used when one abstraction
or concept is better able to express a problem solution than another. This may arise because the target
abstraction more ably captures the sense of the solution or because it can be more efficiently implemented
on the target machine. For example, a programmer may choose a linked list to implement a pushdown stack.
Bit vectors are used to represent finite sets. Representation is the decision to use one construct in place of
another functionally equivalent one.
Representation must be carefully distinguished from specialization. If a (possibly infinite) pushdown
stack is implemented by a fixed length array, then two decisions have been made. The first decision is that
for the purposes of this program a bounded length stack will serve. This is a specialization decision. Then
the bounded stack can be readily implemented by a fixed length array and an index variable. This is a
representation choice.
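The two decisions can be made explicit in an illustrative C++ sketch; the capacity of 100 is an
arbitrary bound chosen only for the example:

#include <cassert>
#include <cstddef>

// Specialization: a stack bounded at 100 elements stands in for the
// unbounded stack of the specification.
// Representation: the bounded stack is represented by a fixed-length
// array together with an index variable.
class BoundedStack {
public:
    BoundedStack() : top(0) {}
    void push(int x) { assert(top < CAPACITY); data[top++] = x; }
    int  pop()       { assert(top > 0);        return data[--top]; }
    bool empty() const { return top == 0; }
private:
    static const std::size_t CAPACITY = 100;
    int         data[CAPACITY];    // the representing array
    std::size_t top;               // index of the next free slot
};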
When the distinction between specialization and representation is kept in mind, representation can be
seen to be a flexible and symmetric decision. In one context it may be appropriate to represent one construct
by another. In a different situation, the inverse representation might be used. For example, operations on
vectors are usually implemented by a loop. In the presence of vector processing hardware, however, the
compiling system may invert the representation to reconstruct the vector operation.
Another example of representation comes from the early stages of design. Formal program specifications
are often couched in terms of universal and existential quantification; e.g. "All employees who make over
$50,000 per year." Programming languages typically use loops and recursion to represent these specifications.
2.5 Data and Procedure
Variables are not necessary in order to write programs; values can always be explicitly recomputed. Program
variables have a cost in terms of the amount of effort required to comprehend and modify a program. On
the other hand, they can serve to improve the efficiency of the program and, by a judicious choice of names,
serve to clarify its intent.
Programmers must be aware of the invariants relating the program variables when inserting statements
into a program. For example, suppose a maintenance programmer is investigating a loop that reads records
from a file and keeps count of the number of records read. The programmer has been asked to make the loop
disregard invalid records. Because the counter is used to satisfy design dependencies between this loop and
other parts of the program, the programmer must modify the semantics of the counter. The programmer
must choose from among three alternatives: counting the total number of records, counting the number of
valid records, or doing both. To make the correct choice, the programmer must determine how the counter
is used later in the program. In this case, the programmer can replace references to variables with the
computations that produced their most recent values. The resulting statements can be rearranged in order
to reconstruct the high-level operations applied to the file. Having done this, the programmer can confront
the semantic problems raised by the distinction between valid and invalid records. Once those semantic
problems have been solved, components can again be delocalized and assignment statements reintroduced.
The introduction of variables constrains the sequence in which computations may be made. This increases
the possibility of errors when modifications made during maintenance accidently violate an implicit ordering
constraint or when variables are computed in the wrong order.
The alternative to introducing a variable is to recompute values when they are needed. This is sometimes
used to make a program more readable. A reader does not have to search the program for the declaration
and assignments to a variable but can directly use local information. Optimizing compilers often reduce the
cost associated with recomputation, particularly where constant expressions are involved.
The decision to repeat a computation or to save the result of the computation in a variable reflects
the deeper concept of the duality of data and procedure. The implementation of a finite state machine
is an example where the data/procedure decision is apparent. In the data-oriented approach, possibilities
for the machine's next state are recorded in a two-dimensional array, often called the "next-state" table.
Alternatively, the next-state information can be computed directly in code for each of the states. Although
this may seem unusual, it is exactly the technique that is used to speed up lexical analyzers. Token classes are
first represented as regular expressions and then as states in a state machine. The states are then compiled
directly into case/switch statements in the target programming language. The reason for doing this is
efficiency: in the procedural version the cost of indexing into the array is avoided.
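The duality can be made concrete with a small C sketch, assuming an invented three-state machine; both forms implement the same transitions.

enum state { S0, S1, S2 };
enum input { A, B };

/* Data-oriented form: the "next-state" table. */
static const enum state next_state_table[3][2] = {
    /*        A   B  */
    /* S0 */ {S1, S0},
    /* S1 */ {S2, S0},
    /* S2 */ {S2, S1},
};

static enum state step_table(enum state s, enum input in)
{
    return next_state_table[s][in];      /* one array index */
}

/* Procedure-oriented form: the same transitions expressed directly as code,
   in the style a scanner generator would emit. */
static enum state step_code(enum state s, enum input in)
{
    switch (s) {
    case S0: return in == A ? S1 : S0;
    case S1: return in == A ? S2 : S0;
    case S2: return in == A ? S2 : S1;
    }
    return s;
}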
2.6 Function and Relation
Logic programming languages allow programs to be expressed as relations between sets of data. For example,
sorting is described as the relationship of two sets, both of which contain the same members, one of which
is ordered. In Prolog, this might be described by a rule along the lines of sort(S1, S2) :- permutation(S1, S2), ordered(S2).
If S1 is given as input, then a sorted version S2 is produced. But if, instead, an ordered version S2 is
provided, then unordered permutations are produced in S1. The decision as to which variable is input and
which is output can be left up to the user at run-time instead of the developer at design time.
Formal functional specifications are often non-deterministic in this regard. If there is a preferred di-
rection, then the designer may use a function instead of a relation to express it. But this may reflect an
implementation bias rather than a requirement.
Of course, more traditional programming languages do not support non-deterministic relationships. Even
in Prolog it may be impossible, for any given problem, to write a set of rules that works equally well in both
directions. Thus, the designer is usually responsible for selecting the preferred direction of causality; that is,
which variables are input and which are output.
An alternative approach is to provide separate functions that support both directions. For example, in a
student grading system, it may be useful to provide a function that, when given a numeric grade, indicates
the percentage of students making that grade or higher. It may also be of value to provide the inverse
function that, when given a percentage, returns the numeric grade that would separate that proportion of
the students.
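A possible C sketch of the two explicit directions follows; the data layout and the way the inverse cutoff is approximated are assumptions made for illustration.

/* Fraction of students scoring at or above the given numeric grade. */
static double percent_at_or_above(const double *grades, int n, double grade)
{
    int count = 0;
    for (int i = 0; i < n; i++)
        if (grades[i] >= grade)
            count++;
    return n ? 100.0 * count / n : 0.0;
}

/* Inverse direction: the (approximate) cutoff grade such that at least
   `percent` of the class scores at or above it.  Assumes n > 0. */
static double grade_at_percentile(const double *grades, int n, double percent)
{
    double cutoff = grades[0];
    for (int i = 1; i < n; i++)          /* start from the lowest grade */
        if (grades[i] < cutoff)
            cutoff = grades[i];
    for (int i = 0; i < n; i++)          /* raise the cutoff while enough */
        if (grades[i] > cutoff &&        /* students remain at or above it */
            percent_at_or_above(grades, n, grades[i]) >= percent)
            cutoff = grades[i];
    return cutoff;
}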
Software maintenance and reuse activities require the detection of design decisions in existing code, which
is a part of reverse engineering. Reverse engineering is the process of constructing a higher level description
of a program from a lower level one. Typically, this means constructing a representation of the design of a
program from its source code. The process is bottom-up and incremental; low level constructs are detected
and replaced by their high-level counterparts. If this process is repeated, gradually the overall architecture
of the program emerges from the programming language-dependent details.
The program below is taken from a paper by Basili and Mills 6 in which they use flow analysis and techniques
from program proving to guide the comprehension process and document the results. It will be used
as a realistic example of production software in which design decisions can be recognized. The program is
shown in Figure 1.
043      IF (ABS(XM) .LE. TOL1) GO TO 90
048      IF (ABS(E) .LT. TOL1) GO TO 70
078      IF (P .GE. ABS(0.5*E*Q)) GO TO 70
Figure 1: The ZEROIN program (FORTRAN source; only fragments of the listing are reproduced here).
ZEROIN finds the root of a function, F, by successively shrinking the interval in which it must oc-
cur. It does this by using one of several approaches (bisection, linear interpolation, and inverse quadratic
interpolation), and it is the interleaving of the approaches that complicates the program.
3.1 Interleaving of Program Fragments
A casual examination of the program indicates that it contains two WRITE statements that provide diagnostic
information when the program is run. In fact, these statements display the progress that the program
makes in narrowing the interval containing the root. The execution of the WRITE statements is controlled
by the variable IP. IP is one of the program's input parameters, and an examination of the program indicates
that it is not altered by the program and is used for no other purpose.
This leads to the conclusion that the overall program can be decomposed into two pieces, the root finder
and the debugging printout. To make the analysis of the rest of the program simpler, the diagnostic portion
can be removed from the text being considered. This involves removing statements numbered 016, 017, 029,
and 030 and modifying line 001 to remove the reference to IP.
The lines that have been removed are themselves analyzable. In fact, the job of producing the debugging
printout has been decomposed into two tasks. The first produces a header line, and the second prints out a
description of the interval upon every iteration of the loop.
3.2 Representation of Structured Control Flow in Fortran
Basili and Mills begin their analysis by examining the control flow of the program. In fact, the version of
FORTRAN used in this program has a limited set of control structures that forces programmers to use GOTO
statements to simulate the full range of structured programming constructs. In ZEROIN, for example, lines
010-012 implement a repeat-until loop, lines 031-037 serve as an if-then statement, and lines 050-068 are
an if-then-else. These lines are the result of representation decisions by the original developer. They can
be detected by straightforward analysis such as that typically performed by the flow analysis phase of a
compiler.
Another technique for expressing control flow is illustrated in this program. In several cases (lines 043-044,
048-049, and 077-078), an elaborate branch condition is broken up into two consecutive if statements, both
branching to the same place. Each pair could easily be replaced by a single if with multiple conditions, thus
further simplifying the control flow structure of the program at the expense of complicating the condition
being tested.
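For illustration, a hypothetical pair of conditions written in C in both forms (the second condition is an assumption, not taken from the listing):

#include <math.h>

/* Split form: two consecutive ifs branching to the same label. */
static int converged_split(double xm, double fb, double tol1)
{
    if (fabs(xm) <= tol1) goto done;   /* first test */
    if (fb == 0.0)        goto done;   /* second test, same destination */
    return 0;
done:
    return 1;
}

/* Merged form: a single if with a compound condition. */
static int converged_merged(double xm, double fb, double tol1)
{
    return fabs(xm) <= tol1 || fb == 0.0;
}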
3.3 Interleaving by Code Sharing
Further analysis of the control flow of the program indicates that lines 085 and 086 comprise the else part
of an if-then-else statement. Moreover, these lines are "branched into" from lines 048 and 049. The two
assignment statements are really being shared by two parts of the program. That is, two execution streams
are interleaved because they share common code. Although this makes the program somewhat shorter and
assures that both parts are updated if either is, it makes understanding the program structure more difficult.
In order to express the control flow more cleanly, it is necessary to construct a structured version. This
requires that the shared code be duplicated so that each of the sharing segments has its own version. If the
common statements were more elaborate, a subroutine could be introduced and called from both sites. As
it is, it is a simple matter to duplicate the two lines producing two properly formed conditional constructs.
3.4 Data Interleaving by Reusing Variable Names
An unfortunately common practice in programs is to use the same variable name for two unrelated purposes.
This naturally leads to confusion when trying to understand the program. It can be thought of as a kind
of interleaving where, instead of two separable segments of code being intertwined at one location in the
program, two aspects of the program state share the use of the same identifier. This occurs twice in ZEROIN,
with the identifiers TOL1 (in lines 011-012 and in the remainder of the program) and Q (on line 064 through
the right hand side of line 068 and the remainder of the program, including the left hand side of line 068).
Instances of this practice can be detected by data flow analysis.
3.5 Generalization of Interpolation Schemes
ZEROIN exhibits a situation where two sections of code use alternative approaches to compute the values
of the same set of variables. Both lines 057-059 and 064-068 are responsible for computing the values of the
variables P and Q. The determination of which approach to use is based on a test made on line 053.
This is an example of specialization. Both computations and the test can be replaced conceptually by
a more general expression that is responsible for computing P and Q based on the current values of the
variables A, B, C, FA, FB, FC, and XM. This has the further benefit of localizing the uses of the variables
S and R inside of the new expression.
There are really several design issues involved here. First, both code segments result from the decomposition
of the problem into pieces expressed by a series of assignment statements. Then, the realization
that both segments are specializations of a more general one allows the details of the individual cases to be
hidden away. This, in turn, makes the code shorter and easier to understand.
3.6 Variable Introduction
A common programming practice is to save the result of a computation in order to avoid having to recompute
the same value at a later time. If the computation is involved, this practice can result in a significant savings
at run time with a modest cost.
In ZEROIN, this practice has been used extensively. In particular, there has been a concerted effort to
save the results of calls to the user-supplied function, F, in the variables FA, FB, and FC. Because F may
be arbitrarily complex, this practice may be the most important determinant of the ultimate efficiency of
ZEROIN.
An examination of the program reveals that FA, FB, and FC always contain the results of applying F
at the points A, B, and C, respectively. From the point of view of understanding the algorithm, these three
additional variables do not provide a significant abstraction. On the contrary, they require a non-trivial effort
to understand and manipulate. Replacing them by their definitions makes the resulting program easier to
understand.
When readability is the goal, there are two factors to be weighed in deciding whether to write the program
using a variable name or replacing it by its value. On the one hand, each new variable places a burden on
the person trying to understand the program. The variable must be read and its purpose understood and
confirmed. On the positive side, variables can serve as valuable abbreviations for the computation that they
replace. It is easier to understand a variable with a carefully chosen mnemonic name than the complex
expression it represents. In the case of ZEROIN, the variables FA, FB, and FC provide little in the way of
abstraction. P and Q, on the other hand, abbreviate significant computations, albeit without the benefits of
mnemonic names. XM lies somewhere in the middle.
3.7 Generalization of Interval Computation
Now that the recognition of some intermediate decisions has clarified the structure of the program, the same
sort of observation can be made about lines 048-086. They have the function of assigning values to the
variables D and E based on the values of the variables A, B, C, D, E, F, TOL1, and XM. The fact that the
list of variables is so long indicates that this segment is highly interleaved with the rest of the program.
Nevertheless, it is of value to indicate that the only explicit effect of these lines of code is to set these two
variables.
It should also be noted that, as in Section 3.5, there are several instances of specialization. Lines 079-080
and 085-086 are selected based on the tests on lines 077-078. Likewise, lines 082 through 086 and the lines
between 050 and 081 are special cases selected on the basis of the tests on lines 048 and 049.
3.8 Program Architecture
Once the analysis described above has been performed, it is possible to appreciate the overall structure of
the program. Based on the test made on line 044, the program can be seen to use the variable B to hold
approximations of the root of the function. B is modified on lines 092 and 093 by either XM or D. The
sections on lines 025-028 and lines 031-037 act as adjustments that are made in special situations.
Another conclusion that is now apparent is that A gets its value only from B, while C gets its value only
from A. Thus A, C, and B serve as successively better approximations to the root. In fact, except under
special circumstances, A and C have identical values. Likewise, E normally has the same value as D. The
resulting architecture of the program is shown in Figure 2.
028      initialization
         loop
            conditional adjustment 1
043         if (close enough to final answer) exit
092         compute new value of B
            conditional adjustment 2
Figure 2: The recovered architecture of ZEROIN.
It is not sufficient to simply recognize design decisions in code. Once recognized, the decisions must be
organized in such a way that they can be effectively used by maintenance programmers and reuse engineers.
The organization chosen serves as a representation for design information.
There are numerous methods for designing software and numerous representations for the intermediate
results. Typically, several are used during the design of a program, some during the architectural stages
and others during low-level design. Still others may be used during the maintenance stage if the original
developers have given way to a separate maintenance staff. It may consequently be difficult to recreate and
reuse the original representation.
A usable representation for design information must be easy to construct during development and easy
to reconstruct during reverse engineering. Once constructed, it must facilitate queries and report generation
in order to support software maintenance activities. It must provide a mechanism for attaching available
documentation. Also, it must support automation. In particular, the representation must be formal enough
that its components can be automatically manipulated. For example, it is desirable to be able to determine
if a previously developed partial description of a software component is reusable in a new situation. A
representation for design information must allow all types of design information to be attached. This includes
high-level specifications, architectural overviews, detailed interfaces, and the resulting source code. It is
also desirable that the representation support requirements tracing, informal annotations, and versioning
information.
Several approaches to organizing design information have been proposed. Biggerstaff 7 is concerned with
relating code fragments to information from the problem domain. Software reuse will be facilitated if a new
problem's requirements can be easily matched against a description of existing software. He is building the
Desire system to explore his approach. Blackburn 8 is also concerned with reuse. He proposes a network of
design information where fragments are connected by one of two relationships, either "IS-DECOMPOSED-
INTO" (decomposition) or "IS-A" (specialization). Coleman and Gallimore report on FPD, a framework
for program development. 9 Arcs in their network model correspond to refinement steps taken during the
design. Each refinement engenders a proof obligation to guarantee the correctness of the step taken.
5 CONCLUSION
Software maintenance and reuse require of their practitioners a deep understanding of the software being
manipulated. That understanding is facilitated by the presence of design documentation. Effective documentation
should include a description of the structure of the software together with details about the
decisions which lead to that structure.
Design decisions occur where the abstract models and theories of an application domain confront the
realities of limited machines and imperfect programming languages. If the design decisions can be recon-
structed, then there is greater hope of being able to maintain and reuse the mountains of undocumented
software confronting us.
--R
"Mapping the Design Representation Terrain: A Survey,"
"A 15 Year Perspective on Automatic Programming,"
"Designing Documentation to Compensate for Delocalized Plans,"
"On the Criteria To Be Used in Decomposing Systems into Modules,"
"A Survey and Classification of Some Program Transformation Approaches and Techniques,"
"Understanding and Documenting Programs,"
"Design Recovery for Maintenance and Reuse,"
"Toward a Theory of Software Reuse Based on Formal Methods,"
"A Framework for Program Development,"
--TR
A survey and classification of some program transformation approaches and techniques
Designing documentation to compensate for delocalized plans
Design Recovery for Maintenance and Reuse
Computer Methods for Mathematical Computations
--CTR
Jorge L. Diaz-Herrera, The Importance of Static Structures in Software Construction, IEEE Software, v.10 n.3, p.75-87, May 1993
Erich Buss , John Henshaw, A software reverse engineering experience, Proceedings of the 1991 conference of the Centre for Advanced Studies on Collaborative research, October 28-30, 1991, Toronto, Ontario, Canada
Erich Buss , John Henshaw, Experiences in program understanding, Proceedings of the 1992 conference of the Centre for Advanced Studies on Collaborative research, November 09-12, 1992, Toronto, Ontario, Canada
Kamalakar Karlapalem , Qing Li , Chung-Dak Shum, HODFA: an architectural framework for homogenizing heterogeneous legacy databases, ACM SIGMOD Record, v.24 n.1, p.15-20, March 1995
Stephen B. Ornburn , Richard J. LeBlanc, Jr., Building, modifying and using component generators, Proceedings of the 15th international conference on Software Engineering, p.391-402, May 17-21, 1993, Baltimore, Maryland, United States
Joel Troster , John Henshaw , Erich Buss, Filtering for quality, Proceedings of the 1993 conference of the Centre for Advanced Studies on Collaborative research: software engineering, October 24-28, 1993, Toronto, Ontario, Canada
Julio Cesar Sampaio do Prado Leite, Working results on software re-engineering, ACM SIGSOFT Software Engineering Notes, v.21 n.2, p.39-44, March 1996
Nenad Marovac, Guidelines for embedded software documentation, ACM SIGSOFT Software Engineering Notes, v.19 n.2, p.22-28, April 1994
Spencer Rugaber, Cataloging design abstractions, Proceedings of the 2006 international workshop on Role of abstraction in software engineering, May 21-21, 2006, Shanghai, China
Forrest Shull , Filippo Lanubile , Victor R. Basili, Investigating Reading Techniques for Object-Oriented Framework Learning, IEEE Transactions on Software Engineering, v.26 n.11, p.1101-1118, November 2000
Carmen Zannier , Frank Maurer, A qualitative empirical evaluation of design decisions, ACM SIGSOFT Software Engineering Notes, v.30 n.4, July 2005
Carmen Zannier , Mike Chiasson , Frank Maurer, A model of design decision making based on empirical results of interviews with software designers, Information and Software Technology, v.49 n.6, p.637-653, June, 2007
Spencer Rugaber, The use of domain knowledge in program understanding, Annals of Software Engineering, v.9 n.1-4, p.143-192, 2000
Hausi A. Müller , Jens H. Jahnke , Dennis B. Smith , Margaret-Anne Storey , Scott R. Tilley , Kenny Wong, Reverse engineering: a roadmap, Proceedings of the Conference on The Future of Software Engineering, p.47-60, June 04-11, 2000, Limerick, Ireland
M. G. J. van den Brand , P. Klint , C. Verhoef, Reverse engineering and system renovation - an annotated bibliography, ACM SIGSOFT Software Engineering Notes, v.22 n.1, p.57-68, Jan. 1997 | interleaving;design decisions;reuse activities;design information;encapsulation;composition;specialization;representation;decomposition;generalization;programming constructs;programming;maintenance;software engineering |
626675 | Efficient Instruction Sequencing with Inline Target Insertion. | Inline target insertion, a specific compiler and pipeline implementation method for delayed branches with squashing, is defined. The method is shown to offer two important features not discovered in previous studies. First, branches inserted into branch slots are correctly executed. Second, the execution returns correctly from interrupts or exceptions with only one program counter. These two features result in better performance and less software/hardware complexity than conventional delayed branching mechanisms. | Introduction
The instruction sequencing mechanism of a processor determines the instructions to be fetched from
the memory system for execution. In the absence of branch instructions, the instruction sequencing
mechanism keeps requesting the next sequential instructions in the linear memory space. In this
This research has been supported by the National Science Foundation (NSF) under Grant MIP-8809478, Dr. Lee
Hoevel at NCR, the Joint Services Engineering Programs (JSEP) under Contract N00014-90-J-1270, the National
Aeronautics and Space Administration (NASA) under Contract NASA NAG 1-613 in cooperation with the Illinois
Computer laboratory for Aerospace Systems and Software (ICLASS), and the Office of Naval Research under Contract
N00014-88-K-0656.
W. W. Hwu is with the Department of Electrical and Computer Engineering, University of Illinois, Urbana-
Champaign, Illinois, 61801.
3 P. P. Chang is with the Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124. The work
presented in this paper was conducted while he was with the Department of Electrical and Computer Engineering,
University of Illinois, Urbana-Champaign, Illinois, 61801.
sequential mode, it is easy to maintain a steady supply of instructions for execution. Branch
instructions, however, disrupt the sequential mode of instruction sequencing. Without special
hardware and/or software support, branches can significantly reduce the performance of pipelined
processors by breaking the steady supply of instructions to the pipeline [26].
Many hardware methods for handling branches in pipelined processors have been studied
[39][28][9][29][17][10]. An important class of hardware methods, called Branch Target Buffers (or
Branch Target Caches), use buffering and extra logic to detect branches at an early stage of the
pipeline, predict the branch direction, fetch instructions according to the prediction, and nullify
the instructions fetched due to an incorrect prediction[28]. Branch Target Buffers have been
adopted by many commercial processors [28][16]. The performance of such hardware methods is
determined by their ability to detect the branches early and to predict the branch directions accu-
rately. High branch prediction accuracy, about 85-90% hit ratio, has been reported for hardware
methods[39][28][29]. Another advantage of using Branch Target Buffers is that they do not require
recompilation or binary translation of existing code. However, the hardware methods suffer
from the disadvantage of requiring a large amount of fast hardware to be effective[28][20]. Their
effectiveness is also sensitive to the frequency of context switching [28].
Compiler-assisted methods have also been proposed to handle branches in pipelined processors.
Table 1 lists three such methods. Delayed Branching has been a popular method to absorb branch
delay in microsequencers of microprogrammed microengines. This technique has also been adopted
by many recent processor architectures including IBM 801[37], Stanford MIPS[14], Berkeley RISC
[33], HP Spectrum [3], SUN SPARC [43], MIPS R2000 [25], Motorola 88000[30], and AMD 29000[1].
In this approach, instruction slots immediately after a branch are reserved as the delay slots for
that branch. The number of delay slots has to be large enough to cover the delay for evaluating the
branch direction. During compile-time, the delay slots following a branch are filled with instructions
that are independent of the branch direction, if the data and control dependencies allow such code
movement[13]. Regardless of the branch direction, these instructions in the delay slots are always
executed. McFarling and Hennessy reported that the first delay slot can be successfully filled by
the compiler for approximately 70% of the branches, and the second delay slot can be filled only
25% of the time[29]. It is clear that delayed branching is not effective for processors requiring more
than one slot.
Another compiler-assisted method, called Delayed Branches with Squashing, has been adopted
by some recent processors to complement delayed branching[29][15][8][30][23]. That is, the method
is used when the compiler cannot completely fill the delay slots for delayed branching. In this
scheme, the number of slots after each branch still has to be large enough to cover the branch
delay. However, instead of moving independent instructions into branch delay slots, the compiler
can fill the slots with the predicted successors of the branch. If the actual branch direction differs
from the prediction, the instructions in the branch slots are scratched (squashed or nullified) from
the pipeline.
On the least expensive side, the hardware predicts all conditional branches to be either always
taken (as in Stanford MIPS-X [8]) or always not-taken (as in Motorola 88000 [30]). Predicting all
the branches to be taken achieves about 65% accuracy whereas predicting not-taken achieves about 35% [11]. Predicting all the branches to be either taken or not taken limits the performance of
delayed branches with squashing. Furthermore, filling the branch slots for predicted-taken branches
requires code copying in general. Predicting all branches to be taken can result in a large amount
of code expansion.
McFarling and Hennessy proposed Profiled Delayed Branches with Squashing. In this scheme,
an execution profiler is used to collect the dynamic execution behavior of programs such as the
preferred direction of each branch[29]. The profile information is then used by a compile-time code
restructurer to predict the branch direction and to fill the branch slots according to the prediction.
In order to allow each branch to be predicted differently, an additional bit to indicate the predicted
direction is required in the branch opcode in general[23]. Through this bit, the compiler can
convey the prediction decision to the hardware. McFarling and Hennessy also suggested methods
for avoiding adding a prediction bit to the branch opcode. Using pipelines with one and two
branch slots, McFarling and Hennessy showed that the method can offer comparable performance
with hardware methods at a much lower hardware cost. They suggested that the stability of using
execution profile information in compile-time code restructuring should be further evaluated.
This paper examines the extension of McFarling and Hennessy's idea to processors employing
deep pipelining and multiple instruction issue. These techniques increase the number of slots for
each branch. As a result, four issues arise. First, there are only 3 to 5 instructions between branches
in the static program (see Section 4.2) . In order to fill a large number of slots (on the order of
ten), one must be able to insert branches into branch slots. Questions arise regarding the correct
execution of branches in branch slots. Second, the state information about all branch instructions
in the instruction pipeline becomes large. Brute force implementations of return from interrupts
and exceptions can involve saving/restoring a large amount of state information of the instruction
sequencing mechanism. Third, the code expansion due to code restructuring can be very large.
It is important to control such code expansion without sacrificing performance. Fourth, the time
penalty for refilling the instruction fetch pipeline due to each incorrectly predicted branch is large.
It is very important to show extensive empirical results on the performance and stability of using
profile information in compile-time code restructuring. The first three issues were not addressed by
McFarling and Hennessy [29]. The second issue was not addressed by previous studies of hardware
support for precise interrupt [18] [40].
In order to address these issues, we have specified a compiler and pipeline implementation
method for Delayed Branches with Squashing. We refer to this method as Inline Target Insertion
to reflect the fact that the compiler restructures the code by inserting predicted successors
of branches into their sequential locations. Based on the specification, we show that the method
exhibits desirable properties such as simple compiler and hardware implementation, clean inter-
rupt/exception return, moderate code expansion, and high instruction sequencing efficiency. We
also provide a proof that Inline Target Insertion is correct. Our correctness proof of filling branch
slots with branch instructions is also applicable to a previously proposed hardware scheme [34].
The paper is organized into five sections. Section 2 presents background and motivation for
Inline Target Insertion. Section 3 defines the compiler and pipeline implementation, proves the
correctness of the proposed implementation, and suggests a clean method to return from interrupt
and exception. Section 4 provides empirical results on code expansion control and instruction
sequencing efficiency. Section 5 offers concluding remarks regarding the cost-effectiveness and
applicability of Inline Target Insertion.
2 Background and Motivation
2.1 Branch Instructions
Branch instructions reflect the decisions made in the program algorithm. Figure 1(a) shows a C
program segment which finds the largest element of an array. There are two major decisions in the
algorithm. One decides if all the elements have been visited and the other decides if the current
element is larger than all the other ones visited so far.
With the register allocation/assignment assumption in Figure 1(b), a machine language program
can be generated as given in Figure 2. There are three branches in the machine language program.
Instruction D ensures that the looping condition is checked before the first iteration. Instruction I
checks if the loop should iterate any more. Instruction F determines if the current array element
is larger than all the others visited so far.
The simplified view of the machine language program in Figure 2 highlights the effect of
branches. Each arc corresponds to a branch where the head of an arc is the target instruction.
The percentage on each arc indicates the probability for the corresponding branch to occur in
execution. The percentages can be derived by program analysis and/or execution profiling. If
the percentage on an arc is greater than 50%, it corresponds to a likely branch. Otherwise, it
corresponds to an unlikely branch.
The instructions shown in Figure 2(a) are static instructions. These are the instructions generated
by the compilers and machine language programmers. During program execution, each static
instruction can be executed multiple times due to loops. Each time a static instruction is executed,
it generates a dynamic instruction. A dynamic branch instruction which redirects the instruction
fetch is called a taken branch.
2.2 Instruction Sequencing for Pipelined Processors
The problems with instruction sequencing for pipelined processors are due to the latency of decoding
and/or executing branches. A simple hardware example suffices to illustrate the problem
of instruction sequencing for pipelined processors. The processor shown in Figure 3 is divided
into four stages: instruction fetch (IF ), instruction decode (ID), instruction execution (EX), and
result write-back (WB). The instruction sequencing logic is implemented in the EX stage. The
sequencing pipeline consists of the IF , ID, and EX stages of the processor pipeline. When a
compare-and-branch 4 instruction is processed by the EX stage 5 , the instruction sequencing logic
determines the next instruction to fetch from the memory system based on the comparison result.
The dynamic pipeline behavior is illustrated by the timing diagram in Figure 4. The vertical
dimension gives the clock cycles and the horizontal dimension the pipeline stages. For each cycle,
the timing diagram indicates the pipeline stage in which each instruction can be found.
The pipeline fetches instructions sequentially from memory until a branch is encountered. In
Figure
4, the instructions to be executed are
the direction of branch I is not known until cycle 7. By this time instructions J and K have
already entered the pipeline. Therefore, in cycle 8 instruction E enters the pipeline while J and
K are scratched. The nonproductive cycles introduced by incorrectly fetching J and K reduce the
throughput of the pipeline.
2.3 Deep Pipelining and Multiple Instruction Issue
The rate of instruction execution is equal to the clock frequency times the number of instructions
executed per clock cycle. One way to improve the instruction execution rate is to increase the clock
frequency. The pipeline stages with the longest delay (critical paths) limit the clock frequency.
Therefore, subdividing these stages can potentially increase the clock frequency and improve the
overall performance. This adds stages in the pipeline and creates a deeper pipeline. For example,
if the instruction cache access and the instruction execution limit the clock frequency, subdividing
these stages may improve the clock frequency. A timing diagram of the resultant pipeline is shown in Figure 5. Now four instructions are scratched if a compare-and-branch redirects the instruction fetch. For example, I2, I3, I4, and I5 may be scratched if I1 redirects the instruction fetch.
Another method to improve instruction execution rate is to increase the number of instructions
executed per cycle. This is done by fetching, decoding, and executing multiple instructions per
cycle. This is often referred to as multiple instruction issue [44] [12] [27] [31] [32] [19] [35] [36]. The timing diagram of such a pipeline is shown in Figure 6. In this example, two
4 Although the compare-and-branch instructions are assumed in the example, the methods in this paper apply to
condition code branches as well.
5 Although unconditional branch instructions can redirect the instruction fetch at the ID stage, we ignore the
optimization in this example for simplicity.
instructions are fetched per cycle. When a compare-and-branch (I1) reaches the EX stage, five instructions may be scratched from the pipeline. 6
As far as instruction sequencing is concerned, multiple instruction issue has the same effect
as deep pipelining. They both result in an increased number of instructions which may be scratched
when a branch redirects the instruction fetch. 7 Combining deep pipelining and multiple instruction
issue will increase the number of instructions to be scratched to a relatively large number. For
example, the TANDEM Cyclone processor requires 14 branch slots due to deep pipeline and multiple
instruction issue[16]. 8 The discussions in this paper do not distinguish between deep pipelining and
multiple instruction issue; they are based on the number of instructions to be scratched by branches.
3 Inline Target Insertion
Inline Target Insertion consists of a compile-time code restructuring algorithm and a run-time
pipelined instruction fetch algorithm. The compile-time code restructuring algorithm transforms a
sequential program P s to a parallel program P p . Inline Target Insertion is correct if the instruction
sequence generated by executing P p on a pipelined instruction fetch unit is identical to that
generated by executing P s on a sequential instruction fetch unit. In this section, we first formally
define the sequential instruction fetch algorithm. Then, we formally define the code restructuring
algorithm and the pipelined instruction fetch algorithm of Inline Target Insertion. From the formal
models of implementation, we will derive a proof of correctness.
3.1 Sequential Instruction Fetch
In a sequential instruction fetch unit, Is(t) is defined as the dynamic instruction during cycle t. The address of Is(t) will be referred to as As(t). The target instruction of a branch instruction Is(t) will be referred to as target(Is(t)). The next sequential instruction of a branch instruction
6 The number of instructions to be scratched from the pipeline depends on the instruction alignment. If I2 rather
than I1 were a branch, four instructions would be scratched.
7 A difference between multiple instruction issue and deep pipelining is that multiple likely control transfer instructions
could be issued in one cycle. Handling multiple likely control transfer instructions per cycle in a multiple
instruction issue processor is not difficult in Inline Target Insertion. The details are not within the scope of this
paper.
8 The processor currently employs an extension to the instruction cache which approximates the effect of a Branch
Target Buffer to cope with the branch problem.
Is(t) will be referred to as fallthru(Is(t)). The sequential instruction fetch algorithm (SIF) is shown below.
Algorithm SIF begin
    if (Is(t) is a taken branch) then
        As(t + 1) := address of target(Is(t))
    else
        As(t + 1) := As(t) + 1
end
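A minimal C sketch of this sequential fetch rule is given below; the instruction encoding is an assumption made for illustration, not the paper's specification.

typedef struct {
    int is_branch;       /* 1 if this instruction is a branch */
    int taken;           /* 1 if the branch redirects the fetch this time */
    int target_address;  /* word address of target(I) */
} Instr;

/* Returns As(t+1) given the dynamic instruction Is(t) and its address As(t). */
static int sif_next_address(const Instr *is_t, int as_t)
{
    if (is_t->is_branch && is_t->taken)
        return is_t->target_address;   /* taken branch: fetch the target next */
    return as_t + 1;                   /* otherwise: next sequential word */
}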
The correct successors of a dynamic instruction Is(t) are defined as the dynamic instructions to be executed after Is(t) as specified by SIF. The k-th correct successor of Is(t) will be denoted as CS(Is(t), k). It should be noted that Is(t + k) = CS(Is(t), k). For a sequential program Ps whose execution starts from instruction I0, the instruction sequence is (I0, CS(I0, 1), CS(I0, 2), ..., CS(I0, n)), where CS(I0, n) is the first terminating instruction.
3.2 Compiler Implementation
The compiler implementation of Inline Target Insertion involves compile-time branch prediction
and code restructuring. Branch prediction marks each static branch as either likely or unlikely.
The prediction is based on the estimated probability for the branch to redirect instruction fetch
at the run time. The probability can be derived from program analysis and/or execution profiling.
The prediction is encoded in the branch instructions.
The predicted successors (PS) of an instruction I are the instructions which tend to execute after
I . The definition of predicted successors is complicated by the frequent occurrence of branches.
Let PS(I, k) refer to the k-th predicted successor of I. The predicted successors of an instruction
can be defined recursively:
9 In the discussions, all address arithmetic is in terms of instruction words. For example, address := address + 1 advances the address to the next instruction.
1. If I is a likely branch, then PS(I, 1) is target(I). Otherwise, PS(I, 1) is fallthru(I).
2. PS(I, k) is PS(PS(I, 1), k - 1).
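For illustration, a small C sketch of this recursive definition follows; the static-instruction representation (explicit target and fall-through links) is an assumption.

typedef struct StaticInstr {
    int is_likely_branch;            /* compile-time prediction: taken */
    struct StaticInstr *target;      /* target(I), meaningful for branches */
    struct StaticInstr *fallthru;    /* next sequential instruction */
} StaticInstr;

static StaticInstr *predicted_successor(StaticInstr *i, int k)
{
    while (k-- > 0 && i != NULL)     /* PS(I,k) = PS(PS(I,1), k-1) */
        i = i->is_likely_branch ? i->target : i->fallthru;
    return i;
}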
For example, one can identify the first few predicted successors of F in Figure 2: PS(F, 1) = H, PS(F, 2) = I, and PS(F, 3) = E. Since F is a likely branch, its first predicted successor is its target instruction H. The
second predicted successor of F is I , which is a likely branch itself. Thus the third predicted
successor of F is I's target instruction E.
The code restructuring algorithm for Inline Target Insertion is shown below. It is also illustrated by Figure 7.
Algorithm ITI(N) begin
1. Open N insertion slots after every likely branch. 10
2. For each likely branch I, adjust its target label from the address of PS(I, 1) to (the address of PS(I, 1)) + N.
3. For each likely branch I, copy its first N predicted successors (PS(I, 1), PS(I, 2), ..., PS(I, N)) into its slots. 11 If some of the inserted instructions are branches, make sure they branch to the same target after copying. 12
end
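For illustration only, the following C sketch shows the slot-filling step for a single likely branch on a flat instruction array; the encoding, the constant N_SLOTS, and the precomputed predicted successors are assumptions, and the address bookkeeping for the rest of the program is omitted.

#define N_SLOTS 2   /* number of insertion slots (N) assumed for the sketch */

typedef struct {
    int opcode;
    int is_likely_branch;
    int target;              /* word address of the branch target */
} Word;

/* code[b] is a likely branch; the slots code[b+1 .. b+N_SLOTS] were opened in
   step 1 and are filled here; ps[] holds copies of PS(I,1..N) computed on the
   original program. */
static void fill_slots(Word *code, int b, const Word ps[N_SLOTS])
{
    code[b].target += N_SLOTS;            /* step 2: adjust the target label */
    for (int k = 0; k < N_SLOTS; k++)     /* step 3: insert predicted successors */
        code[b + 1 + k] = ps[k];          /* copies keep their own targets */
}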
It is possible to extend the proofs to a non-uniform number of slots in the same pipeline. The details are not in
the scope of this paper.
This step can be performed iteratively. In the first iteration, the first predicted successors of all likely branches are
determined and inserted. Each subsequent iteration inserts one more predicted successor for all the likely branches.
It takes N iterations to insert all the target instructions to their assigned slots.
12 This is trivial if the code restructuring works on assembly code. In this case, the branch targets are specified as
labels. The assembler automatically generates the correct branch offset for the inserted branches.
The goal of ITI is to ensure that all original instructions find their predicted successors in the
next sequential locations. This is achieved by inserting the predicted successors of likely branches
into their next sequential locations.
We refer to the slots opened by the ITI Algorithm as insertion slots instead of more traditional
terms such as delay slots or squashing delay slots. The insertion slots are only associated with likely
branches. The instructions in the insertion slots are duplicate copies. All the others are original.
This is different from what the terms delay slots and squashing delay slots usually mean. They often
refer to sequential locations after both likely and unlikely branches, which can contain original as
well as duplicate copies.
Figure 8 illustrates the application of ITI to a part of the machine program in Figure 2. Step 1 opens two insertion slots for the likely branches F and I. Step 2 adjusts the branch labels so that F branches to H + 2 and I branches to E + 2. Step 3 copies the predicted successors of F (H and I) and of I (E and F) into the insertion slots of F (H' and I') and I (E' and F'). Note that the offsets are adjusted so that I' and F' branch to the same target instructions as I and F. The reader is encouraged to apply ITI to the code for more insights into the algorithm.
With Inline Target Insertion, each instruction may be duplicated into multiple locations. Therefore, the same instruction may be fetched from one of several locations. The original address, Ao(I), of a dynamic instruction I is the address of the original copy of I. The fetch address, Af(I), of a dynamic instruction I is the address from which I was fetched. In Figure 8, the original address of both I and I' is the address of I. The fetch addresses of I and I' are their individual addresses. It should be noted that ITI moves fallthru(I) of a likely branch I to Ao(I) + N + 1, which is an original address.
3.3 Sequencing Pipeline Implementation
The sequencing pipeline is divided into N +1 stages. The sequencing pipeline processes all instructions
in their fetch order. If any instruction is delayed due to a condition in the sequencing pipeline
(e.g. instruction cache miss), all the other instructions in the sequencing pipeline are delayed. This
includes the instructions ahead of the one being delayed. The net effect is that the entire sequencing
pipeline freezes. This ensures that the relative pipeline timing among instructions is accurately exposed
to the compiler. It guarantees that when a branch redirects instruction fetch, all instructions
in its insertion slots have entered the sequencing pipeline. Note that this restriction only applies to
the instructions in the sequencing pipeline, the instructions in the execution pipelines (e.g., data
memory access and floating point evaluation) can still proceed while the instruction sequencing
pipeline freezes.
The definition of time in instruction sequencing separates the freeze cycles from execution cycles.
Freeze cycles do not affect the relative timing among instructions in the sequencing pipeline. In
this paper, cycle t refers to the t th cycle of program execution excluding the freeze cycles. I(k; t) is
defined as the dynamic instruction at the k th stage of the sequencing pipeline during cycle t. The
implementation keeps an array of fetch addresses for all the instructions in the sequencing pipeline.
The fetch address for the instruction at stage i in cycle t will be referred to as Af(I(i, t)).
A hardware function REF ILL 13 is provided to reload the instruction fetch pipeline from any
original address. REF ILL is called when there is a program startup, an incorrect branch prediction,
or a return from interrupt/exception. It is easy to guarantee that the program startup address is
an original address. We will show in the next subsection that the appropriate original address for
a program to resume after incorrect branch prediction and interrupt/exception handling is always
available.
REFILL(pc) begin
    scratch I(1..N, t) from the sequencing pipeline
    Af(I(1, t + 1)) := pc
end
The pipelined instruction fetch algorithm (PIF ) that is implemented in hardware is shown
below. The sequencing pipeline fetches instructions sequentially by default. Each branch can
13 REFILL is excluded from the accounting of time when proving correctness of Inline Target Insertion. REFILL
may be physically implemented as loading an initial address into Af(I(1, t)) and subsequently computing Af(I(1, t + 1)) := Af(I(1, t)) + 1. REFILL is included in the accounting of time when evaluating the
performance of Inline Target Insertion (Section 4).
redirect the instruction fetch and/or scratch the subsequent instructions when it reaches the end
of the sequencing pipeline. If a branch redirects the instruction fetch, the next fetch address is the
adjusted target address determined in Algorithm IT I . If the decision of a branch is incorrectly
predicted, it scratches all the subsequent instructions from the sequencing pipeline.
Algorithm PIF(N) begin
    if (I(N + 1, t) is not a branch) then
        Af(I(1, t + 1)) := Af(I(1, t)) + 1
    else if (I(N + 1, t) is likely and is taken) then
        Af(I(1, t + 1)) := the target address of I(N + 1, t) (adjusted by ITI)
    else if (I(N + 1, t) is unlikely and is not taken) then
        Af(I(1, t + 1)) := Af(I(1, t)) + 1
    else if (I(N + 1, t) is unlikely but is taken) then
        REFILL(the target address of I(N + 1, t))
    else if (I(N + 1, t) is likely but is not taken) then
        REFILL(Af(I(1, t)) + 1)
end
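A hedged C sketch of one application of this fetch rule is given below; the instruction encoding and the refill routine are assumptions made for illustration rather than part of the paper's specification.

typedef struct {
    int is_branch;
    int likely;          /* compile-time prediction: taken */
    int taken;           /* actual outcome, resolved at stage N+1 */
    int target_address;  /* target label (adjusted by ITI for likely branches) */
} ResolvedInstr;

/* REFILL: squash the pipeline contents and restart sequential fetch at an
   original address (sketch only). */
static void refill(int *fetch_addr, int original_address)
{
    *fetch_addr = original_address;
}

/* One application of PIF: given the instruction leaving the pipeline and
   Af(I(1,t)) in *fetch_addr, compute Af(I(1,t+1)). */
static void pif_step(const ResolvedInstr *b, int *fetch_addr)
{
    if (!b->is_branch || (!b->likely && !b->taken))
        *fetch_addr += 1;                       /* correctly predicted: sequential */
    else if (b->likely && b->taken)
        *fetch_addr = b->target_address;        /* correctly predicted taken branch */
    else if (!b->likely && b->taken)
        refill(fetch_addr, b->target_address);  /* misprediction: go to the target */
    else
        refill(fetch_addr, *fetch_addr + 1);    /* mispredicted likely branch:
                                                   resume at the original fall-through */
}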
Figure 9(a) shows a timing diagram for executing the instruction sequence (F → H → I → E) of the machine program in Figure 8(a). With Inline Target Insertion (Figure 8(e)), the instruction sequence becomes (F → H' → I' → E'). In this case, the branch decision for F is predicted correctly at compile time. When F reaches the EX stage in cycle 4, no instruction is scratched from the pipeline. Since F redirects the instruction fetch, the instruction to be fetched by the IF stage in cycle 5 is E' (the adjusted target of F) rather than the next sequential instruction G.
Figure 9(b) shows a similar timing diagram for executing the instruction sequence (F → G). With Inline Target Insertion, the instruction fetch sequence becomes (F → H' → I' → G). In this case, the branch decision for F is predicted incorrectly at compile time. When F reaches the EX stage in cycle 4, instructions H' and I' are scratched from the pipeline. Since F does not redirect the instruction fetch, the instruction fetch pipeline is refilled from the next sequential instruction G.
3.4 Correctness of Implementation
Branches are the central issue of Inline Target Insertion. Without branches, the sequencing
pipeline would simply fetch instructions sequentially. The instructions emerging from the sequencing
pipeline would be the correct sequence. Therefore, the correctness proofs of the compiler and
pipeline implementation will focus on the correct execution of branches. For pipelines with many
slots, it is highly probable to have branches inserted into insertion slots (see Section 4.2). In the
case where there are no branches in insertion slots, the correctness follows from the description
of the ITI Algorithm. All branch instructions would be original and they would have their first
predicted successors in the next N sequential locations. Whereas a branch instruction in an
insertion slot cannot have all its N predicted successors in the next N sequential locations. For
example, in Figure 8(e), questions arise regarding the correct execution of F'. When F' redirects the instruction fetch, how do we know that the resulting instruction sequence is always equivalent to the correct sequence that F itself would produce? Inline Target Insertion is correct if the instruction sequence that is generated by (PIF, Pp) is (I0, CS(I0, 1), CS(I0, 2), ..., CS(I0, n)), where CS(I0, n) is the first stop instruction. We shall prove that the instruction sequence that is issued by (PIF, Pp) is identical to that issued by (SIF, Ps).
Unfortunately, it is difficult to compare the output of PIF and SIF on a step
by step basis. We will first identify sufficient conditions for (PIF , P p ) to generate the same
instruction sequence as (SIF , P s ), and then show that these conditions are guaranteed by Inline
Target Insertion.
To help the reader to read the following lemmas and theorems, we list important terms in Table 2. We define two equality relations on the state variables of the instruction fetch pipeline: R(t) states that I(i, t) = PS(I(N + 1, t), N + 1 - i) for i = 1, ..., N, and S(t) states that Af(I(1, t)) = Ao(I(N + 1, t)) + N.
Theorem 1 states that these two equality relations are sufficient to ensure the correctness of
Inline Target Insertion.
Theorem 1 If R(t) and S(t) are true for all t, then I(N + 1, t + 1) = CS(I(N + 1, t), 1) for all t; that is, the sequence of instructions emerging from stage N + 1 of the sequencing pipeline is the correct instruction sequence.
Proof: The theorem can be proved by induction on t.
Induction basis: From the definition of REFILL at program startup, the first instruction to reach stage N + 1 is I0, the entry point of the program.
Induction step: Assuming the claim P(t) holds through cycle t, show that it holds for cycle t + 1; it suffices to show that I(N + 1, t + 1) = CS(I(N + 1, t), 1).
Case 1: I(N + 1, t) is not an incorrectly predicted branch.
According to PIF, I(N + 1, t + 1) = I(N, t). R(t) implies that I(N, t) = PS(I(N + 1, t), 1). For a correctly predicted instruction I(N + 1, t), PS(I(N + 1, t), 1) is equal to CS(I(N + 1, t), 1). Hence, I(N + 1, t + 1) = CS(I(N + 1, t), 1).
Case 2: I(N + 1, t) is unlikely but is taken.
PIF performs REFILL with the target address of I(N + 1, t), which is Ao(target(I(N + 1, t))), as its argument at t. According to the definition of REFILL, I(N + 1, t + 1) is the instruction at Ao(target(I(N + 1, t))), which is CS(I(N + 1, t), 1). Hence, I(N + 1, t + 1) = CS(I(N + 1, t), 1).
Case 3: I(N + 1, t) is likely but is not taken.
PIF performs REFILL(Af(I(1, t)) + 1) at t. According to the definition of REFILL and S(t), I(N + 1, t + 1) is the instruction at Ao(I(N + 1, t)) + N + 1. Because I(N + 1, t) is a likely branch, ITI allocates N insertion slots after Ao(I(N + 1, t)), so the original copy of fallthru(I(N + 1, t)) is at Ao(I(N + 1, t)) + N + 1. 14 Because I(N + 1, t) is not taken, CS(I(N + 1, t), 1) = fallthru(I(N + 1, t)). Hence, I(N + 1, t + 1) = CS(I(N + 1, t), 1).
14 It should be noted that, if I(N + 1, t) is a likely branch, the original copy of fallthru(I(N + 1, t)) is always at Ao(I(N + 1, t)) + N + 1 according to ITI. Therefore, Ao(I(N + 1, t)) + N + 1 is a legal argument for REFILL.
Theorem 1 shows that R(t) and S(t) are sufficient to ensure correct execution. Therefore, we
formulate the next theorem as the ultimate correctness proof of Inline Target Insertion.
Theorem 2 ITI and PIF ensure that R(t) and S(t) are true for all t.
Theorem 2 has a standard induction proof. We start by proving that R(0) and S(0) are true.
Then we show that, if R(t) and S(t) are true, R(t+1) and S(t+1) are also true. Because PIF and
IT I are complex algorithms, we need to consider several cases in each step of the proof. Instead
of presenting the proof as a whole, we will first present several lemmas, from which the proof of
Theorem 2 naturally follows.
Lemma 1 If REFILL is performed at time t with an original address as its argument, then R(t + 1) and S(t + 1) are true.
Proof:
ITI ensures that the original instructions find their N predicted successors in their next N sequential addresses. R(t + 1) therefore follows from the definition of REFILL, and S(t + 1) is likewise implied by the definition of REFILL.
Lemma 1 shows that refilling the instruction fetch pipeline from an original address ensures that R(t + 1) and S(t + 1) are true. The instruction sequencing pipeline is initialized by REFILL(Ao(I0)), where I0 is the entry point of a program. It follows from Lemma 1 that R(0) and S(0) are true.
We proceed to prove that, if R(t) and S(t) are true, S(t + 1) is also true. We first prove this for the case when I(N + 1, t + 1) is fetched from its original address, and then for the case when it is fetched from one of its duplicate addresses.
Lemma 2 If R(t) and S(t) are true and Af(I(N + 1, t + 1)) = Ao(I(N + 1, t + 1)), then S(t + 1) is also true.
Proof:
Because I(N + 1, t + 1) is fetched from its original address, I(N + 1, t) cannot be a likely branch. We need to consider only the following two cases.
Case 1: I(N + 1, t) is not a branch or is an unlikely branch which is not taken.
PIF performs Af(I(1, t + 1)) := Af(I(1, t)) + 1. Adding 1 to both sides of S(t) results in Af(I(1, t + 1)) = Ao(I(N + 1, t)) + N + 1. Because ITI allocates insertion slots only for likely branches and I(N + 1, t) is not a likely branch, the original addresses of I(N + 1, t) and I(N + 1, t + 1) must be adjacent to each other. In other words, Ao(I(N + 1, t + 1)) = Ao(I(N + 1, t)) + 1. Hence Af(I(1, t + 1)) = Ao(I(N + 1, t + 1)) + N, and S(t + 1) is true.
Case 2: I(N + 1, t) is an unlikely branch but is taken.
PIF performs REFILL(Ao(target(I(N + 1, t)))) at t. Correctness of S(t + 1) follows from Lemma 1. Note that Ao(target(I(N + 1, t))) is an original (and therefore legal) address for REFILL.
The case where I(N + 1, t + 1) is fetched from an insertion slot is fairly difficult to prove. We
will first prove an intermediate lemma.
Lemma 3 If Af(I(N + 1, t + 1)) is not equal to Ao(I(N + 1, t + 1)), then there must be a k that satisfies all the following four conditions.
(1) 1 <= k <= N.
(2) I(N + 1, t + 1 - k) is a likely branch.
(3) There can be no likely branches between I(N + 1, t + 2 - k) and I(N + 1, t) inclusively.
(4) There is no incorrectly predicted branch between I(N + 1, t + 1 - k) and I(N + 1, t) inclusively.
Proof:
Because I(N + 1, t + 1) is not fetched from its original address, it must be fetched from an insertion slot. Therefore, there must be at least one likely branch among the N instructions fetched before I(N + 1, t + 1). The one that is fetched closest to I(N + 1, t + 1) satisfies (1), (2), and (3).
We can prove (4) by contradiction. Assume that there was an incorrectly predicted branch between I(N + 1, t + 1 - k) and I(N + 1, t) inclusively. Then, a REFILL was performed after it reached the end of the sequencing pipeline, at an original address. Because there was no likely branch between it and I(N + 1, t + 1) inclusively, I(N + 1, t + 1) must be fetched from its original address. This is a contradiction to the precondition of this Lemma: Af(I(N + 1, t + 1)) is not equal to Ao(I(N + 1, t + 1)). Therefore, our assumption that there was an incorrectly predicted branch between I(N + 1, t + 1 - k) and I(N + 1, t) cannot be true.
Lemma 4 If Af(I(N + 1, t + 1)) is not equal to Ao(I(N + 1, t + 1)) and R(t) and S(t) are true, then S(t + 1) is also true.
Proof:
We will use the k found in Lemma 3.
Case 1:
is a likely branch. In this case, P IF performs A f
implies that I(N; 1). Because PIF performs A f
and A
Case 2:
(1) Because I(N likely branch, PIF performed A f
A
(2) Because likely branch, I(N;
A
(3) Because there was no likely branch between I(N inclusively,
From (1), (2) and (3), A f
(5) Because there was no likely branch between I(N inclusively,
A
From (4) and (5), A f could be included in Case 2 of the proof. We separate the two cases to make the proof more clear.
Lemma 2 and Lemma 4 collectively ensure that, if R(t) and S(t) are true, then S(t + 1) is also true. We proceed to show that R(t + 1) is also true.
Lemma 5 If R(t), S(t), and S(t + 1) are true, then R(t + 1) is also true.
Proof:
Case 1: I(N + 1, t) is an incorrectly predicted branch.
For this case, PIF performs a REFILL. Lemma 1 ensures that R(t + 1) is true after a REFILL from an original address.
It remains to be shown that the argument to REFILL is an original address. If I(N + 1, t) is an unlikely branch, the argument to REFILL is Ao(target(I(N + 1, t))), which is an original address. If I(N + 1, t) is a likely branch, the argument to REFILL is Af(I(1, t)) + 1. According to S(t), Af(I(1, t)) + 1 = Ao(I(N + 1, t)) + N + 1, and because I(N + 1, t) is a likely branch, ITI ensures that Ao(I(N + 1, t)) + N + 1 is an original address.
Case 2: I(N + 1, t) is not an incorrectly predicted branch.
(1) From Lemma 2 and Lemma 4, A f
(2) According to IT I, an original instruction can find its predicted successors in the next
sequential instructions. Therefore, must be PS(I(N to be placed in
A
(3) Because I(N is not an incorrectly predicted branch, P IF performs "f or
1::N do A f
t))". Therefore, R(t) implies that I(i;
From (2) and (3), R(t + 1) is true.
Proof of Theorem 2: By induction on t. It follows from Lemma 1 that R(0) and S(0) are true.
From Lemma 2, Lemma 4, and Lemma 5, if R(t) and S(t) are true, then R(t + 1) and S(t + 1) are also true.
3.5 Interrupt/Exception Return
The problem of interrupt/exception return arises when interrupts and exceptions occur to instructions
in insertion slots. For example, assume that the execution of code in Figure 8(e) involves
the instruction sequence F → H' → I' → E'. Branch F is correctly predicted to be taken. The question is, if H' caused a page fault, how much instruction sequencing information must be saved so that the process can resume properly after the page fault is handled? If one saved only the address of H', the information about F being taken is lost. Since H' is not a branch, the hardware would assume that I' was to be executed after H'. Since I' is a likely branch and is taken, the hardware would incorrectly assume that G and H resided in the insertion slots of I'. The instruction execution sequence would become H → I → G → H, which is incorrect.
The problem is that resuming execution from H' violated the restriction that an empty sequencing pipeline always starts fetching from an original instruction. The hardware does not have the information that H' was in the first branch slot of F and that F was taken before the page fault occurred. Because interrupts and exceptions can occur to instructions in all insertion slots
of a branch and there can be many likely branches in the slots, the problem cannot be solved by
simply remembering the branch decision for one previous branch.
A popular solution to this problem is to save all the previous N fetch addresses plus the fetch
address of the re-entry instruction. During exception return, all the N + 1 addresses will be
used to reload their corresponding instructions to restore the instruction sequencing state to before
the exception. The disadvantage of this solution is that it increases the number of states in the
pipeline control logic and can therefore slow down the circuit. The problem becomes more severe
for pipelines with a large number of slots.
In Inline Target Insertion, interrupt/exception return to an instruction I is correctly performed by REFILL(Ao(I)); Ao(I) is available in the form of Af(I(1, t)) - N when I reaches the end of the sequencing pipeline (Theorem 2). One can record the original addresses when delivering instructions to the execution units. This guarantees that the original addresses of all instructions active in the execution units are available. Therefore, when an interrupt/exception occurs to an instruction, the processor can save the original address of that instruction as the return address. Lemma 1 ensures that R(t + 1) and S(t + 1) are true after REFILL from an original address.
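For illustration, a minimal C sketch of this return mechanism follows; refill_from stands in for the hardware REFILL function and is assumed, not defined here.

extern void refill_from(int original_address);   /* hardware REFILL, assumed */

static int saved_epc;   /* the only sequencing state that must be saved */

static void on_exception(int faulting_original_address)
{
    saved_epc = faulting_original_address;   /* save Ao(I) of the faulting instruction */
}

static void on_exception_return(void)
{
    refill_from(saved_epc);   /* resume fetch at the original copy of I */
}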
Figure 10 shows the effect of an exception on the sequencing pipeline. Figure 10(a) shows the timing of the correct instruction sequence (F → H' → I' → E') of Figure 8(e) without exception. Figure 10(b) shows the timing with an exception to H'. When H' reaches the end of the sequencing pipeline (EX stage) at t, its Ao is available in the form of Af(I(1, t)) - 2. This address will be maintained by the hardware until H' finishes execution. 16 When an exception is detected, Ao(H') is saved as the return address. During exception return, the sequencing pipeline resumes instruction fetch from H, the original copy of H'. Note that the instruction sequence produced is H → I → E → F, which is equivalent to the one without exception.
Note that the original copies must be preserved to guarantee clean implementation of inter-
rupt/exception return. In Figure 8(e), if normal control transfers always enter the section at
there is an opportunity to remove E and F after Inline Target Insertion to reduce code size. How-
ever, this would prevent clean interrupt/exception return if one occurs to E 0 or F 0 . Section 4.2
presents an alternative approach to reducing code expansion.
3.6 Extension to Out-of-order Execution
Inline Target Insertion can be extended to handle instruction sequencing for out-of-order execution
machines [46] [47] [45] [18] [19] [41] . The major instruction sequencing problem for out-of-order execution
machines is the indeterminate timing of deriving branching conditions and target addresses.
It is not feasible in general to design an efficient sequencing pipeline where branches always have
their conditions and target addresses at the end of the sequencing pipeline. To allow efficient
out-of-order execution, the sequencing pipeline must allow the subsequent instructions to proceed
whenever possible.
To make Inline Target Insertion and its correctness proofs applicable to out-of-order execution
machines, the following changes should be made to the pipeline implementation.
1. The sequencing pipeline is designed to be long enough to identify the target addresses for
program-counter-relative branches and for those whose target addresses can be derived without
interlocking.
2. When a branch reaches the end of the sequencing pipeline, the following conditions may occur:
(Footnote 16: The real original address does not have to be calculated until an exception is detected. One can simply save
A f (I(1, t)) and calculate the corresponding original address only when an exception actually occurs. This avoids requiring an extra
subtractor in the sequencing pipeline.)
(a) The branch is a likely one and its target address is not available yet. In this case, the
sequencing pipeline freezes until the interlock is resolved.
(b) The branch is an unlikely one and its target address is not yet available. In this case, the
sequencing pipeline proceeds with the subsequent instructions. Extra hardware must be
added to secure the target address when it becomes available to recover from incorrect
branch prediction. The execution pipeline must also be able to cancel the effects of the
subsequent instructions emerging from the sequencing pipeline for the same reason.
(c) The branch condition is not yet available. In this case, the sequencing pipeline proceeds
with the subsequent instructions. Extra hardware must be added to secure the repair
address to recover from incorrect branch prediction. The execution pipeline must be
able to cancel the effects of the subsequent instructions emerging from the sequencing
pipeline for the same reason.
If a branch is program counter relative, both the predicted and alternative addresses are available
at the end of the sequencing pipeline. The only difference from the original sequencing pipeline model
is that the condition might be derived later. Since the hardware secures the alternative address, the
sequencing state can be properly recovered from incorrectly predicted branches. If the branch target
address is derived from run-time data, the target address of a likely branch may be unavailable
at the end of the sequencing pipeline. Freezing the sequencing pipeline in the above specification
ensures that all theorems hold for this case. As for unlikely branches, the target address is the
alternative address. The sequencing pipeline can proceed as long as the alternative address is
secured when it becomes available. Therefore, all the proofs above hold for out-of-order execution
machines.
4 Experimentation
The code expansion cost and instruction sequencing efficiency of Inline Target Insertion can only be
evaluated empirically. This section reports experimental results based on a set of production quality
software from UNIX 17 and CAD domains. The purpose is to show that Inline Target Insertion is
17 UNIX is a trademark of AT&T.
an effective method for achieving high instruction sequencing efficiency for pipelined processors.
All the experiments are based on an instruction set architecture which closely resembles the MIPS
R2000/3000[25] with modifications to accommodate Inline Target Insertion. The IMPACT-I C
Compiler, an optimizing C compiler developed for deep pipelining and multiple instruction issue
at the University of Illinois, is used to generate code for all the experiments [4][21][6][7].
4.1 The Benchmark
Table
3 presents the benchmarks chosen for this experiment. The C lines column describes the
size of the benchmark programs in number of lines of C code (not counting comments). The runs
column shows the number of inputs used to generate the profile databases and the performance
measurement. The input description column briefly describes the nature of the inputs for the
benchmarks. The inputs are realistic and representative of typical uses of the benchmarks. For
example, the grammars for a C compiler and for a LISP interpreter are two of ten realistic inputs
for bison and yacc. Twenty files of several production quality C programs, ranging from 100 to
3000 lines, are inputs to the cccp program. All the twenty original benchmark inputs form the input
to espresso. The experimental results will be reported based on the mean and sample deviation
of all program and input combinations shown in Table 3. The use of many different real inputs to
each program is intended to verify the stability of Inline Target Insertion using profile information.
The IMPACT-I compiler automatically applies trace selection and placement, and has removed
unnecessary unconditional branches via code restructuring [4][6].
4.2 Code Expansion
The problem of code expansion has to do with the frequent occurrence of branches in programs.
Inserting target instructions for a branch adds N instructions to the static program. In Figure 8,
target insertion for F and I increases the size of the loop from 5 to 9 instructions. In general, if Q is
the probability for static instructions to be likely branches (about 18% among all the benchmarks),
Inline Target Insertion can potentially increase the code size by N * Q (180% for N = 10).
(Footnote: One may argue that the originals of the inserted instructions may be deleted to save space if the flow of control
allows. We have shown, however, that preserving the originals is crucial to the clean return from exceptions in insertion
slots (see Section 3.5).)
Because large code expansion can significantly reduce the efficiency of hierarchical
memory systems, the problem of code expansion must be addressed for pipelines with a large
number of slots.
Table
4 shows the static control transfer characteristics of the benchmarks. The static cond.
(static uncond.) column gives the percentage of conditional (unconditional) branches among all
the static instructions in the programs. The numbers presented in Table 4 confirm that branches
appear frequently in static programs. This confirms the need for being able to insert branches in
the insertion slots (see Section 3.4). The high percentage of branches suggests that code expansion
must be carefully controlled for these benchmarks.
A simple solution is to reduce the number of likely branches in static programs using a threshold
method. A conditional branch that executes fewer times than a threshold value is
automatically converted into an unlikely branch. An unconditional branch instruction that executes
fewer times than the threshold value can also be converted into an unlikely branch whose
branch condition is always satisfied. The method reduces the number of likely branches at the
cost of some performance degradation. A similar idea has been implemented in the IBM Second
Generation RISC Architecture[2].
For example, suppose there are two likely branches A and B in a program. A is executed 100 times
and it redirects the instruction fetch 95 times. B is executed 5 times and it redirects the instruction
fetch 4 times. Marking both A and B as likely branches achieves correct branch prediction 99 (95+4)
times out of a total of 105 (100+5). The code size increases by 2N. Since B is not executed
nearly as frequently as A, one can mark B as an unlikely branch. In this case, the accuracy of
branch prediction is reduced to 96 (95+1) times out of 105, but the code size only increases by
N. Therefore, a large saving in code expansion can be achieved at the cost of a small loss in
performance.
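To make the tradeoff concrete, the following Python sketch evaluates the threshold method on a hypothetical per-branch profile of (execution count, redirect count) pairs; the two entries reproduce branches A and B above, and the threshold parameter is an illustrative choice, not a value taken from the paper.

# Sketch of the threshold method for code expansion control (Section 4.2).
# Profile entries are hypothetical: (executions, times_fetch_redirected) per branch.
def evaluate(profile, N, threshold):
    correct = total = likely = 0
    for execs, taken in profile:
        total += execs
        if execs >= threshold:          # keep as a likely branch
            likely += 1
            correct += taken            # correct when it redirects the fetch
        else:                           # convert to an unlikely branch
            correct += execs - taken    # correct when it falls through
    extra_instructions = likely * N     # N inserted slots per likely branch
    return correct / total, extra_instructions

profile = [(100, 95), (5, 4)]           # branches A and B from the example
for threshold in (0, 10):
    acc, growth = evaluate(profile, N=2, threshold=threshold)
    print(f"threshold={threshold}: accuracy={acc:.3f}, inserted instructions={growth}")
# threshold=0  -> accuracy 0.943 (99/105), 4 inserted instructions
# threshold=10 -> accuracy 0.914 (96/105), 2 inserted instructions

The same routine can be swept over threshold values to trade prediction accuracy against inserted instructions.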
The idea is that all static likely branches cause the same amount of code expansion, but their
execution frequencies may vary widely. Therefore, reversing the prediction for the infrequently
executed likely branches reduces code expansion at the cost of a slight loss of prediction accuracy.
This is confirmed by results shown in Table 5. The threshold column specifies the minimum
dynamic execution count per run, below which, likely branches are converted to unlikely branches.
The E[Q] column lists the mean percentage of likely branches among all instructions and the SD[Q]
column indicates the sample deviations. The code expansion for a pipeline with N slots is N E[Q].
For example, for N = 2 with a threshold value of 100, one can expect a 2.2% increase in the static
code size. Without code expansion control (threshold=0), the static code size increase would be
36.2% for the same sequencing pipeline. For another example, for an 11-stage sequencing pipeline
(N = 10) with a threshold value of 100, one can expect about an 11% increase in the static code size. Without
code expansion control (threshold=0), the static code size increase would be 181% for the
same sequencing pipeline. Note that the results are based on control intensive programs. The code
expansion cost should be much lower for programs with simple control structures such as scientific
applications.
4.3 Instruction Sequencing Efficiency
The problem of instruction sequencing efficiency is concerned with the total number of dynamic
instructions squashed from the pipeline due to all dynamic branches. Since all insertion slots are
inserted with predicted successors, the cost of instruction sequencing is a function of only N and
the branch prediction accuracy. The key issue is whether the accuracy of compile-time branch
prediction is high enough to ensure that the instruction sequencing efficiency remains high for large
values of N .
Evaluating the instruction sequencing efficiency with Inline Target Insertion is straightforward.
One can profile the program to find the frequency for the dynamic instances of each branch to go
in one of the possible directions. Once a branch is predicted to go in one direction, the frequency
for the branch to go in other directions contributes to the frequency of incorrect prediction. Note
that only the correct dynamic instructions reach the end of the sequencing pipeline where branches
are executed. Therefore, the frequency of executing incorrectly predicted branches is not affected
by Inline Target Insertion.
In Figure 11(a), the execution frequencies of F and I are both 100. F and I redirect the
instruction fetch 80 and 99 times respectively. By marking F and I as likely branches, we predict
them correctly for 179 times out of 200. That is, 21 dynamic branches will be incorrectly predicted.
Since each incorrectly predicted dynamic branch creates N nonproductive cycles in the sequencing
pipeline, we know that the instruction sequencing cost is 21*N. Note that this number is not
changed by Inline Target Insertion. Figure 11(b) shows the code generated by ITI(2). Although
we do not know exactly how many times F and F 0 were executed respectively, we know that their
total execution count is 100. We also know that the total number of incorrect predictions for F
and F 0 is 20. Therefore, the instruction sequencing cost of Figure 11(b) can be derived from the
count of incorrect prediction in Figure 11(a) multiplied by N .
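As a sanity check on this argument, a short sketch that derives the sequencing cost directly from the (hypothetical) profile counts of Figure 11(a); the counts below are the ones quoted in the example.

# Sequencing cost = (number of incorrectly predicted dynamic branches) * N.
# F and I each execute 100 times and redirect the fetch 80 and 99 times, respectively.
branches = {"F": (100, 80), "I": (100, 99)}

def sequencing_cost(branches, N):
    mispredicted = sum(execs - taken for execs, taken in branches.values())
    return mispredicted, mispredicted * N

miss, cost = sequencing_cost(branches, N=2)
print(miss, cost)   # 21 mispredictions -> 42 nonproductive cycles for N = 2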
Let P denote the probability that any dynamic instruction is incorrectly predicted. Note
that this probability is calculated for all dynamic instructions, including both branches and non-
branches. The average instruction sequencing cost can be estimated by the following equation:
relative sequencing cost per instruction = 1 + N * P.
If the peak sequencing rate is 1/K cycles per instruction, the actual rate would be (1 + N * P)/K
cycles per instruction 19 .
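The formula can be evaluated directly; the sketch below computes the relative sequencing cost and the resulting fetch rate, using an illustrative value of P chosen so that the output reproduces the numbers quoted later in Section 4.3.

# Relative sequencing cost per instruction = 1 + N * P,
# actual fetch rate = (1 + N * P) / K cycles per instruction.
def fetch_rate(N, P, K=1):
    relative_cost = 1.0 + N * P
    return relative_cost, relative_cost / K

# Illustrative value: P = 0.018 reproduces the 1.036 (N=2) and 1.18 (N=10) figures.
for N in (2, 10):
    rel, rate = fetch_rate(N, P=0.018, K=1)
    print(f"N={N}: relative cost {rel:.3f}, rate {rate:.3f} cycles/instruction")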
Table
4 highlights the dynamic branch behavior of the benchmarks. The dynamic cond. (dy-
namic uncond.) column gives the percentage of conditional (unconditional) branches among all
the dynamic instructions in the measurement. The dynamic percentages of branches confirm that
branch handling is critical to the performance of processors with large number of branch slots. For
example, 20% of the dynamic instructions of bison are branches. The P value for this program
is the branch prediction miss ratio times 20%. Assume that the sequencing pipeline has a peak
sequencing rate of one cycle per instruction and it has three slots (N = 3). The required
prediction accuracy to achieve a sequencing rate of 1.1 cycles per instruction can be calculated as
follows: 1 + 3P <= 1.1 requires P <= 0.033, and since about 20% of bison's dynamic instructions
are branches, the branch prediction miss ratio must be at most 0.033/0.20 = 16.7%.
The prediction accuracy must be at least 83.3%.
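The same relationship can be inverted to obtain the required prediction accuracy for a target sequencing rate; the parameter names below are illustrative.

# Required branch prediction accuracy for a target sequencing rate:
# target_cpi = 1 + N * P, with P = branch_fraction * miss_ratio.
def required_accuracy(N, branch_fraction, target_cpi):
    max_P = (target_cpi - 1.0) / N
    max_miss_ratio = max_P / branch_fraction
    return 1.0 - max_miss_ratio

print(required_accuracy(N=3, branch_fraction=0.20, target_cpi=1.1))  # ~0.833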
Table
6 provides the mean and sample deviation of P for a spectrum of thresholds averaged
over all benchmarks. Increasing the threshold effectively converts more branches into unlikely
branches. With N = 2, the relative sequencing cost per instruction is 1.036 for a
threshold of zero (no optimization). For a sequencing pipeline whose peak sequencing rate
is one instruction per cycle, this means a sustained rate of 1.036 cycles per instruction. For a
sequencing pipeline which sequences k instructions per cycle, this translates into 1.036/k (0.518 for k = 2)
cycles per instruction.
(Footnote 19: This formula provides a measure of the efficiency of instruction sequencing. It does not take external events such
as instruction misses into account. Since such external events freeze the sequencing pipeline, one can simply add the
extra freeze cycles into the formula to derive the actual instruction fetch rate.)
When the threshold is set to 100, the relative sequencing
cost per instruction is 1.04. With N = 10, the relative sequencing cost per instruction is 1.18
for a threshold of zero (no optimization). When the threshold is set to 100, the sequencing
cost per instruction becomes 1.20. Comparing Table 5 and Table 6, it is obvious that
converting infrequently executed branches into unlikely branches reduces code expansion at
little cost in instruction sequencing efficiency.
5 Conclusion
We have defined Inline Target Insertion, a cost-effective instruction sequencing method extended
from the work of McFarling and Hennessy[29]. The compiler and pipeline implementation offers
two important features. First, branches can be freely inserted into branch slots. The instruction
sequencing efficiency is limited solely by the accuracy of compile-time branch prediction. Second,
the execution can return from an interruption/exception to a program with one single program
counter. There is no need to reload other sequencing pipeline state information. These two features
make Inline Target Insertion a superior alternative (better performance and less software/hardware
complexity) to the conventional delayed branching mechanisms.
Inline Target Insertion has been implemented in the IMPACT-I C Compiler to verify the compiler
implementation complexity. The software implementation is simple and straightforward. The
IMPACT-I C Compiler is used in experiments reported in this paper. A code expansion control
method is also proposed and included in the IMPACT-I C Compiler implementation. The code
expansion and instruction sequencing efficiency of Inline Target Insertion have been measured for
UNIX and CAD programs. The experiments involve the execution of more than a billion in-
structions. The size of programs, variety of programs, and variety of inputs to each program are
significantly larger than those used in the previous experiments.
The overall compile-time branch prediction accuracy is about 92% for the benchmarks in this
study. For a pipeline which requires 10 branch slots and fetches two instructions per cycle, this
translates into an effective instruction fetch rate of 0.6 cycles per instruction (see Section 4.3). In
order to achieve the performance level reported in this paper, the instruction format must give
the compiler complete freedom to predict the direction of each static branch. While this can be
easily achieved in a new instruction set architecture, it could also be incorporated into an existing
architecture as an upward compatible feature.
It is straightforward to compare the performance of Inline Target Insertion and that of Branch
Target Buffers. For the same pipeline, the performance of both are determined by the branch
prediction accuracy. Hwu, Conte and Chang[20] performed a direct comparison between Inline
Target Insertion and Branch Target Buffers based on a similar set of benchmarks. The conclusion
was that, without context switches, Branch Target Buffers achieved an instruction sequencing
efficiency slightly lower than Inline Target Insertion. Context switches could significantly enlarge
the difference[28]. All in all, Branch Target Buffers have the advantages of binary compatibility
with existing architectures and no code expansion. Inline Target Insertion has the advantage of
not requiring extra hardware buffers, better performance, and performance insensitive to context
switching.
The results in this paper do not suggest that Inline Target Insertion is always superior to
Branch Target Buffering. Rather, the contribution is to show that Inline Target Insertion is a
cost-effective alternative to Branch Target Buffers. Performance is not the deciding factor, since both
achieve very good performance for deep pipelining and multiple instruction issue. The compiler
complexity of Inline Target Insertion is simple enough not to be a major concern either. This has
been proven in the IMPACT-I C Compiler implementation. If the cost of fast hardware buffers and
context switching are not major concerns but binary code compatibility and code size are, then
Branch Target Buffer should be used. Otherwise, Inline Target Insertion should be employed for
its better performance characteristics and lower hardware cost.
Acknowledgements
The authors would like to thank Michael Loui, Guri Sohi, Nancy Warter, Sadun Anik, Thomas
Conte, and all members of the IMPACT research group for their support, comments and sugges-
tions. We also like to thank the anonymous referees for their comments which were extremely
helpful in improving the quality of this paper. This research has been supported by the National
Science Foundation (NSF) under Grant MIP-8809478, Dr. Lee Hoevel at NCR, the Joint Services
Engineering Programs (JSEP) under Contract N00014-90-J-1270, the National Aeronautics and
Space Administration (NASA) under Contract NASA NAG 1-613 in cooperation with the Illinois
Computer laboratory for Aerospace Systems and Software (ICLASS), and the Office of Naval
Research under Contract N00014-88-K-0656.
--R
"Am29000 Streamlined Instruction Processor, Advance Informa- tion,"
"IBM Second-Generation RISC Machine Organization,"
"Beyond RISC: High Precision Architecture"
"Trace Selection for Compiling Large C Application Programs to Microcode"
"Forward Semantic: A Compiler-Assisted Instruction Fetch Method For Heavily Pipelined Processors"
"Control Flow Optimization for Supercomputer Scalar Pro- cessing"
"Aggressive Code Improving Techniques Based on Control Flow Analysis"
"Architecture Tradeoffs in the Design of MIPS-X"
"An Evaluation of Branch Architectures"
"Branch Folding in the CRISP Microprocessor: Reducing Branch Delay to Zero"
"A Characterization of Processor Performance in the VAX-11/780"
"Percolation of Code to Enhance Parallel Dispatching and Execution"
"Optimizing Delayed Branches"
"MIPS: A VLSI Processor Architecture"
"Design Decisions in SPUR"
"Multiple Instruction Issue in the NonStop Cyclone Processor"
"Highly Concurrent Scalar Processing"
"Checkpoint Repair for High Performance Out-of-order Execution Machines"
"Exploiting Concurrency to Achieve High Performance in a Single-chip Microar- chitecture"
"Comparing Software and Hardware Schemes For Reducing the Cost of Branches"
"Inline Function Expansion for Compiling Realistic C Pro- grams"
"Efficient Instruction Sequencing with Inline Target Insertion"
"i860(TM) 64-bit Microprocessor"
"Available Instruction-Level Parallelism for Superscalar and Superpipelined Machines"
MIPS R2000 RISC Architecture
The Architecture of Pipelined Computers
"On the Number of Operations Simultaneously Executable in Fortran-likePrograms and Their Resulting Speedup"
"Branch Prediction Strategies and Branch Target Buffer Design"
"Reducing the Cost of Branches"
"The Design of the 88000 RISC Family"
"Measuring the Parallelism Available for Very Long Instruction Word Architectures"
"HPS, A New Microarchitecture: Rationale and Introduction"
"A VLSI RISC"
"WISQ: A Restartable Architecture Using Queues"
"Multiple Instruction Issue and Single-chip Processors"
"The Performance Potential of Multiple Functional Unit Processors"
"The 801 Minicomputer"
"Espresso-MV: Algorithms for Multiple-Valued Logic Minimization"
"A Study of Branch Prediction Strategies"
"Implementation of Precise Interrupts in Pipelined Processors"
"Limits on Multiple Instruction Issue"
"Tradeoffs in Instruction Format Design for Horizontal Ar- chitectures"
The SPARC(TM) Architecture Manual
"Detection and Parallel Execution of Independent Instruc- tions"
"An Instruction Issuing Approach to Enhancing Performance in Multiple Functional Unit Processors"
"An Efficient Algorithm for Exploiting Multiple Arithmetic Units"
"Instruction Issue Logic in Pipelined Supercomputers"
--TR
Design decisions in SPUR
An instruction issuing approach to enhancing performance in multiple functional unit processors
Highly concurrent scalar processing
Reducing the cost of branches
HPS, a new microarchitecture: rationale and introduction
Branch folding in the CRISP microprocessor: reducing branch delay to zero
WISQ: a restartable architecture using queues
Architectural tradeoffs in the design of MIPS-X
Checkpoint repair for high-performance out-of-order execution machines
The performance potential of multiple functional unit processors
Trace selection for compiling large C application programs to microcode
Multiple instruction issue and single-chip processors
Tradeoffs in instruction format design for horizontal architectures
Available instruction-level parallelism for superscalar and superpipelined machines
Limits on multiple instruction issue
Inline function expansion for compiling C programs
Comparing software and hardware schemes for reducing the cost of branches
Forward semantic: a compiler-assisted instruction fetch method for heavily pipelined processors
Control flow optimization for supercomputer scalar processing
Multiple instruction issue in the NonStop cyclone processor
Implementation of precise interrupts in pipelined processors
The Design of the 88000 RISC Family
Optimizing delayed branches
The 801 minicomputer
A study of branch prediction strategies
A Characterization of Processor Performance in the vax-11/780
Hpsm
--CTR
Apoorv Srivastava , Alvin M. Despain, Prophetic branches: a branch architecture for code compaction and efficient execution, Proceedings of the 26th annual international symposium on Microarchitecture, p.94-99, December 01-03, 1993, Austin, Texas, United States
Oliver Rüthing , Jens Knoop , Bernhard Steffen, Sparse code motion, Proceedings of the 27th ACM SIGPLAN-SIGACT symposium on Principles of programming languages, p.170-183, January 19-21, 2000, Boston, MA, USA
Sofine Tahar , Ramayya Kumar, A Practical Methodology for the Formal Verification of RISC Processors, Formal Methods in System Design, v.13 n.2, p.159-225, Sept. 1998 | compiler;instruction sequencing;exceptions;interrupts;pipeline;branch slots;pipeline processing;delayed branches;inline target insertion;parallel programming;squashing;program counter;program compilers |
626722 | Finite Precision Error Analysis of Neural Network Hardware Implementations. | Through parallel processing, low precision fixed point hardware can be used to build a very high speed neural network computing engine where the low precision results in a drastic reduction in system cost. The reduced silicon area required to implement a single processing unit is taken advantage of by implementing multiple processing units on a single piece of silicon and operating them in parallel. The important question which arises is how much precision is required to implement neural network algorithms on this low precision hardware. A theoretical analysis of error due to finite precision computation was undertaken to determine the necessary precision for successful forward retrieving and back-propagation learning in a multilayer perceptron. This analysis can easily be further extended to provide a general finite precision analysis technique by which most neural network algorithms under any set of hardware constraints may be evaluated. | Introduction
The high speed desired in the implementation of many neural network algorithms, such as
back-propagation learning in a multilayer perceptron (MLP), may be attained through the
use of finite precision hardware. This finite precision hardware, however, is prone to errors.
A method of theoretically deriving and statistically evaluating this error is presented and
could be used as a guide to the details of hardware design and algorithm implementation.
The paper is devoted to the derivation of the techniques involved as well as the details of
the back-propagation example. The intent is to provide a general framework by which most
neural network algorithms under any set of hardware constraints may be evaluated.
Section 2 demonstrates the sources of error due to finite precision computation and
their statistical properties. A general error model is also derived by which an equation for
the error at the output of a general compound operator may be written. As an example,
error equations are derived in Section 3 for each of the operations required in the forward
retrieving and error back-propagation steps of an MLP. Statistical analysis and simulation
results of the resulting distribution of errors for each individual step of an MLP are also
included in this section. These error equations are then integrated, in Section 4, to predict
the influence of finite precision computation on several stages (early, middle, final stages)
of back-propagation learning. Finally, concluding remarks are given in Section 5.
2 Sources of Error in Finite Precision Computation
For a finite precision computation of a nonlinear operation of multiple variables, several
sources of error exist. For example, in the computation of y = φ(wx), the two input
variables, w and x, have input errors ε_w and ε_x, respectively, whose sources are prior
finite precision data manipulations. There are also errors generated by the finite precision
computation of the involved operators. More specifically, the finite precision multiplication of
the two variables generates one error, ε_*. Similarly, the finite precision nonlinear operator
φ generates the other error, ε_φ. Therefore, the resulting finite precision result ỹ is equal to
ỹ = φ((w + ε_w)(x + ε_x) + ε_*) + ε_φ ≈ φ(wx) + φ'(wx)(w ε_x + x ε_w + ε_*) + ε_φ,
where we assume that the error product ε_w ε_x is negligible, and a first order Taylor series
approximation is used.
The input errors are propagated through the operators. For example, the multiplication
of the two variables with finite precision errors propagates the error w ε_x + x ε_w. This
propagated error, along with the generated finite precision multiplication error, ε_*, further
propagates through the nonlinear operator, resulting in the total finite precision error
ε_y = φ'(wx)(w ε_x + x ε_w + ε_*) + ε_φ.
The total finite precision error ε_y imposed on y will then become the input finite precision
error of variable y for future operations.
2.1 Error Generation and Propagation by Successive Operators
A compound operator, which is produced by successive operators φ_1, φ_2, ..., φ_n,
is shown in Figure (1). Error at the input, ε_x, and error generated in each operator, ε_{φ_i},
is propagated through the remaining operators to the output. We can approximate the
output error, ε_y, in terms of ε_x, ε_{φ_i}, and φ_i [8]. From Figure (1),
ỹ = φ_n( · · · φ_2( φ_1( x + ε_x ) + ε_{φ_1} ) + ε_{φ_2} · · · ) + ε_{φ_n} .
Figure 1: Successive operators generating and propagating error.
If y_i is defined as the intermediate result after the first i successive operators, then
y_i = φ_i(y_{i-1}), with y_0 = x,
and
ỹ_i = φ_i(ỹ_{i-1}) + ε_{φ_i} . (5)
Carrying out a similar expansion for all intermediate values, we can rewrite ỹ to be
ỹ ≈ y + ε_x Π_{k=1}^{n} φ'_k(y_{k-1}) + Σ_{i=1}^{n} ε_{φ_i} Π_{k=i+1}^{n} φ'_k(y_{k-1}),
where the empty product Π_{k=n+1}^{n}(·) is defined to be 1.
The product shown is just the chain rule for the derivative, ∂ỹ/∂ỹ_i, which can be further approximated
by the derivative without error, ∂y/∂y_i. Note that this approximation is equivalent
to the approximation already made in the first order Taylor series:
Π_{k=i+1}^{n} φ'_k(ỹ_{k-1}) ≈ Π_{k=i+1}^{n} φ'_k(y_{k-1}) = ∂y/∂y_i .
Therefore,
ε_y ≈ (∂y/∂x) ε_x + Σ_{i=1}^{n} (∂y/∂y_i) ε_{φ_i} . (8)
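As a numerical illustration of Eq. (8), the following sketch propagates an input error and per-operator errors through a chain of operators with the chain rule and compares the first-order estimate against the exactly perturbed computation. The operator chain and error magnitudes are invented for the example.

import math

# Illustrative chain of operators and their derivatives (not from the paper).
phis  = [math.tanh, lambda v: v * v, math.sin]
dphis = [lambda v: 1.0 - math.tanh(v) ** 2, lambda v: 2.0 * v, math.cos]

def first_order_error(x, eps_x, eps_phi):
    # forward pass: error-free intermediate values y_1 .. y_n
    ys, y = [], x
    for phi in phis:
        y = phi(y)
        ys.append(y)
    inputs = [x] + ys[:-1]                 # input seen by each operator
    dy_dx = 1.0
    for k in range(len(phis)):
        dy_dx *= dphis[k](inputs[k])
    total = dy_dx * eps_x                  # (dy/dx) * eps_x
    for i in range(len(phis)):             # error injected after operator i
        factor = 1.0                       # dy/dy_i = product of remaining derivatives
        for k in range(i + 1, len(phis)):
            factor *= dphis[k](inputs[k])
        total += factor * eps_phi[i]
    return total

def exact_error(x, eps_x, eps_phi):
    y_true, y_pert = x, x + eps_x
    for phi, e in zip(phis, eps_phi):
        y_true, y_pert = phi(y_true), phi(y_pert) + e
    return y_pert - y_true

x, eps_x, eps_phi = 0.7, 1e-3, [2e-4, -1e-4, 5e-5]
print(first_order_error(x, eps_x, eps_phi), exact_error(x, eps_x, eps_phi))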
2.2 Error Generation and Propagation by General Compound Operators
The effects of finite precision error at the output of a general system of compound operators
with multiple input variables can be calculated through an extension of the previous analysis
for successive operators of a single variable [8]. The following steps are employed:
1. Break the computation into a calculation graph (see the example given in Figure (2)).
The general calculation graph is made up of n operators, {φ_i}, and m system inputs,
{x_j}.
2. Number the operators, φ_i, such that the intermediate generated inputs,
{y_k}, to an operator, φ_i, have lower indices than the operator output, y_i. By extending
Eq. (8) to multiple inputs, the total finite precision error, ε_y, is given as [8]:
ε_y ≈ Σ_{j=1}^{m} (∂y/∂x_j) ε_{x_j} + Σ_{i=1}^{n} (∂y/∂y_i) ε_{φ_i} . (9)
3. Using the calculation graph, the partial derivatives, ∂y/∂x_j and ∂y/∂y_i, are evaluated
and substituted into Eq. (9) to give an equation for ffl y .
4. Statistical methods discussed below are then used to evaluate the error in Eq. (9).
Methods include the computation of mean and variance for various functions of random
variables, as well as approximations using the central limit theorem.
2.3 Common Techniques of Finite Precision Computations
Three common techniques are used in finite precision computations: truncation, jamming,
and rounding. The truncation operator simply chops the q lowest order bits off of a number
and leaves the new lowest order bit, in the 2 r -th place, unchanged. The jamming operator
chops off the q lowest order bits of a number and forces the new lowest order bit, in the 2 r -th
place, to be a "1" if any of the q removed bits were a "1"; otherwise, the new lowest order
bit retains its value. This operation is equivalent to replacing the 2 r -th bit with the logical
OR of the 2 r -th bit and the q bits which are chopped off. The jamming operator has the
advantage of generating error with zero mean, but generates error with a higher variance
compared with the truncated one. The rounding operator also chops off the q lowest order
bits of a number which will have its new lowest order bit in the 2^r-th place. If the q-bit
value chopped off is greater than or equal to 2^(r-1), the resulting value is incremented by 2^r;
otherwise, it remains unchanged [1].
The error generated by truncating, jamming, or rounding techniques may be considered
to be a discrete random variable distributed over a range determined by the specific
technique being employed. For a statistical view of the error, it is desirable to know the
mean and variance of the error generated by each of these three techniques. For a discrete
random variable, x, the mean is given by
μ_x = Σ_k x_k P(x = x_k),
and the variance is
σ²_x = Σ_k (x_k − μ_x)² P(x = x_k).
The error is usually assumed to be uniformly distributed.
Truncation: Truncation generates error which is uniformly distributed in the range (−2^r, 0],
with each of the 2^q possible error values having probability 2^{−q}. The mean and variance may
then be computed from the expressions above.
Jamming: The error generated by jamming is not uniformly distributed, as the probability
of the error being zero is twice the probability of the error holding any of its other possible
values. The range of the error is [−(2^r − 2^{r−q}), 2^r − 2^{r−q}], which results in 2^{q+1} − 1 possible
error values. The mean of the jamming error is zero, and its variance is larger than that of truncation.
Rounding: Rounding generates error which is uniformly distributed in the range [−2^{r−1}, 2^{r−1}),
with each of the 2^q possible error values having probability 2^{−q}. The mean and variance are
computed in the same way.
Nonlinear Functions of Discrete Random Variables: For a nonlinear function, g(x), of a
discrete random variable, x, the mean and variance are given by
μ_{g(x)} = Σ_k g(x_k) P(x = x_k),  σ²_{g(x)} = Σ_k (g(x_k) − μ_{g(x)})² P(x = x_k).
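The error statistics of the three techniques can also be checked empirically. The sketch below removes the q bits below the 2^r place by truncation, jamming, and rounding and reports the sample mean and variance of the resulting error; the bit parameters and value range are arbitrary illustrations.

import random

def quantize(value, q, r, mode):
    """Remove the q bits below weight 2**r from a non-negative fixed-point value."""
    lsb = 2.0 ** r
    chopped = value % lsb                        # value of the q removed bits
    base = value - chopped
    if mode == "truncate":
        return base
    if mode == "round":
        return base + (lsb if chopped >= lsb / 2 else 0.0)
    if mode == "jam":                            # OR the removed bits into the new LSB
        if chopped == 0.0:
            return base
        return base if (base / lsb) % 2 >= 1 else base + lsb
    raise ValueError(mode)

def error_stats(mode, q=4, r=-8, trials=100_000):
    step = 2.0 ** (r - q)                        # original resolution
    errs = []
    for _ in range(trials):
        v = random.randrange(0, 2 ** 16) * step  # random fixed-point value
        errs.append(quantize(v, q, r, mode) - v)
    m = sum(errs) / trials
    var = sum((e - m) ** 2 for e in errs) / trials
    return m, var

for mode in ("truncate", "jam", "round"):
    print(mode, error_stats(mode))
# Expect: truncation has a negative mean, jamming has ~zero mean but the largest
# variance, and rounding has ~zero mean with the smallest variance.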
2.4 Statistical Properties of Independent Random Variables
For two independent random variables, x and y, with means μ_x, μ_y and variances σ²_x, σ²_y,
and a constant a, the following properties of mean and variance can be shown [7]:
1. Multiplication by a constant: μ_{ax} = a μ_x and σ²_{ax} = a² σ²_x.
2. Sum of two independent random variables: μ_{x+y} = μ_x + μ_y and σ²_{x+y} = σ²_x + σ²_y.
3. Product of two independent random variables: μ_{xy} = μ_x μ_y and σ²_{xy} = σ²_x σ²_y + σ²_x μ²_y + σ²_y μ²_x.
2.5 Statistical Properties of Sum of Independent Random Variables
Expected Squared Error: The expected squared error can be written in terms of the
mean and variance of the error. Consider a set of errors which are independent random
variables, {ε_i}, with mean μ and variance σ². Then the expected value of the average sum
of the squares of {ε_i} can be written
E[ (1/N) Σ_{i=1}^{N} ε_i² ] = E[ε_i²].
Noting that σ² = E[ε_i²] − μ², the expected value of the average sum of squared errors is
equal to σ² + μ². (24)
Central Limit Theorem: The central limit theorem [7] states that if fx i g are independent
random variables, then the density of their sum, properly normalized, tends to
a normal curve as N ! 1. For discrete random variables, the probabilities tend to the
samples of a normal curve. Normalization can be achieved in a couple of different ways. If
{x_i} are discrete random variables with mean μ and variance σ², then the sum x = Σ_{i=1}^{N} x_i
has mean μ_x = N μ and variance σ²_x = N σ².
Invoking the central limit theorem, the probability that the sum of the random variables,
x, is equal to the discrete value x_k approaches the corresponding sample of a normal curve
with mean μ_x and variance σ²_x when N is large.
Sum of Products of Independent Random Variables: The central limit theorem
can be extended to cover the case where the random variable being summed is a product
of random variables. Say that for independent random variables {x_i} and {y_i},
xy = Σ_{i=1}^{N} x_i y_i ,
then the probability density of the random variable xy approaches a normal curve for large
N with mean and variance equal to
μ_xy = N μ_x μ_y and σ²_xy = N ( σ²_x σ²_y + σ²_x μ²_y + σ²_y μ²_x ). (28)
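A quick empirical check of Eq. (28): the sketch below draws x_i and y_i from arbitrary uniform distributions (an illustrative choice) and compares the sample mean and variance of the sum of products against the predicted values.

import random, statistics

N, trials = 64, 20_000
mx, vx = 0.5, 1.0 / 12          # x_i ~ uniform(0, 1)
my, vy = 0.0, 4.0 / 12          # y_i ~ uniform(-1, 1)

samples = []
for _ in range(trials):
    s = sum(random.uniform(0, 1) * random.uniform(-1, 1) for _ in range(N))
    samples.append(s)

pred_mean = N * mx * my
pred_var  = N * (vx * vy + vx * my**2 + vy * mx**2)
print(statistics.mean(samples), pred_mean)
print(statistics.variance(samples), pred_var)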
3 Application to Neural Network Retrieving and Learning
It has been shown that operations in both the retrieving and the learning phases of most of
the neural network models can be formulated as a linear affine transformation interleaved
with simple linear or nonlinear scalar operation. In terms of the hardware implementations,
all these formulations call for a MAC (multiply and accumulation) processor hardware
[4, 5, 1]. Without loss of generality, we will specifically discuss the multilayer perceptron
neural network model and the back-propagation learning [10, 9].
3.1 Forward Retrieving and Back Propagation of an MLP
Given a trained (fixed-weight) MLP, the retrieving phase receives the input test pattern,
fx 0;i g, and propagates forward through the network to compute the activation values of
the output layer, fx L;j g, which will be used as the indicator for classification or regression
purposes. On the other hand, in the commonly used learning phase of an MLP, the input
training pattern, fx 0;i g, is first propagated forward through the network and the activation
values, fx l;j g, are computed according to the same forward operations used in the retrieving
phase. Then the output activation values, fx L;j g, are compared with the target values
g, and the value of the output delta, fffi L;j g, for each neuron in the output layer is
derived. These error signals are propagated backward to allow the recursive computation
of the hidden delta's, fffi l;j g, as well as the update values of the weights, f1w l;i;j g, at each
layer. While many methods have been proposed to accelerate learning by estimating the
local curvature of the training error surface using second order derivative information, the
discussion of these methods are beyond the scope of this paper, and can be referred to [3].
The operations in the forward retrieving of an L-layer perceptron can be formulated as
a forward affine transformation interleaved with a nonlinear scalar activation function:
x_{l+1,j} = f( Σ_i w_{l+1,i,j} x_{l,i} ), (29)
where x_{l,i} denotes the activation value of the i-th neuron at the l-th layer, and w_{l+1,i,j} denotes the
synaptic weight interconnecting the i-th neuron at the l-th layer and the j-th neuron at the
(l+1)-th layer. The nonlinear activation function, f(.), is usually taken to be sigmoidal.
The learning of an MLP follows the iterative gradient descent approach with the following
update at each presentation of a training data pair [10, 9]:
Δw_{l,i,j} = η δ_{l,j} x_{l-1,i}, (30)
where the computation of the back-propagated error, δ_{l,j}, can again be formulated as a
backward affine transformation interleaved with the post-multiplication of the derivatives
of the nonlinear activation function:
δ_{l,j} = f'_{l,j} Σ_k w_{l+1,j,k} δ_{l+1,k}, (31)
with the initial output-layer propagated error being
δ_{L,j} = f'_{L,j} ( t_j − x_{L,j} ). (32)
3.2 Finite Precision Analysis of Forward Retrieving
Explicitly following the procedure discussed in Section 2.2, the calculation graph of the
forward retrieving operation, with simplified notation (see Eq. (29)), in an MLP is shown
in Figure 2.
Figure 2: Calculation graph for the forward retrieving of an MLP, where * denotes a
truncation, jamming, or rounding operator.
To carry out the analytical formula as given in Eq. (9) for the forward retrieving of an
MLP, several partial derivatives need to be computed from Eq. (33):
∂y/∂w_{i,j} = f'(y_{N+}) x_i ,  ∂y/∂x_i = f'(y_{N+}) w_{i,j} ,  ∂y/∂y_{i*} = ∂y/∂y_{i+} = f'(y_{N+}). (34)–(37)
By substituting the values for the partial derivatives in Eqs. (34) to (37) and the
generated and propagated errors for variables and operators into Eq. (9),
ε_y ≈ f'(y_{N+}) [ Σ_i ( x_i ε_{w_{i,j}} + w_{i,j} ε_{x_i} + ε_{*_i} + ε_{+_i} ) + ε_* ] + ε_f . (38)
3.3 Finite Precision Analysis of Output Delta
From Eq. (32), the calculation graph for the computation of back-propagated error in an
output neuron, with simplified notation, is shown in Figure (3).
Figure 3: Calculation graph for the output delta computation in an output neuron.
Again, to carry out the analytical formula as given in Eq. (9) for the output delta
computation of an MLP, the partial derivatives of Eq. (39) are evaluated.
∂δ/∂t_j , ∂δ/∂x_{L,j} , ∂δ/∂y_{j−} , and ∂δ/∂f'_j .
Substituting for the partial derivatives and individual error terms, the overall finite precision
error for the output delta computation is
ε_{δ_{L,j}} ≈ f'_j ( ε_{t_j} − ε_{x_{L,j}} ) + ( t_j − x_{L,j} ) ε_{f'_j} + ε_* . (44)
3.4 Finite Precision Analysis of Hidden Delta
From Eq. (31), the calculation graph for the computation of back-propagated error in a
hidden layer neuron, with simplified notation, is shown in Figure (4).
Following similar partial derivative evaluations using Eq. (45), we can again compute
the finite precision error for the hidden delta (see Eq. (9)):
ε_{δ_{l,j}} ≈ f'_j [ Σ_k ( w_{j,k} ε_{δ_{l+1,k}} + δ_{l+1,k} ε_{w_{j,k}} + ε_{*_k} + ε_{+_k} ) + ε_* ] + ( Σ_k w_{j,k} δ_{l+1,k} ) ε_{f'_j} . (46)
Figure 4: Calculation graph for the hidden delta computation in a hidden neuron.
3.5 Finite Precision Analysis of Weight Update
From Eq. (30), the calculation graph for the computation of the weight update (without
the momentum term), with simplified notation, is shown in Figure (5).
Figure 5: Calculation graph for the weight update computation.
Following similar partial derivative evaluations using Eq. (47), we can again compute
the finite precision error for the weight update (see Eq. (9)):
ε_{Δw_{l,i,j}} ≈ η ( x_{l−1,i} ε_{δ_{l,j}} + δ_{l,j} ε_{x_{l−1,i}} + ε_* ) + ε_{*'} . (48)
3.6 Statistical Evaluation of the Finite Precision Errors
Given the analytical expressions of all the finite precision errors associated with the forward
retrieving and back-propagation of an MLP, a statistical evaluation of these errors is
undertaken. This evaluation is based on the mean and variance analysis using truncation,
jamming, and rounding techniques, and also based on statistical properties of independent
random variables, sums of independent random variables, and sums of products of independent
random variables discussed in Sections 2.3, 2.4, and 2.5. The first step is to choose
the precision of each component which will be employed at each step in the problem. A
practical limited precision implementation of the MLP algorithm might use precisions as
follows [2].
1. All neurons have 8-bit (8 bits to the right of the decimal) inputs, outputs, and targets
with range [0.0, 1.0).
2. All weights and biases use 16 bits (one sign bit, 3 bits to the left and 12 bits to the
right of the decimal) with range [-8.0, 8.0).
3. The output delta is 8 bits (one sign bit and 7 bits all to the right of the decimal) with
range [-0.5, 0.5).
4. The hidden delta uses 16 bits (one sign bit, 3 bits to the left and 12 bits to the right of the
decimal) with range [-8.0, 8.0).
Finite Precision Error in Forward Retrieving: The expected forward retrieving error
can be calculated for both a single layer of neurons and for multiple layers of neurons by
propagating upward the finite precision errors of the lower layers. First, a simplification may
be made to Equation (38). The multiply and accumulate steps can be computed without
generating any error if enough bits (e.g., 24 bits) are used for each of the intermediate steps,
y_{i*} and y_{i+}. In this case, ε_{y_{i*}} = ε_{y_{i+}} = 0. This is practical since the expense of accumulator
precision is very small.
The 3 operator now reduces the final 24-bit sum to an 8-bit (one sign bit, 3 bits to the
left and 4 bits to the right of the decimal) value which is used as the input for the sigmoid
look-up table. Equation (38) may now be rewritten as
ε_{y_j} ≈ f'_j [ Σ_i ( w_{ij} ε_{x_i} + x_i ε_{w_{ij}} ) + ε_* ] + ε_{f_j} . (49)
Before invoking the central limit theorem on the sums, it is necessary to know the
distributions of the random variables w_{ij}, ε_{w_{ij}}, x_i, ε_{x_i}, f'_j, ε_*, and ε_{f_j}. Table 1 shows
the statistically evaluated values for each contributing component of Eq. (49) based on
the assumptions of bit sizes given above. The evaluation starts with 16-bit weights which
are uniformly distributed across the entire range, [-8, 8), and the weight error comes from
truncating 24-bit weights to 16 bits. For an input neuron, x_i is an 8-bit value uniformly
distributed over [0.0, 1.0) which has been truncated from a 24-bit value. Therefore, ε_{x_i} is
the truncation error with q = 16. The distribution of f'_j is approximated as
a function of a normally distributed random variable. ε_* is the error generated when the
accumulated 24-bit value is jammed to become an 8-bit value, and ε_{f_j} is the error generated
in the lookup table, which is approximated to be the same as rounding 2 bits at the
corresponding place.
R.V.   Type         q    r    μ    σ²
       rounding     8   -12
       truncation
       jamming
       rounding     2   -5
Table 1: Mean and variance of variables in the forward retrieving calculation.
Based on the precision assumptions given in Table (1) with various sizes of bit-allocation
to the weights, Figure (6) shows the statistically evaluated average sum of the squares of
ffl y , as defined in Eqs. (49) and (24), due to the finite precision computation of a single step
forward retrieving. In these evaluations, for any weight bit number (say k-bits), the weight
always contains one sign bit, 3 left-of-decimal bits, and k-4 right-of-decimal bits. The lower
solid curve shows the statistical evaluation of finite precision error introduced in neurons
of the first hidden layer and the upper solid curve shows that of the second hidden layer
(or the output layer in a 2-layer perceptron). Note that the statistical evaluation of errors
shows a dive-in at around 8-bit weights. This fact suggests that for the implementation of
finite precision hardware for only the forward retrieving purpose, we can train the network
using high precision computation (under the constraint of only sign plus 3 bits to the left
of the decimal), and then download the well trained finite precision portion (8 bits total,
or 4 bits to the right of the decimal) of the weights into the hardware. The performance
degradation due to this finite precision conversion is almost negligible.
Figure 6: Statistically evaluated and simulation-evaluated values of E[ε_y²] introduced to
neurons of the first and second hidden layers, plotted against the number of weight bits.
Simulations are also conducted to verify the statistical evaluations. A 2-layer perceptron
with 100 inputs, 100 hidden neurons and 100 output neurons is simulated. 100 sets of
randomly generated 100-dimensional input data are tested and averaged. All the weights of
the network are also randomly generated. The average sum of the squares of the differences
between the finite precision computation and the full precision (64 bits for all the contributing
components) computation can thus be obtained. The lower dashed curve shows
the simulated evaluation of finite precision error introduced in neurons of the first hidden
layer, and the upper dashed curve shows that of the output layer. Both curves match quite
consistently with those produced by statistical evaluations.
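The simulation just described can be reproduced in outline with the sketch below, which quantizes weights to a chosen number of bits (one sign bit plus 3 integer bits), keeps 8-bit inputs and outputs, and measures the average squared difference between the finite precision and full precision outputs of a single layer. The network size, the value ranges, and the use of simple rounding in place of the truncation/jamming mix are simplifications of the setup in the text.

import numpy as np

def quantize(x, frac_bits, lo, hi):
    """Round x to a grid with 2**-frac_bits spacing and clip to [lo, hi)."""
    step = 2.0 ** (-frac_bits)
    return np.clip(np.round(x / step) * step, lo, hi - step)

def layer_error(weight_bits, n_in=100, n_out=100, trials=100,
                rng=np.random.default_rng(0)):
    # weights uniform over [-8, 8); weight_bits = 1 sign + 3 integer + fractional bits
    w_frac = weight_bits - 4
    errs = []
    for _ in range(trials):
        w = rng.uniform(-8, 8, size=(n_in, n_out))
        x = rng.uniform(0, 1, size=n_in)
        full = 1.0 / (1.0 + np.exp(-(x @ w)))            # full precision reference
        wq = quantize(w, w_frac, -8.0, 8.0)
        xq = quantize(x, 8, 0.0, 1.0)                     # 8-bit inputs
        acc = quantize(xq @ wq, 4, -8.0, 8.0)             # 8-bit sigmoid-table input
        finite = quantize(1.0 / (1.0 + np.exp(-acc)), 8, 0.0, 1.0)
        errs.append(np.mean((finite - full) ** 2))
    return float(np.mean(errs))

for bits in (6, 8, 10, 12, 16):
    print(bits, layer_error(bits))
# Average squared output error versus weight bits; compare qualitatively with Figure 6.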
4 Finite Precision Analysis for Iterative Learning
As discussed in Section 3.1, back-propagation learning involves all four consecutive steps
of computations: forward retrieving, output delta computation, hidden delta computation,
and weight updating. Therefore, for each weight updating iteration at the presentation of
any training pattern, the finite precision errors ffl 1w introduced to f1w l;i;j g given in Eq.
(48) is in fact a propagated result of the error generated from the previous three steps.
Therefore, the final mathematical expression of finite precision error for a single weight
updating iteration can be formulated in a straightforward manner based on the existing
derivations given in Eqs. (38), (44), (46), and (48). The statistical evaluation value for
the average sum of the squares of weight updating error ffl 1w due to the finite precision
computation of a single learning iteration can thus be computed.
The back-propagation learning discussed above is simply a nonlinear optimization problem
based on simple gradient descent search, with an elegant way of computing the gradients
using the chain rule on the layers of the network. This gradient descent search updates the
weights based only on the first derivative approximation of the error surface with the up-dating
of each individual weight being independent of the others [6]. Therefore, even if the
approach is computationally efficient, it can behave very unwisely and converge very slowly
for complex error surfaces. Due to the strong influence introduced in the gradient descent
search approximation, the real effect to the learning convergence and accuracy due to finite
precision computation will be difficult to measure. Therefore, the statistically evaluated
average sum of the squares of ffl 1w , by itself, can not determine a network's propensity to
learn.
4.1 Ratio of Finite Precision Errors
A more meaningful measure, ρ, which potentially indicates the effect of finite precision
computation on the weight updating, can be defined as the ratio of the statistical average
sum of the squares of the finite precision weight updating error ε_Δw and that of the full
precision weight updating magnitude Δw²:
ρ = E[ε_Δw²] / E[Δw²]. (50)
This ratio serves as a useful indicator of the additional impact caused by finite precision
computation on top of that caused by the gradient descent approximation of back-propagation
learning. The ratio depends not only on the number of bits assigned to the finite
precision computation, but on the current stage of learning progress, which can be specified
by the distribution of the differences between the desired and actual outputs, β_j = t_j − x_{L,j},
where we assume that ε_{x_{L,j}} = 0, since the ability to learn should depend on the ability
to learn these finite precision values.
Based on the same practical choices of finite precision bit size given in Section 3.6 vs.
the number of bits (say k bits) assigned to the weights {w_ij} and weight updates {Δw_ij},
we can statistically evaluate this ratio at several different stages of learning. Figure 7 shows
the statistical evaluation values of the finite precision ratio for the weights connecting the
neurons between the hidden and the output layers in a 2-layer MLP. Four different values of
β are used, which represent four different stages of learning: an early stage, a
middle stage, a soft convergence stage, and a hard convergence
stage. Figure 8 shows the statistical evaluation values of the finite precision
ratio for the weights connecting the neurons between the hidden and the input layers for
these four different values of fi.
Note that the finite precision ratio curves gradually (along the various stages of learning)
dive around the region where the number of weight bits is 12-14 bits for the soft convergence
stage and 14-16 bits for the hard convergence stage of learning, and then become almost steady
when the number of bits is increased further. This dive point indicates the potential strong
disturbance to the convergence and accuracy of back-propagation learning in the long run.
Therefore, it gives a good guideline as to how many bits are required for the weights
in the learning so as to have similar learning convergence and accuracy as that attained
using high precision computation. It is also interesting to note that at the later stages of
learning the impact of the finite precision error grows due to the smaller values
of β when the network is fine-tuning its weights.
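A rough empirical analogue of the ratio ρ can be obtained with the sketch below, which forms the weight update δ·x for an output neuron in full precision and in a quantized arithmetic, and reports the ratio of the mean squared update error to the mean squared update for several magnitudes of β = t − x_L. The distributions and bit choices are simplified illustrations, not the exact settings of the paper.

import numpy as np

rng = np.random.default_rng(1)

def q(x, frac_bits):
    step = 2.0 ** (-frac_bits)
    return np.round(x / step) * step

def rho(beta_scale, weight_frac_bits, samples=100_000):
    x    = rng.uniform(0.0, 1.0, samples)                 # hidden activations
    xo   = rng.uniform(0.0, 1.0, samples)                 # output activations
    beta = beta_scale * rng.uniform(-1.0, 1.0, samples)   # t - x_L at this stage
    delta   = beta * xo * (1.0 - xo)                      # full precision output delta
    dw      = delta * x                                   # full precision update (eta = 1)
    delta_q = q(q(beta, 8) * q(xo * (1.0 - xo), 8), 8)    # 8-bit output delta
    dw_q    = q(delta_q * q(x, 8), weight_frac_bits)      # quantized weight update
    err = dw_q - dw
    return np.mean(err ** 2) / np.mean(dw ** 2)

for frac_bits in (8, 12, 16):
    print([round(float(rho(b, frac_bits)), 4) for b in (0.5, 0.1, 0.02)])
# Rows: weight-update precision; columns: decreasing |t - x_L| (early to late learning).
# The ratio grows as learning converges, in qualitative agreement with Figure 7.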
Figure 7: The statistical evaluation values of the finite precision ratio for the top-layer
weights in a 2-layer MLP, plotted against the number of weight bits. Four different stages of learning are evaluated.
4.2 Simulation Results for Iterative Learning
To verify the theoretical evaluation of the impact caused by the finite precision computation,
a simple regression problem is designed, which maps 2-dimensional inputs, {x_1, x_2}, to
1-dimensional outputs, {y}.
An MLP containing 2 input neurons, 8 hidden neurons, and 1 output neuron, is adopted.
There are 256 pairs of randomly selected data for training. Finite precision learning simu-
Figure 8: The statistical evaluation values of the finite precision ratio for the hidden-layer
weights in a 2-layer MLP, plotted against the number of weight bits. Four different stages of learning are evaluated.
lations were performed based on the same choices of bit sizes for each component given in
Section 3.6 vs. different numbers of bits assigned to {w_ij} and {Δw_ij}. Figure 9 shows the
average (of 256 training data) squared difference between the desired and actual outputs of
the 2-D regression problem after the network converges (a hard convergence is usually required
in this kind of nonlinear regression problem). Note that, at the predicted point (around
15-16 bits of weights), the squared difference curve dives. That implies the inability to converge
to the desired mapping when the number of bits for the weights is less than 16 bits.
Similar supporting results are also observed in the XOR classification problem using an
MLP with 2 inputs, 3 hidden neurons, and 1 output (see Figure 10). Due to the classification nature
of the XOR problem, a soft convergence is good enough for the termination of training.
Therefore, at the predicted point of 12-13 bits of weights, the squared difference curve dives.
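The kind of experiment behind Figures 9 and 10 can be outlined as follows: a tiny MLP is trained on XOR while the stored weights and the weight updates are quantized to a given number of fractional bits. The architecture details, learning rate, batch updates, and iteration counts are illustrative choices rather than the settings used in the paper.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

def q(x, frac_bits):
    """Round to a 2**-frac_bits grid and clip to the weight range [-8, 8)."""
    step = 2.0 ** (-frac_bits)
    return np.clip(np.round(x / step) * step, -8.0, 8.0 - step)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def add_bias(a):
    return np.hstack([a, np.ones((a.shape[0], 1))])

def train_xor(frac_bits, epochs=20000, eta=0.5):
    W1 = q(rng.uniform(-1, 1, (3, 3)), frac_bits)   # 2 inputs + bias -> 3 hidden
    W2 = q(rng.uniform(-1, 1, (4, 1)), frac_bits)   # 3 hidden + bias -> 1 output
    Xb = add_bias(X)
    for _ in range(epochs):
        h = sigmoid(Xb @ W1)
        hb = add_bias(h)
        y = sigmoid(hb @ W2)
        d2 = (T - y) * y * (1 - y)                  # output delta
        d1 = (d2 @ W2[:3].T) * h * (1 - h)          # hidden delta
        # quantize both the computed updates and the stored weights
        W2 = q(W2 + q(eta * hb.T @ d2, frac_bits), frac_bits)
        W1 = q(W1 + q(eta * Xb.T @ d1, frac_bits), frac_bits)
    y = sigmoid(add_bias(sigmoid(Xb @ W1)) @ W2)
    return float(np.mean((T - y) ** 2))

for bits in (6, 8, 10, 12, 16):
    print(bits, train_xor(bits))
# Compare the achievable squared error across weight precisions (cf. Figure 10).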
Another interesting observation is worthwhile to mention: the total finite precision error
in a single iteration of weight updating is mainly generated in the final jamming operators
in the computation of the output delta, hidden delta, and weight update. Therefore, even
though it is required to have at least 13 to 16 bits assigned to the computation of the weight
update and stored as the total weight value, the number of weight bits in the computation
of forward retrieving and hidden delta steps of learning can be as low as 8 bits without
excessive degradation of learning convergence and accuracy.
Figure 9: The average squared differences between the desired and actual outputs of the
2-D regression problem after the network converges, plotted against the number of weight bits.
Figure 10: The average squared differences between the desired and actual outputs of the
XOR problem after the network converges, plotted against the number of weight bits.
5 Concluding Remarks
The paper is devoted to the derivation of the finite precision error analysis techniques for
neural network implementations, especially analysis of the back-propagation learning of
MLP's. This analysis technique is proposed to be more versatile and to prepare the ground
for a wider variety of neural network algorithms: recurrent neural networks, competitive
learning networks, etc. All these networks share similar computational mechanisms as
those used in back-propagation learning. For the forward retrieving operations, it is shown
that 8-bit weights are sufficient to maintain the same performance as using high precision
computation. On the other hand, for network learning, at least 14-16 bits of precision
must be used for the weights to avoid having the training process divert too much from the
trajectory of the high precision computation.
--R
A VLSI architecture for high-performance
Implementation limits for artificial neural networks.
From nonlinear optimization to neural network learning.
Parallel architectures for artificial neural nets.
A unified architecture for artificial neural networks.
Recursive least squares learning algorithms for neural networks
Pizer with Victor L.
Learning internal representations by error propagation.
Beyond regression: New tools for prediction and analysis in the behavior science.
--TR
A unified systolic architecture for artificial neural networks
Learning internal representations by error propagation
--CTR
Ming Zhang , Stamatis Vassiliadis , Jos G. Delgado-Frias, Sigmoid Generators for Neural Computing Using Piecewise Approximations, IEEE Transactions on Computers, v.45 n.9, p.1045-1049, September 1996
Cesare Alippi , Luciano Briozzo, Accuracy vs. Precision in Digital VLSI Architectures for Signal Processing, IEEE Transactions on Computers, v.47 n.4, p.472-477, April 1998
Yongsoon Lee , Seok-Bum Ko, An FPGA-based face detector using neural network and a scalable floating point unit, Proceedings of the 5th WSEAS International Conference on Circuits, Systems, Electronics, Control & Signal Processing, p.315-320, November 01-03, 2006, Dallas, Texas
Cesare Alippi, Randomized Algorithms: A System-Level, Poly-Time Analysis of Robust Computation, IEEE Transactions on Computers, v.51 n.7, p.740-749, July 2002 | feedforward neural nets;low precision;neural network algorithms;neural network hardware;multilayer perceptron;forward retrieving;silicon area;finite precision computation;neural chips;error analysis;back-propagation learning;parallel processing;system cost |
626771 | Reconfigurability and Reliability of Systolic/Wavefront Arrays. | The authors study fault-tolerant redundant structures for maintaining reliable arrays. In particular, they assume that the desired array (application graph) is embedded in a certain class of regular, bounded-degree graphs called dynamic graphs. The degree of reconfigurability (DR) and DR with distance (DR/sup d/) of a redundant graph are defined. When DR and DR/sup d/ are independent of the size of the application graph, the graph is finitely reconfigurable (FR) and locally reconfigurable (LR), respectively. It is shown that DR provides a natural lower bound on the time complexity of any distributed reconfiguration algorithm and that there is no difference between being FR and LR on dynamic graphs. It is also shown that if both local reconfigurability and a fixed level of reliability are to be maintained, a dynamic graph must be of a dimension at least one greater than the application graph. Thus, for example, a one-dimensional systolic array cannot be embedded in a one-dimensional dynamic graph without sacrificing either reliability or locality of reconfiguration. | Introduction
Highly parallel pipelined structures such as systolic or wavefront arrays are attractive
architectures for achieving high throughput [9]. Examples of important potential applications
include digital signal processing [11, 2], large-scale scientific computation on
arrays for solving partial differential equations [12], and simulating lattice-gas automata
[14]. As such array processors become larger, the reliability of the processing elements
(PE's) becomes a critical issue, and it becomes necessary to use fault-tolerant techniques
- both at the time of fabrication [15] and at runtime. Defective PE's must be located,
and the architecture reconfigured to substitute good PE's for bad.
In certain runtime applications like avionics and space flight, fault tolerant techniques
must be able to restore proper operation after failures as fast as possible. For this purpose,
distributed reconfiguration algorithms executed in parallel by the PE's themselves have
been studied in [13, 17]. In [5] a fault-tolerant multiprocessor is developed for space
applications that also employs a distributed reconfiguration approach for the topology
of a chordal skip-link ring. In this paper, we study the complexity of algorithms for
reconfiguring arrays after failures, and focus especially on runtime fault tolerance.
In most literature on fault tolerance, faults are confined to processing elements only
and it is assumed that all switches and connections [1, 10, 3, 18] are perfect. This is
not valid when the number of switches and connections becomes large. In this paper we
will use a graph model that takes into account failures of switches and interconnection
wires as well as PE's. PE's and switches will be represented by nodes of the graph in the
obvious way, and a connection between two elements in the computational structure will
be represented by a node inserted in the edge between the appropriate two nodes in the
graph model. Each node of the graph will have associated with it a probability of failure ε.
To achieve fault tolerance, we add redundancy to the system. After a failure the original
working architecture is reconfigured by replacing some nodes that were being used
by redundant nodes. A good fault tolerant structure is one where the number of nodes
that need to be changed after failure is as small as possible. In this paper, we define a
measure of this adaptability, the degree of reconfigurability (DR), and analyze this measure
on a class of very regular graphs called dynamic graphs [16, 6, 7, 8]. We also analyze
a stricter measure, called the degree of reconfigurability with distance, DR d , which takes
into account the total distance between original nodes and replacing nodes. Our goal is
to investigate the relation between the structure of dynamic graphs, their reliability, and
their fault-tolerant capability as measured by their degree of reconfigurability.
The case when DR is independent of the size of the system is especially important
because it represents the situation when the amount of change necessary to repair the
system depends only on the number of failed nodes, but not on the size of the system. In
this case, we say the graph is finitely reconfigurable. Similarly, if DR d , the total distance
cost of changes is independent of the size of system, we say that it is locally reconfigurable.
Actually, in section 3, we show if the redundant system is a dynamic graph, it is locally
reconfigurable if and only if it is finitely reconfigurable. Given a desired working structure,
we will discuss what kinds of redundant structures are possible or impossible to maintain
at a fixed level of reliability, while at the same time being locally reconfigurable. In
particular, our main result is that if we wish to maintain both local reconfigurability, and
a fixed level of reliability, the dynamic graph must be of dimension at least one greater
than the application graph, which are shown in section 4 and section 5.
2 Definitions and Mathematical Framework
A VLSI/WSI array architecture can be represented as a graph G = (V, E).
Each node of the graph G can be regarded as a processor, and an edge of G is a connection
between two processors. We assume that nodes fail independently, each with
probability ε. As mentioned above, a node in our graph model can represent a PE, a
switch, or interprocessor connection.
Real working architectures are considered to be a family of graphs, G_a, called application graphs, where G_a^i = (V_a^i, E_a^i) denotes the ith application graph of G_a. For example, G_a can be a family of linear arrays indexed by the number of nodes, so G_a^n is an n-node linear array. We always assume each G_a^i is connected and that for each value of n, there exists
a unique i. Since we need to add redundant nodes or edges to increase reliability, the
embedding structures, G_r, called redundant graphs, are also represented as a family of graphs, where G_r^i = (V_r^i, E_r^i) denotes the ith redundant graph of G_r. Each pair of nodes in V_r^i is associated with a value, the distance, defined by a function D^i : V_r^i × V_r^i → N, the set of natural numbers, with D^i(a, a) = 0. This distance can be regarded as the physical distance between two nodes, or some cost, such as the communication cost.
Given two graphs G_1 = (V_1, E_1) and G_2 = (V_2, E_2), define an embedding function φ : V_1 → V_2 and let φ(V_1) be the image of V_1. Given an embedding function φ, let the mapping set S(φ) be the set of pairs {(v, φ(v)) : v ∈ V_1}; |S(φ) − S(φ')| then represents the difference between two embedding functions φ and φ'.
Given G a and G r , the following function will determine which graph in G r will be the
redundant graph of the ith application graph.
Definition 2.1 An Embedding Strategy for G_a and G_r is a function ES : G_a → G_r; i.e., if ES(G_a^i) = G_r^j, then G_r^j is the redundant graph for G_a^i.
If ES(G_a^i) = G_r^j, and k nodes of G_r^j have failed, the failed nodes and all the edges incident to them are removed and G_r^j becomes a new subgraph G'_r^j = (V'_r^j, E'_r^j). The procedure of finding a new embedding function φ_k^i into G'_r^j is called reconfiguration.
Definition 2.2 Given G_a, G_r and ES, the maximum fault-tolerance of G_a^i, MFT(G_a^i), is the maximum number of nodes that can be allowed to fail arbitrarily in ES(G_a^i) such that ES(G_a^i) still contains a subgraph isomorphic to G_a^i. In addition, FT(G_a^i) is given, which is some fixed number ≤ MFT(G_a^i) for each i.
Definition 2.3 Given G_a, G_r, ES and the fault tolerance FT(G_a^i) ≤ MFT(G_a^i) for each i, the quadruple (G_a, G_r, ES, FT) is called an Embedding Architecture, EA.
Figure 1: Example of G_a (a linear array) and G_r (the corresponding TMR array).
For example in figure 1, G a is a family of linear arrays, and G r is a family of triple-
modular-redundancy (TMR) arrays obtained by triplicating each node of a linear array
to be three nodes, called a module. Let ES(G_a^n) be the n-module array, and let its corresponding FT(G_a^n) be 2 for all n.
For simplicity, if the context is clear, we will always assume the ith application graph maps to the ith redundant graph, i.e., ES(G_a^i) = G_r^i. Let φ_0^i : G_a^i → G_r^i be the initial embedding function for the ith application graph G_a^i.
Definition 2.4 Given an Embedding Architecture, define the Initial Embedding, IE, to be a set of φ_0^i for all G_a^i in the family.
For the above example in figure 1, an initial embedding can be a set of φ_0^i such that each node of G_a^i maps to the bottom node of each module of G_r^i.
Given an embedding architecture for a G_a^i, after k nodes have failed, there may obviously be many different embedding functions φ_k^i. But the difference between S(φ_0^i) and S(φ_k^i) should be as small as possible for the purpose of real-time fault tolerance.
Suppose that the number of nodes in G_a^i is n. Given EA, IE and that k ≤ FT(G_a^i) nodes have failed, let the cost of reconfiguration of G_a^i, Δ(k, n), be the minimum of |S(φ_0^i) − S(φ_k^i)| over all the possible embedding functions φ_k^i. When there is no φ_k^i, Δ(k, n) = ∞. We also want to measure the total distance between original nodes and replacing nodes after reconfiguration. The total distance cost of reconfiguration for G_a^i, Δ_d(k, n), is similarly defined as the minimum, over all possible φ_k^i, of the total distance between each changed node φ_0^i(v) and its replacement φ_k^i(v); when there is no φ_k^i, Δ_d(k, n) = ∞. Under a given EA and IE, let DR(k, n), the Degree of Reconfigurability for G_a^i, be the maximum of Δ(k, n) over all possible failures of k nodes in G_r^i:
DR(k, n) = max over all failures of k nodes of Δ(k, n).
The Degree of Reconfigurability with distance, DR_d(k, n), is defined similarly (change Δ to Δ_d in the above equation).
Return to the example in figure 1. Let the distance between two nodes in the same
module be one, and the distance between two nodes, one in module i and the other in
module j, be |i − j|. In this case DR(k, n) and DR_d(k, n) for G_a^n are both k since, for any k ≤ FT(G_a^n), we need only change k nodes in the same modules as the k faulty nodes, and the distance between two nodes in the same module is one.
Definition 2.5 An Embedding Architecture, EA, is finitely reconfigurable (resp. locally reconfigurable) if there exists an Initial Embedding, IE, such that for all the G_a^i ∈ G_a, DR(k, n) (resp. DR_d(k, n)) can be bounded from above by a function of k but not of n.
For example, the embedding architecture for linear arrays in the example above is both LR and FR, since for each G_a^i, DR(k, n) = DR_d(k, n) ≤ k.
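To make these measures concrete, the following Python sketch (our own illustration, not part of the paper; the function name and data layout are assumptions) computes the cost and total distance cost of the natural reconfiguration for the TMR example of figure 1, where application node i is initially mapped to copy (i, 0) of module i.

# Hypothetical illustration of the cost Delta(k, n) and Delta_d(k, n) for the
# TMR example: module i holds copies (i, 0), (i, 1), (i, 2); intra-module
# distance is 1; at most FT = 2 copies per module are assumed to fail.
def tmr_reconfigure(n, faulty):
    mapping = {i: (i, 0) for i in range(n)}          # initial embedding
    delta = delta_d = 0
    for i in range(n):
        if mapping[i] in faulty:                     # the assigned copy failed
            mapping[i] = next((i, c) for c in (1, 2) if (i, c) not in faulty)
            delta += 1                               # one mapping pair changed
            delta_d += 1                             # replacement is in the same module
    return delta, delta_d

print(tmr_reconfigure(5, {(2, 0), (2, 1)}))          # k = 2 in one module  -> (1, 1)
print(tmr_reconfigure(5, {(1, 0), (3, 0)}))          # k = 2 in two modules -> (2, 2)

Both outputs stay at or below k, matching the bound DR(k, n) = DR_d(k, n) ≤ k stated above.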
We show in the following lemma that Hayes' h-FT (n + h)-node single-loop graph, which is an h-fault-tolerant graph for an n-node loop application graph, is not finitely reconfigurable.
Figure 2: Hayes' 4-FT single loop.
The nth application graph G_a^n is an n-node single loop, and the embedding strategy is to map G_a^n to its so-called Hayes' h-FT (n + h)-node single loop. Thus, G_r^n is defined by the following procedure, where we assume for this example that h is even.
1. Build a single-loop graph C_{n+h} with n + h nodes x_0, x_1, ..., x_{n+h-1}.
2. Join every node x_i of C_{n+h} to all nodes at index distance j from x_i, for all j satisfying 2 ≤ j ≤ h/2 + 1.
The resulting graph G_r^n is an h-FT (n + h)-node single loop; Hayes [4] shows that its MFT(G_a^n) = h. Let the distance between node x_i and x_j be their index distance (computed mod n + h). All the computations in the proof are based on indices mod n + h, and all the indices are in G_r. The graph in Figure 2 is an example for h = 4.
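The construction can be written down directly. The following Python sketch is an illustration only; it assumes, as reconstructed above, that the extra edges go up to index distance h/2 + 1.

def hayes_hft_loop(n, h):
    # h-FT single loop on n + h nodes for even h: node i is joined to the nodes
    # at index distance 1 (the loop itself) up to h/2 + 1, indices taken mod n + h.
    assert h % 2 == 0
    size = n + h
    edges = set()
    for i in range(size):
        for j in range(1, h // 2 + 2):
            edges.add(frozenset((i, (i + j) % size)))
    return size, edges

size, edges = hayes_hft_loop(8, 4)          # a 4-FT loop as in figure 2
print(size, len(edges), 2 * len(edges) / size)   # each node has degree h + 2 = 6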
Lemma 2.1 The above embedding architecture, mapping the n-node single loop to Hayes' h-FT (n + h)-node single-loop graph, is neither FR nor LR if h is o(n^{1/2}).
Proof: We assume there is an adversary A who always tries her best to select failures
that show that DR(k; n) is not bounded by a function of k only. No matter what the
initial - n
working nodes must be distributed among the nodes of G n
r . Define
a segment S to be a sequence of consecutively numbered working nodes
in G n
r , where x i\Gamma1 and x j+1 are non-working redundant nodes. Denote the length of the
segment S by suppose the h non-working nodes, ordered by their
indices, form the sequence
For each x i j there is a segment S j (it may
be null) starting from x i j +1 . Thus,
There must exist a segment S such that l(S
. Without
loss of generality, assume that S is from node x 1 to node x l(S ) .
The adversary can choose the middle node x d of segment S to be faulty, that is
e. Pick a reconfiguration that is optimal in the sense that the fewest possible
number of nodes in G n+h
r are changed. Let m be the number of nodes in S which are
changed in this reconfiguration, Let C be such a sequence of m nodes,
ordered by their indices. We know x d must be replaced by one node, say x
d , and if x 0
d is
a working node, it must be replaced by another node. Thus, there is a sequence ' C of
working nodes in S in this sequence of replacements, starting with x d and ending at a
working node that is replaced by the first node x r outside S . First, we divide S into
many small subsegments with length w, where
h+ 1), and represent them as a
sequence (S
d be in subsegment S
. Without loss of generality, assume
that the index of x r is larger than the largest index of a node in C; i.e., r
We claim that there must exist at least one node in C in the subsegment S
k or S
1 .
Suppose not. Let x r replace x i in C and let a and b be the two nodes connected to x i
in the initial working subgraph. Since connections must be of length at most
the distance between x i and the last node in S (and also the first node in S ) is ?
we know a and b must be in S . If a or b is not in C, say a, because a is not replaced,
x r must be connected to a after the reconfiguration. But we know that i
r ? l(S ) from the assumption, so it is impossible that x r is connected to a. Thus, we
know that a and b are in C, say that a is replaced by a 0 . Denote the sequence of original
working nodes starting from x i toward one direction in the original working subgraph by
and the sequence after reconfiguration by fx r ; a 0 ; a 0; a :g. If a 0 2 S ,
because a 0 replaces a, a 0 must be in C. Since the index of a 0 is impossible for
a 0 to be connected to x r . Thus, a 0 is not in S . In summary, we know that if x
x r 62 S , then a is in C and a 0 is not in S . Repeating the argument, using a instead of x i
and a 0 instead of x r , we can get the result that a 1 is in C and a 0
1 is not in S . Continuing
in this way, it follows that all the nodes a; a 1 ; a are in C and nodes a 0 ; a 0
are
not in S , but this is impossible, since there are only finite number of nodes in C. Thus,
our claim is correct.
We claim next that in each pair of the subsegments (S
l , S
there exists at least one node in C. We have proved that it is true for the first pair of sub-segments
Assume it is true for all the pairs of subsegments from
not in S
k\Gammaj , and S
g.
Since x d 2 C 0 , from the way that x r is chosen we know there must exist one node in C 0
which is replaced by a node outside of C 0 . If, in S
k\Gammaj+1 and S
j , there does not exist a node
in C 0 , the same argument as above results in the same contradiction. Thus, in each pair
of subsegments in S , there is at least one node which has been replaced. The number of
nodes in C must therefore be at least
number of nodes
that is an unbounded function of n need to be changed. Thus, DR(k; n) is not bounded
by a function of k only, under any initial embedding function - n
therefore the Hayes'
embedding architecture is not finitely reconfigurable. It is obvious that the total distance
between original nodes and their replacing nodes is also an increasing function of n, so
it is not LR either. 2
Our next example is an embedding architecture that is finitely reconfigurable, but not
locally reconfigurable. Choose G a as in figure 1 to be a family of linear arrays, and G r as
in figure 3 to be a family of complete graphs on a row. Let ES map G_a^n to G_r^{n+h} and let FT(G_a^n) = h, for each G_a^n in G_a. The distance between node i and node j is defined to be |i − j|. After one node has failed, say node 2, we can take any spare node to replace it, as shown in figure 3.
Figure 3: An example that is FR but not LR (initial embedding and after reconfiguration).
Lemma 2.2 If h is o(n) the above embedding architecture is FR, but not LR.
Proof: It is obvious that such an EA is finitely reconfigurable, since any spare node
can replace any other node, so that only k faulty nodes need be changed after k nodes
fail. Considering G n
a and G n+h
r , under any initial embedding, there must exist a sequence
of working nodes in G_r^{n+h} with consecutive indices of length ≥ n/(h + 1), by the same argument as in lemma 2.1. Choosing the middle node of such a path to be faulty, the distance between any spare node and the faulty node must be at least n/(2(h + 1)), so when h is o(n) the distance is an increasing function of n. Thus, this EA is not locally reconfigurable. □
3 Degree of Reconfigurability for Dynamic Graphs
In applications we are interested in graphs which are very regular and of bounded degree.
An interesting and useful class of such graphs are called dynamic graphs [16, 6, 7, 8],
which model regular systolic and wavefront arrays in a natural way. An undirected
k-dimensional dynamic graph G^k is defined by a finite digraph G^0 = (V^0, E^0), called the static graph, and a k-dimensional labeling of the edges, T^k : E^0 → Z^k. The vertex set V_x is a copy of V^0 at the integer lattice point x ∈ Z^k, and V^k is the union of all the V_x. Let a_x be the copy of node a ∈ V^0 in the vertex set V_x and let b_y be the copy of node b ∈ V^0 in the vertex set V_y. Nodes a_x and b_y are connected if (a, b) ∈ E^0 and the difference between the two lattice points y and x is equal to the labeling T^k(a, b). Therefore, the dynamic graph is a locally-finite infinite graph consisting of repetitions of the basic cell interconnected by edges determined by the labeling T^k. In figure 4, we show an example of a two-dimensional static graph G^0 and its corresponding dynamic graph G^2.
For each lattice point x, the graph with vertex set V_x and edges with both end points in V_x is called the x-th cell of G^k, denoted C_x. Given a dynamic graph, we can contract all the nodes in the same cell to one node and delete the edges totally within the cell. This contracted graph is called the cell-dynamic graph, G_c, in which two cells C_x and C_y (x ≠ y) are adjacent whenever some edge of G^k joins them. We give an example in figure 5, which is the cell-dynamic graph corresponding to G^2 in figure 4.
Given a static graph G^0, we define F_j to be the finite subgraph of G^k such that each dimension of F_j has j cells. We define the family F of k-dimensional dynamic graphs to be the set of F_j, where j ≥ 1.
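The following Python sketch (not from the paper; the arc-list format is an assumption) materializes the finite window F_j of a k-dimensional dynamic graph from a static graph and its labeling, which is convenient for experimenting with the definitions above.

from itertools import product

def dynamic_window(static_nodes, labeled_arcs, k, j):
    # labeled_arcs: list of (a, b, t) with t in Z^k, meaning copy a_x is joined
    # to copy b_{x+t} for every lattice point x inside the window.
    cells = list(product(range(j), repeat=k))        # j cells per dimension
    nodes = {(a, x) for a in static_nodes for x in cells}
    edges = set()
    for (a, b, t) in labeled_arcs:
        for x in cells:
            y = tuple(xi + ti for xi, ti in zip(x, t))
            if (b, y) in nodes:
                edges.add(frozenset(((a, x), (b, y))))
    return nodes, edges

# A two-dimensional example with a 2-node static graph (made-up labeling).
nodes, edges = dynamic_window(['a', 'b'],
                              [('a', 'b', (0, 0)), ('b', 'a', (1, 0)), ('a', 'a', (0, 1))],
                              k=2, j=3)
print(len(nodes), len(edges))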
There are different ways to define distance in dynamic graphs. For example, one
reasonable definition of the distance function D is to define the distance between two
nodes, one in vertex set V x , and the other in V y , to be the Euclidian distance in k-dimensional
space between point x and point y if x and y are in different cells, and
one if they are in the same cell. We say that a distance function D satisfies the triangle inequality if the distance between nodes a and b is less than or equal to the total distance of any path from a to b. Of course Euclidean distance satisfies it.
The following lemma will show that when the set of redundant graphs G r is a family of
Figure 4: An example of a static graph G^0 and the corresponding dynamic graph G^2.
Figure 5: The cell-dynamic graph G_c of G^2.
dynamic graphs and the distance function satisfies the triangle inequality, then any embedding architecture is LR if and only if it is FR. In the rest of the paper, we assume that D satisfies the triangle inequality.
Lemma 3.1 When G_r is a family of dynamic graphs and its distance function satisfies the triangle inequality, the embedding architecture is locally reconfigurable if and only if it is finitely reconfigurable.
Proof: Given an EA, if this EA is LR, we know by definition that the total distance
cost of any k failures can be expressed as a function f(k), where f is a function of k
only. We know the distance between any two nodes is at least one, so the number of
nodes changed must be ≤ f(k). Thus, this EA is also FR.
Suppose that it is FR. We know that for each G_a^n ∈ G_a, after k nodes have failed, at most a function of k, say f(k), nodes must be changed in the original working subgraph. Let a_1 be the node in G_a^n such that the distance in G_r^n between φ_k^n(a_1) and φ_0^n(a_1) is the maximum over all the nodes in V_a^n. Because there are at most f(k) nodes which are changed by φ_k^n, there exists a path in the application graph G_a^n with at most f(k) edges from a_1 to an unchanged node a_2, i.e., φ_k^n(a_2) = φ_0^n(a_2). Let c be the maximum distance between any two nodes connected by an edge, which is a constant independent of k and n by definition. The distance between nodes φ_0^n(a_1) and φ_0^n(a_2) is at most c · f(k) by the triangle inequality, and similarly the distance between nodes φ_k^n(a_1) and φ_k^n(a_2) is at most c · f(k). Since φ_k^n(a_2) = φ_0^n(a_2), the distance between φ_k^n(a_1) and φ_0^n(a_1) is at most 2c · f(k). Therefore the total distance of the f(k) changed nodes is at most 2c · f(k)^2, because there are at most f(k) pairs that are changed. EA is therefore locally reconfigurable from the definition. □
Finite reconfigurability is desirable in practice, especially for real-time fault tolerance,
because it shows that after k nodes have failed, at most a function of k nodes need to
be changed, independent of the size of the application graph. Lemma 3.2 will show
that the degree of reconfigurability DR provides a lower bound on the time complexity
of any distributed reconfiguration algorithm, and shows one reason this measure DR is
important. We assume in what follows that it takes one time step to send a message
through an edge.
Lemma 3.2 When G_a^i is an n-node application graph and G_r is a family of d-dimensional dynamic graphs, the time complexity of any distributed reconfiguration algorithm is Ω((DR/k)^{1/d}), where k is the number of nodes that have failed.
Proof: After k nodes have failed, we must change at least DR nodes to reconfigure.
We can assume that a distributed reconfiguration algorithm is initiated by a neighbor
node, called a source node, of each faulty node after this neighbor node has detected
the failure. We need to inform at least DR nodes in G i
r that they are assigned different
nodes in G i
a . Thus, the time to broadcast this fault information is a lower bound on the
time complexity of any distributed reconfiguration algorithm.
Let the corresponding static graph be G^0 = (V^0, E^0) and its labelling be T^d. Let c be the maximum edge distance in one dimension, i.e., the largest absolute value of any coordinate of the labelling over all edges of G^0. Let m be equal to (|V^0| × 2c)^d. We can always contract the nodes of G^d into groups of at most m nodes to obtain a d-dimensional reduced graph G'_c = (V'_c, E'_c) in which each class is adjacent only to its neighboring classes. Each node of V'_c, called a class here, represents at most m nodes of the dynamic graph. Note that m is a constant by definition.
After t time steps, one source node can inform at most (2t)^d classes in a d-dimensional reduced graph, so at most (2t)^d · m nodes have been reached. Since there are at most c_1 k source nodes, where c_1 is the maximum degree in G_r, the total number of nodes that can be informed after t time steps is at most (2t)^d · m · c_1 k. There are DR nodes that need to be informed, so t must be at least Ω((DR/k)^{1/d}). □
4 Impossibility of an LR-reliable Embedding of Dynamic
Graphs from Dimension d to d
In this section we restrict attention to dynamic graphs, and consider the relationship
between reconfigurability and reliability. In particular, we ask whether a given embedding
architecture can be finite and locally reconfigurable, and at the same time maintain
a given level of reliability. Without the constraint of being FR or LR, we can simply
construct a redundant graph to be many replications of the application graph, achieving
high reliability, but at the price of using large amounts of hardware and being difficult
to reconfigure. Our main result is Theorem 4.5: when mapping from d-dimensions to
d-dimensions, we cannot maintain both local reconfigurability and reliability simultaneously
As lemma 3.1 shows, there is no difference between local and finite reconfigurability
for dynamic graphs, and thus we consider only local reconfigurability, without the loss of
generality. We define LR-reliability in our framework as follows. Given an EA which is
LR, the probability, for each i, that G_r^i contains an isomorphic image of G_a^i is
P(G_a^i) = Σ_{k=0}^{FT(G_a^i)} C(N_i, k) ε^k (1 − ε)^{N_i − k}, where N_i = |V_r^i|.
The following definition replaces definition 2.5 in the statistical case.
Definition 4.1 An Embedding Architecture is LR-reliable with reliability β if P(G_a^i) ≥ β for all the G_a^i ∈ G_a.
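As a numerical illustration (our own sketch, not from the paper; the parameter values are arbitrary), the survival probability written above can be evaluated directly:

from math import comb

def lr_survival_probability(N_i, FT, eps):
    # Binomial sum: probability that at most FT of the N_i redundant nodes fail,
    # each independently with probability eps.
    return sum(comb(N_i, k) * eps**k * (1 - eps)**(N_i - k)
               for k in range(FT + 1))

# With FT held constant, the probability collapses as N_i grows, which is the
# phenomenon exploited in lemma 4.1 below.
for N in (50, 200, 1000):
    print(N, lr_survival_probability(N, FT=3, eps=0.05))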
The following lemma is useful in what follows.
Lemma 4.1 Given G_a, G_r and ES, for each i, let MFT(G_a^i) be the maximum number of failures that allows the corresponding EA to be LR. If this MFT is upper-bounded by a constant as n → ∞, there exists a constant β such that EA cannot be LR-reliable with reliability β.
Proof: Let the upper bound on MFT be c. By the definition of MFT in the hypothesis of the lemma, there exist c + 1 nodes in the redundant graph G_r^i such that after they have failed, for any IE, EA cannot be LR. Therefore P(G_a^i) ≤ Σ_{k=0}^{c} C(N_i, k) ε^k (1 − ε)^{N_i − k}. We know n can be chosen large enough to make c + 1 < εn, so the term with k = c is the largest in the summation. Thus, the probability P(G_a^i) is at most (c + 1) C(N_i, c) ε^c (1 − ε)^{N_i − c}, and it is obvious that when n goes to ∞, P(G_a^i) goes to 0. Thus, for some i, we can always pick a β > P(G_a^i). Therefore, such an Embedding Architecture cannot be LR-reliable with reliability β. □
We want to study some properties of dynamic graphs if we insist on local reconfigurability
after some nodes have failed, since local reconfigurability is desirable in practical
implementations. The following lemma tells us that one-dimensional dynamic graphs
cannot be LR-reliable when the application graphs are linear arrays.
Lemma 4.2 When G_a is a family of one-dimensional linear arrays and G_r is a family of one-dimensional dynamic graphs, there exists a constant β such that no Embedding Architecture is LR-reliable with reliability β.
Proof: As in the proof of lemma 3.2, we can always build a reduced graph G'_c by contracting sets of at most m nodes in G_r^n to produce a one-dimensional linear array.
Figure 6: Example of a 2-dimensional 16-node web.
Each node of G'_c now represents a class of a finite number of nodes. Note that m is a constant, since G^0 is a finite graph by definition.
For any initial embedding, the n nodes of G_a^n are distributed into at least n/m contiguous classes in G'_c. If the adversary chooses all the nodes in the middle class of the above n/m classes to be faulty, the initial working subgraph is separated into two halves. We must shift at least half of the image of G_a^n and therefore Ω(n) nodes to get a new working subgraph. Thus, if an embedding architecture is locally reconfigurable, its FT must be bounded by the constant m. From lemma 4.1, we know there exists a constant β such that EA cannot be LR-reliable with reliability β. □
To generalize lemma 4.2, we define an n^d-node d-dimensional web to be a d-dimensional graph whose vertices are the lattice points (x_1, ..., x_d) with 0 ≤ x_i < n and whose edges join any two vertices that differ by at most one in every coordinate. Thus, we connect all adjacent points in d-dimensional Euclidean space. For example, figure 6 shows a 2-dimensional 16-node web. The family of d-dimensional webs is indexed by n.
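For experimentation, the web's edge set can be generated directly; the following Python sketch is an illustration under the "differ by at most one in every coordinate" reading given above.

from itertools import product

def web_edges(n, d):
    # Edges of the n^d-node d-dimensional web (diagonals included).
    points = list(product(range(n), repeat=d))
    edges = set()
    for p in points:
        for delta in product((-1, 0, 1), repeat=d):
            if all(x == 0 for x in delta):
                continue
            q = tuple(pi + di for pi, di in zip(p, delta))
            if all(0 <= qi < n for qi in q):
                edges.add(frozenset((p, q)))
    return edges

print(len(web_edges(4, 2)))   # the 16-node web of figure 6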
Theorem 4.3 If G_a is a family of d-dimensional webs and G_r is a family of d-dimensional dynamic graphs, there exists a constant β such that no Embedding Architecture is LR-reliable with reliability β.
Proof: We can always find a d-dimensional reduced graph G'_c = (V'_c, E'_c) by contracting the dynamic graph G_r^n as we did in the proof of lemma 3.2. Without loss of generality, we consider the most general case with all possible edges present, where V'_c ⊆ Z^d and two classes are joined whenever they differ by at most one in every coordinate.
Figure 7: The n paths in the proof of theorem 4.3.
Each node of V'_c represents a class of m nodes of G_r^n, where m is the constant in the proof of lemma 3.2.
First, we prove that there cannot be an embedding strategy that maps a d-dimensional
web to (d \Gamma 1)-dimensional dynamic graph. Suppose first an n \Theta n two-dimensional lattice
is projected to a one-dimensional dynamic graph. Among the n 2 nodes in the web, the
vertices on the path from vertex (0; 0) to (0; must be projected to at most n
consecutive classes. Similarly, each of the n paths horizontally from (0;
and vertically to the diagonal vertices must be
projected to at most n consecutive classes. We show these n paths in figure 7. Thus, all
the nodes on the paths must be in at most 2n classes, and there must exist one class
to which at least n=4 nodes are mapped. This is impossible, since each class only has
finite number of nodes. The same argument can be generalized easily to d-dimensional
lattices. Thus, we can restrict attention to the possibility of mapping a d-dimensional
web mapping to a d-dimensional dynamic graph.
We say a class in G'_c is empty if there is no working node in it. In the application graph the nodes which are adjacent must be mapped to one or to adjacent classes. It is not hard to see that in the initial embedding there cannot be an empty class surrounded by
Figure 8: The inner central class in the proof of theorem 4.3 (figure labels: images of lines along the x and y dimensions of a 2-d web, a line between an inner central node and the border, the inner central class, 2m classes).
non-empty classes. Consider a line of n nodes in the n^d-node d-dimensional web, as in the proof of lemma 4.2. For any initial embedding these n nodes are distributed into at least n/m classes that are linearly connected in G'_c. These images of lines may zig-zag in G'_c, but must map to at least n/m contiguous classes. Therefore, there is a well-defined inner central class which is Ω(n/m) classes away from the border in the image of the web, as shown in figure 8. Note that a line between the inner central class and the border may not be the image of a line along one dimension in the web, but the line must contain Ω(n) nodes of the web, as figure 8 shows.
If the adversary chooses all the nodes, at most m, in the inner central class to be faulty, the original working subgraph has a central inner hole. We must move Ω(n) nodes in one direction to get a new isomorphic subgraph in G_r^n. Therefore, to maintain local reconfigurability, for any embedding architecture, FT must be upper-bounded by m. From Lemma 4.1, we then know there exists a constant β such that EA cannot be LR-reliable with reliability β. □
We next modify the application graph so that each node x = (x_1, ..., x_d) is connected only to the nodes that differ from x by exactly one in a single coordinate. We call such a d-dimensional graph a d-dimensional orthogonal lattice. To develop intuition for the gen-
Figure 9: A pseudo hole (legend: interior of the image, non-empty class, empty class).
eral case of d-dimensional dynamic graphs, the following lemma extends theorem 4.3 to
two-dimensional orthogonal lattices.
Lemma 4.4 If G_a is a family of two-dimensional orthogonal lattices and G_r is a family of two-dimensional dynamic graphs, there exists a constant β such that no embedding architecture is LR-reliable with reliability β.
Proof: As in the proof of theorem 4.3, we know that a two-dimensional orthogonal
lattice cannot be embedded in a one-dimensional dynamic graph (we made no use of
diagonal edges in that proof). Without diagonal edges, however, the rest of the proof is
a bit more complicated.
An image of an application graph can be regarded as a polygon. We say an embedding
in G 0
c has a hole of size k, if there exist k consecutive empty classes in a line along one
dimension which are inside the polygon and surrounded by non-empty classes. Thus, the
example in figure 9 is excluded from our definition of hole.
We claim that after any embedding of a two-dimensional orthogonal lattice in a two-dimensional
dynamic graph, it is impossible that there is a hole of size 2. Assume our
claim is false, and denote the empty classes in a hole of size 2 by A and B. Index the
nodes in the two-dimensional orthogonal lattice G a by x ij . For notational convenience,
choose the origin so that x 00 is a particular node which is mapped to the nonempty class
immediately above A in G 0
c . We will refer to the vertical line in G a passing through x ij
as the vertical line Lx i .
Figure 10: The image of the vertical lines Lx_0 and Lx_1.
We have the following observations about the images in G 0
c of vertical lines in the
orthogonal lattice G a . First, the images of the vertical lines Lx i and Lx i+1 cannot be
more than one class apart along one dimension. Because the image of each pair of nodes
x i+1;j is in the same class or adjacent classes, this follows by induction on j.
Second, the vertical line Lx 0 and Lx 1 (resp. Lx 0 and Lx \Gamma1 ) must pass on the same side
of A and B, as in figure 10, since there is no edge passing between A and B. According
to the above two observations, by induction on i, all the vertical lines Lx i must be on the
same side of A and B (either left or right), so A and B cannot be in the interior of the
image of G a . This contradiction proves that it is impossible to have a hole of size two.
As we did in theorem 4.3, the adversary can choose the two inner central classes in one
dimension to be faulty, and as before, there is no way to reconfigure G r so that those two
faulty classes are surrounded by non-empty classes. Thus, we must move Ω(n) nodes in one dimension to get a new working subgraph. □
Finally, we can extend this result to d dimensions. The line containing classes A
and B will be replaced by a (d \Gamma 1)-dimensional hyperplane in a d-dimensional dynamic
graph.
Theorem 4.5 If G_a and G_r are families of d-dimensional dynamic graphs, there exists a constant β such that no embedding architecture can be LR-reliable with reliability β.
Proof: Given an application graph G a which is a dynamic graph, a reduced graph can
be built as before. Since the application graph is connected and a class is connected
only to its neighboring classes, there exists at least one edge along each dimension from
one class to its neighboring class. Therefore, any d-dimensional reduced graph contains
a subgraph which is isomorphic to a d-dimensional orthogonal lattice. We therefore
need only prove the theorem for the case of the application graph being a family of d-dimensional
orthogonal lattices. Again, the proof of theorem 4.3 shows that d-dimensional orthogonal lattices cannot be embedded in (d − 1)-dimensional dynamic graphs.
We claim that it is impossible that there exists a hole of size 2^{d−1} in a hyperplane H of (d − 1) dimensions (one coordinate is fixed) in the reduced graph. Assume our claim is false. Call the above 2^{d−1} classes an obstacle O. The obstacle is composed of two empty classes along each of the (d − 1) dimensions in H. Call the fixed dimension of H
"vertical." By the same reasoning as in lemma 4.4, no vertical lines can pass through the
obstacle O, and the images of any two adjacent vertical lines must lie on the same side of
the obstacle O in the reduced graph. Therefore, the obstacle cannot be in the interior of
the reduced graph, so our claim is correct. The adversary then chooses the inner central 2^{d−1} classes in H to be faulty. There is no way to reconfigure the redundant graph such that those faulty classes are surrounded by non-empty classes. Thus, we must change Ω(n) nodes in one dimension to get a new isomorphic subgraph. □
5 Possibility of an LR-reliable Embedding of Dynamic
Graphs from Dimension d to d+1
Finally, we want to show that we really can embed d-dimensional dynamic graphs in (d + 1)-dimensional dynamic graphs, while maintaining any desired high reliability and local reconfigurability. We begin with the one-dimensional case.
Lemma 5.1 When G_a is a family of linear arrays, there exists an Embedding Architecture where G_r is a family of two-dimensional dynamic graphs which can be LR-reliable with any given β.
Figure 11: An LR-reliable 2-dimensional dynamic graph.
Proof: We prove this by constructing a redundant graph G_r^n for an n-node linear array G_a^n, as shown in figure 11. G_r^n has n columns and each column has s nodes. Let FT(G_a^n) = s − 1. The initial embedding allocates each node of G_a^n to a distinct column of G_r^n, i.e., let the initial isomorphic subgraph be the sequence (0, 0), (1, 0), ..., (n, 0). If one node (i, 0) has failed, we choose (i, 1) as the replacing node, and if nodes (i, 0) and (i, 1) have failed, we use (i, 2), (i − 1, 1) and (i + 1, 1) to replace nodes (i, 0), (i − 1, 0) and (i + 1, 0), and so on. By using the above reconfiguration procedure, we change at most 2k − 1 nodes after any k failures, so G_a^n with respect to such an EA and IE is locally reconfigurable.
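One way to picture the procedure is the following Python sketch (our own reading of the proof, not the paper's code); it assumes the worst case in which the k lowest cells of a single column i fail and that the redundant graph of figure 11 joins a cell to the cells of the neighbouring columns that are at most one row away.

def pyramid_mapping(n, i, k):
    # Initial mapping is j -> (j, 0); column i has lost rows 0..k-1, so its node
    # climbs to row k and each neighbouring column is lifted just enough to stay
    # adjacent, giving a "pyramid" of 2k - 1 changed assignments.
    mapping = {}
    for j in range(n):
        lift = max(0, k - abs(j - i))
        mapping[j] = (j, lift)
    return mapping

m = pyramid_mapping(9, 4, 3)
print(sum(1 for j in m if m[j] != (j, 0)))   # 2*3 - 1 = 5 changed nodes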
We now want to show that, given β, we can find an s and G_r^n with the desired properties. Let the block B be a square piece of G_r^n, an n × n dynamic graph. Let p(n) be the probability that B contains G_a^n. We form a vertical pile of s/n such blocks to obtain an s × n dynamic graph as in figure 12. After we connect each two adjacent squares, the resulting graph is the same as G_r^n.
Since connections between two squares can only increase the reliability, the probability that there does not exist a working linear array in this big graph is at most (1 − p(n))^{s/n}. For
Figure 12: A pile of square blocks B for the proof of lemma 5.1.
any c, if s > c · n · log n / p(n), the above probability will be < 1/n^c. Therefore, for any reliability β, we can find a sufficiently large s to achieve reliability β. □
We can now prove the main result in this section.
Theorem 5.2 When G_a is a family of d-dimensional dynamic graphs, there exists an embedding architecture where G_r is a family of (d+1)-dimensional dynamic graphs, which can be LR-reliable with any given β.
Proof: As before, we construct a reduced graph from the given dynamic application
graph G a . The most general form of a reduced graph is a web. Thus, without loss of
generality, we need only prove the theorem for the case of the application graph being
a family of d-dimensional webs. We can use the same construction and reconfiguration
method as we did in the previous lemma. □
From the above reconfiguration method, after k ≤ FT(G_a^n) nodes have failed, we need to change at most 2k nodes. The following corollary shows that, by increasing the edge degree, we can reduce this to exactly k nodes.
Corollary 5.3 When G_a is a family of linear arrays, there exists an embedding architecture where G_r is a family of two-dimensional dynamic graphs with edge degree 4m, where m is any constant ≥ 2, such that after any k ≤ FT(G_a^n) nodes have failed, we only need to change k nodes.
Figure 13: Dynamic graph construction for corollary 5.3.
Proof: First construct the dynamic graph as shown in figure 13, where there are s nodes
in each column: each node (i; j) connects to (i
m).
The reconfiguration method is the same as in lemma 5.1. Let FT(G_a^n) be chosen as before for each G_a^n in the family, and allocate nodes of G_a^n to different columns as before. The number of nodes which need to be changed after k nodes in one column have failed is at most ⌈k/m⌉ × 2 − 1. This is the worst case, so DR(k, n) = max(⌈k/m⌉ × 2 − 1, k) = k for m ≥ 2. □
Similar constructions work for d dimensions.
6 Conclusions and Open Problems
Our main result is that it is difficult for dynamic graphs to maintain both local reconfigurability
and a fixed level of reliability. More precisely, the dynamic graph must be of
dimension at least one greater than the application graph to have both properties.
The problem of considering the tradeoffs among the size of redundant graphs (the
number of edges), reconfigurability, and reliability needs to be studied further. A class
of simple layered graphs with a logarithmic number of redundant edges is proposed
in [19] which can maintain both finite reconfigurability and a fixed level of reliability
for a wide class of application graphs. By sacrificing finite reconfigurability, they also
construct highly reliable structures with the asymptotically optimal number of edges
for one-dimensional and tree-like array architectures. However, the redundant graphs
resulting from the constructions are not dynamic graphs. It would be interesting to
consider the construction of redundant graphs that are restricted to be dynamic graphs,
which are more easily implemented than less regular graphs.
--R
"Diogenes: A methodology for designing fault-tolerant VLSI processing arrays,"
"Digital signal processing applications of systolic algorithms,"
"Configuration of VLSI arrays in the presence of defects,"
" A graph model for fault-tolerant computing systems, "
"Distributed reconfiguration and recovery in the advanced architecture on-board processor,"
"Testing for cycles in infinite graphs with periodic struc- ture,"
"Planarity testing of doubly periodic infinite graphs,"
"A semiring on convex polygons and zero-sum cycle problems,"
" Why systolic architectures?"
"Fault tolerant VLSI systolic arrays and two-level pipelines,"
"Wavefront array processor: Languages, architecture, and applications,"
"Fault-tolerant array processors using single track switches,"
"A scalable architecture for lattice-gas simulation, "
"Wafer-scale integration of systolic arrays,"
"Some problems on dynamic/periodic graphs,"
"Efficient algorithms for reconfiguration in VLSI/WSI arrays,"
"Reconfiguration architecture for VLSI processing ar- rays,"
"Explicit Constructions for Reliable Reconfigurable Array Architectures"
--TR
Configuration of VLSI Arrays in the Presence of Defects
Testing for cycles in infinite graphs with periodic structure
VLSI array processors
Fault-Tolerant Array Processors Using Single-Track Switches
A scalable architecture for lattice-gas simulations
Efficient Algorithms for Reconfiguration in VLSI/WSI Arrays
A semiring on convex polygons and zero-sum cycle problems | systolic arrays;bounded-degree graphs;wavefront arrays;fault tolerant computing;reconfigurable architectures;reconfigurability;finitely reconfigurable;reliable arrays;dynamic graphs;locally reconfigurable;time complexity;lower bound;fault-tolerant redundant structures;application graph;reliability |
626779 | Fault Injection and Dependability Evaluation of Fault-Tolerant Systems. | The authors describe a dependability evaluation method based on fault injection that establishes the link between the experimental evaluation of the fault tolerance process and the fault occurrence process. The main characteristics of a fault injection test sequence aimed at evaluating the coverage of the fault tolerance process are presented. Emphasis is given to the derivation of experimental measures. The various steps by which the fault occurrence and fault tolerance processes are combined to evaluate dependability measures are identified and their interactions are analyzed. The method is illustrated by an application to the dependability evaluation of the distributed fault-tolerant architecture of the Esprit Delta-4 Project. | Introduction
The evaluation of a fault tolerant system is a complex task that requires the use of different
levels of modeling (axiomatic, empirical and physical models) and related tools [1]. A large
number of studies (e.g., see [2-4]) have shown the prominent influence of the efficiency of the fault tolerance algorithms and mechanisms (FTAMs) on the dependability of a wide range of
systems and architectures. Determination of the appropriate model for the fault tolerance
process and proper estimation of the associated coverage parameters are therefore essential in
any dependability evaluation study.
Compared to other possible approaches such as proving or analytical modeling, fault-injection
is particularly attractive [5-13]. By speeding up the occurrence of errors and failures,
fault injection is a method for testing the FTAMs with respect to their own specific inputs: the
faults that they are intended to tolerate.
This work was performed within the framework of PDCS, ESPRIT Basic Research Action n- 3092, (Predictably
Dependable Computing Systems). Some aspects of this research were put into practice on the testbed architecture
developed as part of the implementation validation activity of the ESPRIT Precompetitive Project n- 2252 Delta-4
(Definition and Design of an open Dependable Distributed system architecture) with the support of a Grant awarded by
the Midi-Pyrénées Regional Authority.
The authors are with the Laboratoire d'Automatique et d'Analyse des Systèmes du Centre National de la Recherche
Scientifique (LAAS-CNRS) Toulouse, France. Jean Arlat was holding the Toshiba Endowed Chair at the Tokyo
Institute of Technology, Japan, during the preparation of the final manuscript of this paper.
As pointed out in [14], fault injection addresses both dimensions of FTAM validation: fault
removal and fault forecasting [15-16]. With respect to the fault removal objective, fault
injection is explicitly aimed at reducing, by verification, the presence of FTAM design and
implementation faults. Since such faults can cause incorrect behavior of the FTAMs when they
are faced with the faults they are intended to handle, we call them fault-tolerance deficiency
faults (in short, ftd-faults). From the verification viewpoint, fault injection therefore aims to
reveal such ftd-faults and to determine appropriate actions to correct the design or
implementation of the FTAMs. In the case of fault forecasting, the main issue is to rate, by
evaluation, the efficiency of the operational behavior of the FTAMs. This type of test thus
constitutes primarily a test of the FTAMs with respect to their overall behavioral specification.
In practice, this means estimating the parameters that characterize the operational behavior of
the FTAMs: coverage factors, dormancy, latency, etc.
Both dimensions are of interest for validating the FTAMs. The relationships and
complementarity between these two objectives, as well as the main characteristics of the ftd-
removal objective, are addressed in [14, 17, 18]. This paper focuses on the fault
forecasting objective.
The fault tolerance coverage estimations obtained through fault injection experiments are
estimates of conditional probabilistic measures characterizing dependability. They need to be
related to the fault occurrence and activation rates to derive overall measures of system
dependability. Such a necessary relationship is - at least conceptually - well established.
However, few studies consider its actual incorporation into the dependability evaluation of real
fault-tolerant systems. Among the most significant related studies, see the work reported in
[19], the ESS, SIFT and FTMP validation processes depicted in chapters 12, 16 and 17 of [20]
and, more recently, the evaluation of the MAFT architecture presented in [21].
This paper describes a dependability evaluation method based on fault injection that
establishes the link between the experimental evaluation of the coverage of the fault tolerance
process and the fault occurrence process. The paper also illustrates the application of the
method to the evaluation of a real system. Such an experiment-based evaluation method
combining fault injection experiments and analytical evaluation has been - along with formal
protocol verification activities - the central point in the validation of the distributed fault-tolerant
architecture of the ESPRIT Delta-4 Project (see [22] for a global description of the
validation tasks). Markov-based modeling and evaluation, and especially sensitivity analysis of
the impact of the coverage parameters (both coverage factors and latencies), helped to identify
the most significant parameters to be estimated from the fault injection experiments.
Conversely, the experiments not only made it possible to obtain the range of values for the
coverage parameters used in the analytical models, but also helped in the validation and refinement
of these models. In particular, the models were refined to capture specific behaviors
revealed by the experiments.
More recently, the study presented in [23] described an example of cross-fertilization
between experimental evaluation and analytical modeling. However, that study relied more on
the analysis of recorded field data than on fault injection experiments. The physical fault
injection experiments carried out on the Delta-4 prototype testbed made it possible to iterate the
evaluation process for validating the design assumptions (e.g., the fail-silence assumption) and
thus had an impact - albeit during the final phases - on the development of the Delta-4
architecture.
The paper defines and analyzes the relationships between experimental and analytical
dependability evaluation. The results obtained in the case of the evaluation of a real system
provide practical examples of such relationships. The remainder of this paper consists of four
sections. Section II depicts the main characteristics of a fault injection test sequence aimed at
evaluating the fault tolerance process. This section - adapted and extended from [24] -
summarizes some definitions and results that are necessary for the understanding of the
developments presented in the next section. Section III describes the main steps of the
integration of the fault occurrence and fault tolerance processes that were defined and fully
detailed in [25]. Section IV applies the method to the dependability evaluation of the Delta-4
distributed fault-tolerant architecture. Section V concludes the paper.
II. Experimental Evaluation of Fault Tolerance
The proposed experimental evaluation method embodies the concept of a fault injection test
sequence, characterized by an input domain and an output domain.
The input domain corresponds to a set of injected faults F and a set A that specifies the data
used for the activation of the target system and thus, of the injected faults.
The output domain corresponds to a set of readouts R that are collected to characterize the
target system behavior in the presence of faults and a set of measures M that are derived from
the analysis and processing of the FAR sets.
Together, the FARM sets constitute the major attributes that can be used to fully characterize a
fault injection test sequence. In practice, the fault injection test sequence consists of a series of
experiments; each experiment specifying a particular point in the {FxAxR} space.
A . Characterization of a Fault Injection Test Sequence
During each experiment in a fault injection test sequence, a fault from the F set is injected
that, in conjunction with the activity of the target system determines an error pattern
that constitutes a test input for the FTAMs to be validated. For increased confidence in the
estimates obtained, it is necessary to carry out a large number of experiments. For minimum
bias in the estimation, it is further recommended to select both F and A sets by statistical
sampling among the expected operational fault and activation domains of the target fault tolerant
system. Further issues concerning the combination of the F and A sets to produce error
patterns are discussed in detail in [14]; we focus here on the R and M sets characterizing the
experimental evaluation process.
The readouts collected in R during an experiment contribute to a characterization of the state
of the target system. This is achieved by way of the assertion or not of a set of predicates that
are meant to abstract the specification of the behavior of the target system and thus of the
FTAMs under test. Typical examples of predicates are: {fault_activated}, {fault_activated &
{error_signalled & proper service delivered}. Such predicates or their
combinations define the set of vertices of a graph that models the behavior of the target system
(or of the FTAMs) in the presence of faults. This graph can be either established a priori to
describe anticipated behaviors or obtained a posteriori from the analysis of the R set, which is a
form of model extraction from the experimental results (e.g., see [12]).
Figure 1 gives an example of such a graph. Transition 1 corresponds to the activation of an
injected fault as an error; the associated time defines the fault dormancy. Transition 2
represents the situation where an injected fault is not activated; such an experiment is not
significant when FTAM coverage is evaluated with respect to error patterns (resulting from
activated faults) rather than with respect to the faults injected. Transition 3 depicts the case of a
detected error; the associated time characterizes the latency of error detection. Transition 4
represents the case where an error is apparently tolerated although it was not detected whereas
transition 6 depicts the (normal) situation where the error is tolerated after having been
detected. Transitions 5 and 7 distinguish the cases of failure of the detection and tolerance
mechanisms. This graph depicts the faulty behavior observed during the experiments carried
out on the Delta-4 architecture. In particular, transition 4 characterizes a singular behavior, that
is not always easy to diagnose in practice since it may result from either (i) an activated fault
that remains hidden (latent) or (ii) a propagated error that is tolerated or that is eliminated by
some other - unobserved - mechanism.
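As an illustration of how such a graph can be extracted from the readouts, the following Python sketch (hypothetical; the field names are assumptions about what the R set records) classifies one experiment into the transitions of figure 1.

def classify(readout):
    # Map one experiment's readouts to the observed transitions of figure 1.
    if not readout["fault_activated"]:
        return "fault not activated (transition 2)"
    if readout["error_detected"]:
        return ("detected and tolerated (3, 6)" if readout["service_ok"]
                else "detected, not tolerated (3, 7)")
    return ("tolerated without detection (4)" if readout["service_ok"]
            else "failure with undetected error (5)")

experiments = [
    {"fault_activated": True,  "error_detected": True,  "service_ok": True},
    {"fault_activated": True,  "error_detected": False, "service_ok": True},
    {"fault_activated": False, "error_detected": False, "service_ok": True},
]
for r in experiments:
    print(classify(r))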
Figure 2 further illustrates the types of predicates and system state transitions that can be
deduced from the readout set R, in the case of a single binary predicate p; three principal cases
are accounted for, depending on whether the predicate is expected (i) to maintain its value for
the whole interval that defines the observation domain for an experiment
(figure 2-a), or (ii) to change value once (figure 2-b) or several times (figure 2-c) during the
experiment.
A typical example of figure 2-a is the case of a reliability or availability predicate
characterizing the continuity of service delivery in the presence of faults (e.g., fault masking):
{ acceptable_results_delivered } and { erroneous_result_delivered }
The testability property, for which an error must be signalled whenever a fault is present, is
a possible example for figure 2-b:
{ error_signalled } and { error_not_signalled } (1)
Figure 2-c provides an example for the test of a fail-safe property defined as:
{ fault_not_activated ∨ error_signalled } and
{ fault_activated ∧ error_not_signalled }
where ∨ and ∧ denote respectively the OR and AND connectives.
This corresponds to an alternating behavior between graph vertices v 0 and v 1 that may be
described by the decomposition of the predicate p into two elementary predicates of the types
shown in figure 2-b:
{ fault_activated } and { error_signalled }
where ¬ is the NOT operator.
The observation of the instant of assertion of a predicate characterizes the temporal
performance of the FTAM under test; in particular for the predicate of figure 2-b, relation (1)
can be modified to:
Since relevant timing measurements are related to the instant of fault occurrence, it is simpler to
consider hereafter that the observation domain T is defined by the interval [0, T].
B . Definition of Experimental Measures
We only summarize here the major experimental measures that can be derived from a fault
injection test sequence.
Let T_p denote the random variable characterizing the instant of assertion of a predicate p; then the cumulative distribution function of the coverage (with respect to predicate p) can be defined as:
C(t) = Prob{ T_p ≤ t }
Other related studies (e.g., see [26]) focus on the probability density function of the coverage.
Both approaches are equivalent in principle, however, we advocate the use of the cumulative
function as this facilitates the relationship with analytical models: the asymptotic value simply
tends towards the constant coverage parameters usually used in these models.
Two principal constraints have to be considered in the derivation of experimental measures.
First, it is worth noting that C(t) is usually defective (e.g., see [3]) since all the faults cannot be properly covered, thus its asymptotic value is less than or equal to one, i.e.:
C(∞) = lim_{t→∞} C(t) ≤ 1
Also, the observation domain T is bounded and the readouts obtained from the experiments
form a set of so-called Type I (or time) censored data (e.g., see [27], p. 248); the unobserved
times are known only to be above the upper bound T (censoring time) of the observation
domain. The characteristics of the considered target system and especially the temporal
parameters of the FTAMs to be evaluated have a direct impact on the determination of T. The choice of T relies on a careful analysis of the a priori (partial) information available concerning
the temporal parameters of the FTAMs and may necessitate a set of preliminary experiments for
its proper adjustment.
The combination of these two constraints results in a total uncertainty for the experiments
for which no outcome (predicate assertion) is observed. Indeed, either the assertion would
occur in a finite time beyond T or the assertion is not true for that experiment (which denotes a
coverage deficiency). These implications are further analyzed in the following sub-sections.
Estimation of the Coverage Function
Consider a test sequence of n independent fault injection experiments; in each experiment, a
point in the {FxA} space is randomly selected according to the distribution of occurrences in
{FxA} and the corresponding readouts collected. If t pi denotes the instant of assertion of p
for experiment i, let e_i(t) denote the random variable defined by:
e_i(t) = 1 if the assertion of p is observed in [0, t] during experiment i (i.e., t_pi ≤ t), and e_i(t) = 0 otherwise.
The number of assertions of p cumulated within the time interval [0, t] can thus be expressed as:
N(t) = Σ_{i=1}^{n} e_i(t)
and the coverage function C(t) can be simply estimated by:
Ĉ(t) = N(t) / n
The asymptotic coverage is estimated by Ĉ(∞) = N(T) / n. Due to the monotonically increasing behavior of C(t) and to the finite restriction of the observation domain, this estimation is always pessimistic. Furthermore, as C(t) is defective, another interesting measure corresponds to the conditional coverage, expressed as:
C̃(t) = Prob{ T_p ≤ t | T_p < ∞ } = C(t) / C(∞)
This experimental conditional coverage refers also to the conditional distributions defined for the coverage model presented in reference [3].
If T'_p designates the random variable characterizing the non-infinite coverage times (non-infinite instants of assertion of p), then T'_p can be described by the distribution Prob{ T'_p ≤ t } = C(t) / C(∞), which is estimated by N(t) / N(T).
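These estimators translate directly into code. The following Python sketch (an illustration only, using synthetic data) computes the estimate of C(t), the pessimistic asymptotic coverage N(T)/n, and the conditional estimate N(t)/N(T) from a set of censored observations.

import random

def coverage_estimates(times, T, grid):
    # times[i] is the observed assertion instant t_pi, or None when the predicate
    # was not asserted before the censoring time T.
    n = len(times)
    observed = sorted(t for t in times if t is not None)
    def N(t):                                   # cumulated assertions N(t)
        return sum(1 for x in observed if x <= t)
    C_hat = [N(t) / n for t in grid]            # estimate of C(t)
    C_inf = len(observed) / n                   # asymptotic coverage estimate N(T)/n
    C_cond = [c / C_inf if C_inf else 0.0 for c in C_hat]   # conditional estimate
    return C_hat, C_inf, C_cond

random.seed(0)
T = 5.0
times = []
for _ in range(1000):
    t = random.expovariate(1.0)
    times.append(t if t <= T and random.random() < 0.95 else None)  # ~5% never covered
print(coverage_estimates(times, T, [1.0, 2.0, 5.0]))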
Estimation of the Mean Coverage Time
The mean coverage time is defined as t̄ = ∫_0^∞ t dC(t). The two constraints identified previously also complicate the estimation of t̄; three types of estimators can be considered:
t̂_1 = (1 / N(T)) Σ_{i=1}^{n} t_pi e_i(T)     (10)
t̂_2 = (1 / n) [ Σ_{i=1}^{n} t_pi e_i(T) + (n − N(T)) T ]     (11)
t̂_3 = (1 / N(T)) [ Σ_{i=1}^{n} t_pi e_i(T) + (n − N(T)) T ]     (12)
The first estimator given by expression (10) corresponds to the estimation of the mean of the
coverage times actually observed. It is thus an estimator of E[T' p ], i.e., of the mean of the
conditional coverage time.
The second estimator defined by expression (11) estimates the random variable min(T p, T).
It has been modified to assign a time T (i.e., the upper bound of the observation domain) to
each of the [n - N(T)] experiments for which the assertion of p was not observed.
The third estimator (expression (12)) corresponds to the estimator typically used when dealing
with time-censored exponentially distributed test data (e.g., see [28], pp. 105-106) or with the
estimation of the Mean Time to First Failure (MTFF) [29].
It is worth noting that the first estimator constitutes an "optimistic"
estimation of the mean coverage time. However, the fact that C(t) is defective prevents
conclusions being drawn about the bias induced by the other estimators. We therefore selected
the first estimator.
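For completeness, the three candidate estimators can be computed as follows (a sketch of our own, using the same censored-data convention as above, where None marks an experiment with no observed assertion before T).

def mean_coverage_time(times, T):
    n = len(times)
    observed = [t for t in times if t is not None]
    N_T = len(observed)
    total = sum(observed)
    t1 = total / N_T                             # (10): mean of the observed times
    t2 = (total + (n - N_T) * T) / n             # (11): censored experiments counted as T
    t3 = (total + (n - N_T) * T) / N_T           # (12): censored-data MTTF-style estimator
    return t1, t2, t3

print(mean_coverage_time([0.4, 1.2, None, 0.7, None], T=5.0))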
III. Integration of Experimental Measures of Fault Tolerance with
the Fault Occurrence Process
In this section we first identify the main interactions between analytical dependability
modeling and experimental evaluation. We then present a framework for characterizing the
relationship between the experimental estimates obtained in a fault injection test sequence and
the coverage parameters usually considered to account for FTAM behavior in Markov chain
models. An example is given to illustrate the respective impact on dependability evaluation of
asymptotic coverage and coverage distribution.
A . Bridging the Gap between Analytical Modeling and Fault Injection
Figure
3 depicts the principal phases of analytical dependability evaluation and experimental
dependability evaluation based on fault injection that rely respectively on the construction and
the processing of either axiomatic models (sequence 1-2-4-6), or empirical and physical
models (sequence 1-3-5-7).
Of course, both sequences may be used separately to impact the target system (e.g.,
parameter sensitivity analysis for early architectural design decisions in the case of model-based
evaluation or as a design aid for fault removal in the case of fault injection-based experimental
testing). However, we would like to stress here the benefits that can be obtained from the
interactions between these two sequences. For sake of conciseness, we will emphasize only the
most significant interactions (identified by bold arrows in figure 3).
The transition from 2 to 5 depicts the necessary impact of modeling on the definition of the
readouts in the R set and the determination of the measures in the M set. In particular, one
impact represented by this transition may be that of considering the relative ratios of the
occurrence rates of different fault classes in order to refine the general estimators of the
coverage function given in section II (e.g., see [30] and [31]).
The transition from 7 to 8 identifies two types of interactions:
. impact of models on experiments: the reference to the fault occurrence process, usually
described in axiomatic models, is necessary to derive dependability measures,
. impact of experiments on models, including: estimation of the coverage parameters of
the original models, validation of the assumptions made in the elaboration of these
models and refinement of the structure of the models.
Relevant measures of system dependability can be obtained by processing models thus
supported by experiments. This provides an objective foundation for proposing modifications
to the design and implementation of the target fault-tolerant system.
The interactions induced by transition 7-8 are analyzed further in the next sub-section.
B . Dependability Evaluation
If we assume that the major risk of system failure is that induced by the failure of the
FTAMs in properly processing the first fault occurrence, the reliability expression for a non-
maintained fault-tolerant system can be written as:
    R(t) = 1 − F_F(t) + ∫_0^t C(t − x) f_F(x) dx    (13)
where F_F(t) and f_F(t) are respectively the cumulative distribution and density functions
characterizing the fault occurrence process of the whole target fault-tolerant system and C(t)
designates the cumulative distribution of the FTAM coverage function (see section II). In
particular, the first part of expression (13), 1 - F F (t) expresses the probability that no fault
occurred before t and the last term expresses the probability of survival to the first component
failure.
The derivation of expression (13) is based on the fact that, in a fault-tolerant system, the
risks of failure resulting from exhaustion of redundancy correspond generally to much lower
orders of magnitude than those induced by a coverage deficiency in the FTAMs. This is
especially true when the mission time is small compared to the mean time to fault occurrence. It
should also be pointed out that the reference to the fault occurrence process is by no means a
limiting factor; the extension to the error occurrence process (fault activation) can be simply
achieved by substituting E for F in the indices.
Three major techniques (namely, Monte Carlo simulation, closed-form expressions and
Markov chains) can be considered for implementing the relationship between the fault
occurrence process and the experimental coverage parameters formally expressed by relation
(see [14]). Of these three, Markov chains are especially attractive since they provide a
tractable means to account for the main temporal characteristics of the coverage distribution, as
exemplified in the following sub-sections.
1 . Estimation of the Coverage Parameters of a Markov Chain
Let us consider the model of figure 4-a that describes the behavior of a fault-tolerant
system. This model accounts for the coverage of the FTAMs with respect to the occurrence of a
fault and the possible occurrence of a second (near-coincident [32]) fault while processing the
first one.
As shown in [3], an equivalent Markov representation (figure 4-b) can be derived for such
a behavior where the equivalent coverage C* is defined as:
    C* = C · Σ_{i≥0} ((−λ*)^i / i!) E[T_d^i] = C · E[ e^(−λ* T_d) ]    (14)
where the constant parameter C can be identified as the asymptotic value of the coverage
cumulative distribution C(t) (see section II), λ* is the rate of occurrence of a near-coincident
fault and the E[T_d^i] designate the successive moments of the random variable T_d characterizing
the processing time of the FTAMs.
By limiting expression (14) to the first order and letting the reciprocal of E[T_d] define the decision rate of
the FTAMs, we obtain the model of figure 4-c. This model provides an essential "building
block" to describe the coverage process, in particular for studying the impact of the temporal
distribution.
Although the truncation of the observation domain leads to a conservative estimation of the
asymptotic coverage (see expression (6)), the estimation of the distribution of T d is in practice
more complex. Basically, the distribution of T d can be related to the distribution of the random
variable T'_p
characterizing the non-infinite coverage times (see expression (10)), which is in
turn related to the random variable T_p characterizing the coverage process (i.e., the assertion of
predicate p) by:
    Prob{ T'_p ≤ t } = Prob{ assertion of p in [0,t] | assertion of p in [0,∞[ }
                     = Prob{ assertion of p in [0,t] } / Prob{ assertion of p in [0,∞[ }
                     = C(t) / C    (15)
Accordingly, E[T_d] = E[T'_p] and thus expression (14) can be simply expressed as:
    C* ≈ C ( 1 − λ* E[T'_p] )    (16)
and E[T'_p] can then be (under-)estimated by the estimator given in expression (10).
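A possible numerical reading of this first-order relation is sketched below; the parameter names are ours, and the formula is the reconstructed first-order form C* ≈ C(1 − λ* E[T'_p]).

```python
def equivalent_coverage(C_hat, mean_obs_time, near_coincident_rate):
    """First-order equivalent coverage C* = C (1 - lambda* E[T'_p]), with
    E[T'_p] replaced by the mean of the observed coverage times (estimator
    (10)).  A sketch of the reconstructed first-order relation; parameter
    names are illustrative."""
    return C_hat * (1.0 - near_coincident_rate * mean_obs_time)

# e.g. C = 0.99, mean coverage time 50 ms, near-coincident fault rate 1e-4 /ms
print(equivalent_coverage(0.99, 50.0, 1e-4))
```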
2 . Impact of the Time Distribution of the Fault Tolerance Process
In order to study this impact, we consider as an example the case of a duplex architecture
featuring (i) self-checking units whose coverage is characterized by an asymptotic coverage C
and a mean decision time denoted by 1/ν and (ii) a procedure for cross-checking both units
- with perfect coverage - and whose timing is characterized by the activation process that is
common to both units; let α denote the associated rate. Figure 5 describes the considered
model and defines the model parameters as well as the meaning of the states. This model
corresponds to the basic model used in the safety and availability evaluation of the potential
architectures for the computerized interlocking of the French National Railways [33].
The analysis of the failure states explicitly distinguishes whether or not an error was
detected. Accordingly, state 3, although unreliable since the service delivery has been
interrupted during the repair action that follows the detection of an error, can be considered as a
safe state (benign failure); therefore, only states 4 and 6 are catastrophic failure states. State 5
represents the system after a second failure but before (re-)activation of the system. Its positive
effect on system dependability is usually very slight (since α ≫ λ) and can be neglected (by
merging it into state 6).
For the evaluation, we use the equivalent catastrophic failure rate (denoted Γ) associated
with the absorbing states 4 and 6. The strong connectivity property of the graph consisting of
the non-absorbing states as well as the very small values that are usual for the model parameter
ratios λ/α and λ/ν ensure that the absorption process is asymptotically a homogeneous Poisson
process and that the associated equivalent failure rate Γ is given by (e.g., see [34]):
    Γ ≈ Σ_{paths from initial state (I) to failed state(s)} [ Π_{transitions in path} (transition rate) / Π_{intermediate states in path} (total output rate of the state) ]    (17)
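The path-based approximation (17) lends itself to a direct mechanical evaluation; the following sketch (not from the paper) enumerates the simple paths of a rate dictionary and accumulates their contributions.

```python
def equivalent_failure_rate(rates, initial, failed):
    """Asymptotic equivalent failure rate of a Markov model, following the
    path-based approximation of expression (17): sum, over simple paths from
    the initial state to an absorbing failure state, of the product of the
    transition rates along the path divided by the product of the total
    output rates of the intermediate states.  `rates` maps (state, state) ->
    rate.  Illustrative sketch only."""
    out = {}
    for (s, d), r in rates.items():
        out[s] = out.get(s, 0.0) + r

    def paths(state, visited):
        if state in failed:
            yield [state]
            return
        for (s, d), r in rates.items():
            if s == state and d not in visited:
                for tail in paths(d, visited | {d}):
                    yield [state] + tail

    gamma = 0.0
    for p in paths(initial, {initial}):
        num = 1.0
        for s, d in zip(p, p[1:]):
            num *= rates[(s, d)]
        den = 1.0
        for s in p[1:-1]:                  # intermediate states of the path
            den *= out[s]
        gamma += num / den
    return gamma

# toy 3-state model: OK -> DEGRADED -> FAIL, with repair DEGRADED -> OK
rates = {("OK", "DEG"): 1e-4, ("DEG", "FAIL"): 1e-4, ("DEG", "OK"): 1.0}
print(equivalent_failure_rate(rates, "OK", {"FAIL"}))   # ~ 1e-8
```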
Application of (17) to the model of figure 5, and some algebraic manipulations lead to the
following normalized failure rate:
    Γ / λ = f( C̄ = 1 − C, λ/ν, λ/α )    (18)
Expression (18) reveals the prominent role played either by the asymptotic non-coverage of the self-checking
mechanisms (C̄ = 1 − C) or by the activation rate (α), according to the value of the ratio λ/ν
(with respect to C̄). It is worth noting that this ratio corresponds to the normalized mean
decision time (1/ν) with respect to the MTFF of one unit (i.e., 1/λ). These results extend
and refine the results usually found in the existing literature, which are mainly restricted to the
influence of the asymptotic coverage. It is also worth noting that the ratio obtained when
inverting expression (18) corresponds to the ratio of the MTFF procured by the redundant
duplex architecture (i.e., MTFF_duplex ≈ 1/Γ) over the MTFF of one unit (i.e.,
1/λ). This is illustrated by the curves shown in figure 6 that plot the gain in MTFF procured
by the redundant duplex architecture as a function of the ratio λ/ν. Figure 6-a illustrates the
impact of the lack of coverage (C̄), while figure 6-b illustrates the influence of the activation
rate through the normalized mean activation time (λ/α).
The curves provide useful insight about the domains where the impact of the FTAM
coverage time distribution is significant. The variations observed explicitly show that, for the
usual orders of magnitudes of the ratio λ/ν, i.e., λ/ν ≪ 1, the impact of the asymptotic
coverage is the most prominent. This clearly indicates that, in the experimental evaluation,
specific attention should be paid to the estimation of asymptotic coverages.
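When checking such MTFF-gain curves numerically, the asymptotic approximation can be cross-checked against the exact mean time to absorption of the Markov model; the sketch below solves the corresponding linear system and is generic, i.e., not tied to the exact structure of the model of figure 5.

```python
def mttf(Q, transient, initial):
    """Mean time to absorption (here, mean time to catastrophic failure) of a
    continuous-time Markov chain: solve (-Q_TT) m = 1 over the transient
    states.  `Q` maps (state, state) -> rate, `transient` lists the
    non-absorbing states.  A generic sketch to complement expression (17)."""
    idx = {s: k for k, s in enumerate(transient)}
    n = len(transient)
    A = [[0.0] * n for _ in range(n)]
    b = [1.0] * n
    for (s, d), r in Q.items():
        if s in idx:
            A[idx[s]][idx[s]] += r           # total output rate on diagonal
            if d in idx:
                A[idx[s]][idx[d]] -= r
    # tiny Gaussian elimination with partial pivoting (no external libraries)
    for i in range(n):
        p = max(range(i, n), key=lambda r_: abs(A[r_][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r_ in range(i + 1, n):
            f = A[r_][i] / A[i][i]
            for c in range(i, n):
                A[r_][c] -= f * A[i][c]
            b[r_] -= f * b[i]
    m = [0.0] * n
    for i in reversed(range(n)):
        m[i] = (b[i] - sum(A[i][c] * m[c] for c in range(i + 1, n))) / A[i][i]
    return m[idx[initial]]

# same toy model as above: MTTF from "OK" is approximately 1/Gamma
print(mttf({("OK", "DEG"): 1e-4, ("DEG", "FAIL"): 1e-4, ("DEG", "OK"): 1.0},
           ["OK", "DEG"], "OK"))
```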
IV . Example of Fault Injection-Based Dependability Evaluation
This section illustrates the concepts set forth in the previous sections by applying them to
the Delta-4 distributed fault-tolerant architecture. The reader interested by the Delta-4
architecture can refer to [35] and [36]. Two issues are considered here:
. model construction, exemplified by the description of a typical experimental graph,
. calibration of coverage parameters for the evaluation of dependability measures.
A . Experimental Graph
The target system considered for the experimental validation of the Delta-4 architecture is
made up of a local area network of 4 nodes. Each node is composed of a host computer and of
a Network Attachment Controller (NAC). The NAC features hardware self-checking
mechanisms specifically designed to ensure a fail-silent behavior (by provoking the extraction
of a faulty node from the network). Tolerance of faults at the host computer level is achieved
through data and code replication and a variety of alternate mechanisms of which the basic
building block is an Atomic Multicast protocol (AMp) also implemented in the NAC.
The fault injection test sequence was aimed at testing the hardware self-checking
mechanisms implemented in the NACs as well as the behavior of the AMp software in the
presence of hardware faults. Faults were injected in the NAC of a single node (the faulty node)
that was monitored to assess the efficiency of its hardware self-checking mechanisms.
Successful hardware error-detection (resulting in node extraction) is characterized by a
predicate D. The resulting behavior of the non-injected nodes (the non-faulty nodes) was also
observed to assess the efficiency of the AMp mechanisms in tolerating the faults at the
communication level. Correct operation of AMp is specified in terms of atomicity, order and
group membership properties that are globally characterized by a predicate T.
To carry out the test sequence, a general distributed testbed, featuring automatic control and
sequencing of the experiments, as well as reset and recovery of the crashed nodes, was built
around the fault injection tool MESSALINE [24]. This enabled us to carry out extensive fault
injection experiments (almost 20000 experiments of about 5-minute duration each) on a
prototype of the Delta-4 architecture.
Faults in the F set were injected by forcing "zero" or "one" levels on the pins (up to 3 pins
simultaneously) - and thus on the connected equipotential lines - of 86 ICs on the NAC
board. To account for the most likely faults, the injected faults were mainly intermittent faults,
but transient as well as permanent faults were also injected. Activation of the target system (the
A set) consisted of two types of traffic exchanged among the nodes with various traffic profiles
that ensured different activation modes for the injected node. Further details on the testing
environment can be found in [37].
The experimental results obtained proved very useful in building a relevant model of fault
tolerance behavior. Figure 7 gives an example of values obtained for a typical experimental
graph, which in fact corresponds to the predicate graph discussed earlier (figure 1).
The percentages indicate the values of asymptotic coverage for the predicates E (error), D
(hardware error detection) and T (tolerance by the communication protocol). The time
measures indicate the mean values of the fault dormancy and error detection latency
distributions; only asymptotic coverage is considered with respect to the T predicate since such
a predicate is of the type described in figure 2-a.
The main feature of this graph concerns the inclusion of transitions that might have been
omitted from an a priori model of system behavior and thus also from the evaluation of the
associated probabilities; two such transitions exist, which are related to (i) the identification of
the injected faults that were not activated as errors, and (ii) the inclusion of a transition
between states E and T accounting for injected faults that were actually activated as errors but
were apparently tolerated without being detected.
The first transition represents the experiments that are non-significant (i.e., experiments that
cannot activate the tested FTAMs); relevant error-based coverage estimates can be obtained by
processing only the readouts of the significant experiments. The results show that, thanks to
the large proportion of intermittent faults injected and to the variety of activation modes applied,
a very large proportion of experiments (i.e., 93 %) were significant ones since the injected
faults were actually activated as errors; this information was obtained by means of current
sensors attached to each fault-injection device.
The T predicate coverage can be estimated more conservatively when the percentage associated
with the second transition (E to T) is taken (pessimistically) to represent experiments that
terminate with errors that have remained latent but could eventually lead to failure.
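The bookkeeping behind such an experimental graph reduces to a few ratios; the following sketch illustrates it with hypothetical counts (they are not the Delta-4 figures).

```python
def graph_coverage(n_total, n_activated, n_asserted, n_ambiguous):
    """Bookkeeping for an experimental predicate graph: n_total injections, of
    which n_activated were activated as errors (significant experiments);
    among these, n_asserted asserted the tolerance predicate after detection,
    and n_ambiguous followed the ambiguous E->T transition (tolerated without
    detection).  All counts are hypothetical, not the Delta-4 figures."""
    significant = n_activated / n_total
    optimistic = (n_asserted + n_ambiguous) / n_activated   # E->T counted as covered
    conservative = n_asserted / n_activated                 # E->T counted as latent errors
    return significant, conservative, optimistic

print(graph_coverage(20000, 18600, 17800, 300))
```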
An experimental graph such as this, together with the experimental values obtained, serve as
the basis for the system-level dependability evaluation sketched out in the next sub-section.
B . Evaluation of Dependability Measures
To relate the dependability evaluation to the experiments that were carried out, we consider a
system made up of 4 nodes, as in the case of the target system used for the fault injection test
sequence. Such an architecture may for example correspond to the case of a system
requirement for triplex task execution with a 4th node as a back-up in order to tolerate 2
consecutive faults.
Figure
8 shows the Markov model that describes the behavior of this architecture. A
proportion h of the total node failure rate is considered to be that of the host computer, the
remaining (1-h) that of the NAC.
In the model, parameter CT accounts only for the asymptotic coverage associated with the
tolerance predicate T of the NACs (see figure 7); as a high coverage majority voting decision
is applied to the results of task replicas running on the host computers, the coverage of the
faults in the host computers is considered here as perfect. The rate at which task replicas
exchange results for voting is considered to be much greater than the mean time to node failure
(1/l). Consequently, the host and the NAC fault-tolerance processes (activated respectively by
the exchange of results for majority voting and execution of the underlying AMp protocol) are
considered as instantaneous in comparison with the other model parameters. Therefore, this
model contains no parameters analogous to the a and n parameters of figure 5.
The experiments that were carried out clearly revealed cases of non-confinement of errors
(i.e., some injected faults not only resulted in the fact that the faulty node did extract from the
rest of the network, but also provoked the extraction of several non-faulty nodes). The
multiplicity of such multiple node extractions impacts the dependability behavior of the system;
therefore, the model includes parameters C_T,i to account for the undesired extraction of i non-faulty
nodes.
The model assumes that it is possible to tolerate up to two simultaneous extractions.
Although this assumption is valid for the redundant configuration considered here and it has
been possible to obtain these figures in the case of our 4-node testbed, this might not be true in
practice for more complex configurations. This model can thus be considered as leading to an
(optimistic) upper bound for dependability evaluation. It is also interesting to account for the
(pessimistic) case when any multiple extraction results in total network failure. In practice, this
can be achieved by simply transferring the rate associated with transition 1-3 to transition 1-4
on the model of figure 8.
The equivalent failure rate of the system described by the model of figure 8, normalized
with respect to the failure rate of a single node, is:
    Γ / λ = MTFF_node / MTFF_system = f( C_T, C_T,i, h )    (19)
When considering the more restrictive assumption, then the equivalent failure rate becomes:
    Γ / λ = MTFF_node / MTFF_system = f( C_T, C_T,i, h )    (20)
For the analysis, we consider the results obtained during the experiments concerning two
distinct versions of the NAC hardware architecture: a NAC with only limited self-checking
capabilities (LSC NAC) and a NAC featuring enhanced self-checking capabilities provided by
a duplex architecture (ESC NAC). The fault injection experiments that were carried out - in
particular on the LSC NAC (featuring a lower coverage) - had a significant impact in the
debugging of the AMp software and several releases of the AMp software (denoted AMp Vx.y)
were therefore tested. The table of figure 9 summarizes the experimental measures considered
for the analysis. More details on the experimental results can be found in [38] and [39].
The results obtained for the ESC NAC - AMp V2.5 configuration show a very appreciable
improvement in the coverage. Out of the 4019 significant experiments that were carried out,
only a few faults were not tolerated; no non-confinement of multiplicity 2 was observed. To
provide a more objective estimation for this configuration, we have therefore considered
confidence intervals for the coverage estimations. The percentages in bold characters give the
nominal estimates; the figures in italics correspond to the upper and lower confidence limits for
a 95 % confidence level. Confidence intervals are not considered for the other configurations
as their impact would be negligible due to the relatively low coverage values.
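The paper does not state which interval construction was used; as an illustration, the sketch below derives two-sided 95 % limits for an asymptotic coverage with the Wilson score interval, using hypothetical counts.

```python
import math

def coverage_confidence_limits(n_significant, n_uncovered):
    """Two-sided 95% confidence limits on an asymptotic coverage estimated
    from n_significant experiments among which n_uncovered were not covered,
    using the Wilson score interval.  An illustrative choice of interval."""
    z = 1.96                       # ~97.5% standard-normal quantile
    n = n_significant
    c_hat = 1.0 - n_uncovered / n
    denom = 1.0 + z * z / n
    centre = (c_hat + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(c_hat * (1 - c_hat) / n + z * z / (4 * n * n))
    return centre - half, c_hat, centre + half

# hypothetical counts in the spirit of the ESC NAC - AMp V2.5 configuration
print(coverage_confidence_limits(4019, 6))
```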
Figure 10 compares the ranges of variations observed on the dependability gain measure
for the configurations considered and for both optimistic and pessimistic assumptions.
The upper and lower bounds that define the areas shown for each configuration are obtained
respectively from expressions (19) and (20) when considering the nominal coverage
percentages of figure 9; they can thus be considered as nominal bounds. Note that the areas
associated with configurations LSC NAC - AMp V2 and LSC NAC - AMp V2.3 overlap.
The confidence limits for the coverage of configuration ESC NAC - AMp V 2.5 enable
confidence limits to be obtained for these bounds; these limits appear as dotted lines. Note that
the lower limit of the upper bound and the upper limit of the lower bound almost coincide.
The best nominal upper and lower bounds obtained for configuration ESC NAC - AMp
V2.5 indicate an MTFF improvement factor of 2000 and 500, respectively. However,
the limits shown for each bound indicate how the uncertainty in the estimation of the coverages
may affect these dependability predictions. As could be expected, the influence is stronger for
the upper bound; the lower/upper limits are respectively 800/4000 for the upper bound and
400/800 for the lower bound. This shows that, even in the most conservative case, the Delta-4
architecture still provides a substantial dependability improvement.
Figure 10 shows that the ESC NAC - AMp V2.5 combination provides almost one order of
magnitude gain over the best results obtained for the LSC NAC architecture. This improvement
can be attributed mainly to the improved self-checking mechanisms of the ESC NAC
architecture rather than the change in AMp version since some partial tests using version 2.5 of
AMp software on the LSC NAC were carried out and it was observed that there was no
significant modification with respect to those obtained for version 2.3.
It should be pointed out that the curves shown here have been plotted for a fixed value of h, that
is, the proportion of the node failure rate associated with the host computer. Although it is clear from
expressions (19) and (20) that parameter h impacts the absolute value of the gain, the
sensitivity analysis with respect to h carried out in [37] has shown that for h ≥ 95 % (which
covers the most realistic values of the ratio of host and NAC failure rates), the relative impact
of the software and hardware modifications of the architecture shown in figure 10 is not
significantly changed.
V . Conclusion
The dependability evaluation of complex fault-tolerant systems requires a combination of
both experimental and analytical methods. This issue has been addressed by proposing a
framework that establishes the link between the experimental evaluation of the coverage of the
fault tolerance process and the fault occurrence process.
By investigating the relationships between the time distributions of the fault occurrence and
coverage processes, we were able to show how it is possible to identify the relative domains
where the time distribution has an impact on dependability measures.
The examples given clearly illustrate how the main interactions between model-based
evaluation and experimental evaluation - namely, model characterization and coverage
parameter calibration - fit within this framework and can be applied in practice.
The insights gained from the combined fault injection and dependability analysis carried out
were regarded by the industrial partners of the Delta-4 project as providing very valuable aids
in improving the designs and in making architectural decisions concerning the fault tolerance
algorithms and mechanisms.
However, much work remains to be carried out towards the incorporation of fault injection
at the various stages of the development and validation process of fault-tolerant systems. The
results reported in this paper constitute only one facet of the work we are carrying out towards
this goal. Other investigations include:
. the use of fault simulation as a complement to physical fault injection on a fault tolerant
system prototype,
. the identification of specific input patterns aimed at distinguishing the various error/fault
processing actions of the fault tolerance algorithms and mechanisms so that they can be
adequately and efficiently verified,
. the clustering of the experimental results in order to refine the computation of the
coverage estimators by accounting for significant differences in the operational fault
occurrence rates associated with these clusters.
Acknowledgement
We would like to thank Eliane Martins, Jean-Charles Fabre and Martine Aguéra at LAAS
for their significant contribution in designing and setting up the fault injection testbed. The
technical support from Bull and Ferranti and the feedback for refining the analysis of the
experimental results received from Marc Chérèque and René Ribot (Bull SA, France) and
Nigel Howard (Ferranti International plc, UK), are also gratefully acknowledged. The authors
are also grateful to the anonymous referees for providing helpful comments that greatly helped
in improving the presentation of the paper.
--R
"Design and Evaluation Tools for Fault-Tolerant Systems"
"Reliability Modeling Techniques for Self-Repairing Computer Systems"
"Coverage Modeling for Dependability Analysis of Fault-Tolerant Systems"
"Failure Mode Assumptions and Assumption Coverage"
"Measurements of Fault Detection Mechanisms Efficiency: Results"
"Fault Detection, Isolation and Reconfiguration in FTMP: Methods and Experimental Results"
Injection based Automated Testing Environment"
Experimental Evaluation of Error-Detection and Self-Checking Coverage of Components of a Distributed Real-time System
"Understanding Large System Failures - A Fault Injection Experiment"
"Evaluation of Error Detection Schemes using Fault Injection by Heavy-ion Radiation"
"Effect of Transient Gate-Level Faults on Program Behavior"
"A Fault Behavior Model for an Avionic Microprocessor : A Case Study"
"FERRARI: A Tool for the Validation of System Dependability Properties"
"Fault Injection for the Experimental Validation of Fault Tolerance"
"Dependable Computing and Fault Tolerance: Concepts and Terminology"
Dependability: Basic Concepts and Terminology
"Evaluation of Deterministic Fault Injection for Fault-Tolerant Protocol Testing"
"Fault Injection for the Formal Testing of Fault Tolerance"
"Methodology for Measurement of Fault Latency in a Digital Avionic Miniprocessor"
The Theory and Practice of Reliable System Design
"Evaluation and Design of an Ultra-Reliable Distributed Architecture for Fault Tolerance"
Dependability Evaluation Report LA3 - Architecture Validation
"Impact of Correlated Failures on Dependability in a VAXcluster System"
"Fault Injection for Dependability Validation - A Methodology and Some Applications"
Dependability Validation by Fault Injection: Method
"Modeling Recovery Time Distributions in Ultrareliable Fault-Tolerant Systems"
Applied Life Data Analysis
Statistical Models and Methods for Lifetime Data
"Fast Simulation of Dependability Models with General Failure, Repair and Maintenance Processes"
"Some Past Experiments and Future Plans in Experimental Evaluations of fault Tolerance"
Validation of Distributed Systems by Fault Injection
"Effects of Near-Coincident Faults in Multiprocessor Systems"
"On the Dependability Evaluation of High Safety Systems"
System Reliability
"The Delta-4 Approach to Dependability in Open Distributed Computing Systems"
"Experimental Evaluation of the Fault Tolerance of an Atomic Multicast Protocol"
Dependability Testing Report LA2 - Fault-Injection on the Fail-Silent NAC: Preliminary Results
Dependability Testing Report LA3 - Fault-Injection on the Extended Self-Checking NAC
Jean Arlat , Yves Crouzet , Johan Karlsson , Peter Folkesson , Emmerich Fuchs , Gnther H. Leber, Comparison of Physical and Software-Implemented Fault Injection Techniques, IEEE Transactions on Computers, v.52 n.9, p.1115-1133, September | dependability measures;distributed fault-tolerant architecture;fault injection;dependability evaluation;distributed processing;fault occurrence process;test sequence;Esprit Delta-4 Project;fault-tolerant systems;fault tolerance process;fault tolerant computing |
626781 | Scattering and Gathering Messages in Networks of Processors. | The operations of scattering and gathering in a network of processors involve one processor of the network (P_0) communicating with all other processors. In scattering, P_0 sends distinct messages to all other processors; in gathering, each other processor sends a message to P_0. The authors consider networks that are trees of processors. Algorithms for scattering messages from and gathering messages to the processor that resides at the root of the tree are presented. The algorithms are quite general, in that the messages transmitted can differ arbitrarily in length; quite strong, in that they send messages along noncolliding paths, and hence do not require any buffering or queueing mechanisms in the processors; and quite efficient in that algorithms for scattering in general trees are optimal, the algorithm for gathering in a path is optimal and the algorithms for gathering in general trees are nearly optimal. The algorithms can easily be converted using spanning trees to efficient algorithms for scattering and gathering in networks of arbitrary topologies. | Introduction
1.1 Communication in Parallel Computation
Communication is an essential component of parallel computation. A variety of modes of
communication have been studied within the framework of networks of processors - identical
processing elements (PEs) that communicate by means of an interconnection network. The
most commonly studied modes are the following.
ffl (Partial) permutation routing [1, 3, 10, 13, 17] is a form of communication in which
each PE is both the sender and recipient of (at most) one message.
Broadcasting [8, 12] is a form of communication in which one PE sends one specific
message to all other PEs.
Gossiping (or, all-to-all broadcasting) [7, 16] is a form of communication in which each
PE sends one specific message to all other PEs.
Baumslag and Annexstein [1], Johnsson and Ho [8], and Saad and Schultz [14] (among
others) point out that these popular forms of communication do not exhaust the algorithmically
useful possibilities. Specifically, they add to the menu of communication modes the
operations of scattering and gathering. 1
Scattering (or, one-to-all personalized communication) is a form of communication in
which one PE sends (possibly) distinct messages to all other PEs.
ffl Gathering is a form of communication in which all PEs send (possibly) distinct messages
to one specific PE.
Efficient algorithms for a general version of the operations of scattering and gathering form
the subject matter of the current paper. Specifically, we present efficient algorithms for
scattering from and gathering to the root PE of a general tree-structured network. 2 We
present an optimal algorithm for scattering from the root of a general tree, an optimal
algorithm for gathering to the root of a unary tree (i.e., the end-PE of a path), and a nearly
optimal algorithm for gathering to the root of a general tree. Via the use of spanning trees,
Other important modes have also been studied, including multiscattering [11] and exchange [2], but less
frequently.
2 Henceforth, for brevity: we use the term "tree" for "tree-structured network;" also, we use the term
"network" to denote both a network of processors and its underlying interconnection network; context should
always disambiguate each occurrence of the word.
our (nearly) optimal tree-oriented algorithms become efficient algorithms for scattering and
gathering in networks of arbitrary topology. The generality of our study manifests itself in
three ways.
1. We allow messages to differ in length by arbitrary amounts; indeed, some messages may
be null.
This contrasts with the studies in [1, 4, 8, 14], wherein all messages have the same length.
2. We scatter and gather messages in trees of arbitrary shape and, hence, via the use of
spanning trees, in networks of arbitrary topologies.
This contrasts with the studies in [4, 8, 14, 15], which focus on a small repertoire of networks,
such as rings, meshes, and hypercubes.
3. We transmit messages along noncolliding paths in our networks, hence do not require any
buffering or queuing mechanisms in the PEs.
This contrasts with virtually all other studies of message transmission in networks. One
might be able to rationalize our demand for unbuffered communication in terms of resource
conservation: buffering requires both additional memory (each PE must be prepared to store
the longest message in the system) and time (e.g., for the processing of addresses). However,
our overriding motivation in this study was to understand communication in networks better,
by determining the cost of this strict assumption in terms of the complexity of the problems
of scattering and gathering general messages in general networks.
1.2 The Computing Model
A. Networks of Processors
We study the problems of scattering from and gathering to the root-PE of a synchronous
tree of arbitrary shape. Each network A comprises PEs P_0, P_1, ..., P_n. By
convention, we always let P_0 denote the root of the tree, i.e., the PE which is the source of
messages in a scattering operation and the target of messages in a gathering operation.
The PEs of the networks we study have neither message buffers nor queues. Messages
within networks must, therefore, be scheduled so as never to "collide" with one another. For
the operation of scattering, the fact that we scatter within a tree guarantees such avoidance;
for the operation of gathering, this scheduling is a major challenge.
The networks we study use the single-port communication regimen: during each communication
step, a PE can send information to at most one of its immediate neighbors and,
simultaneously, receive information from at most one of its immediate neighbors; the sending
and receiving neighbors may be distinct. We do, however, allow a PE to perform
computations while communicating, as well as to access its local memory. This
regimen is to be contrasted with the multiport communication regimen, in which a PE can
send and receive information from each of its immediate neighbors in one step. In Section 4
we indicate briefly how our results extend to a multiport model.
The networks we study communicate in rounds; i.e., while a scattering (resp., a gathering)
operation is in progress, there is no other communication going on in the network. This
means that the only resource contention we must worry about arises from the many messages
that are being scattered (resp., gathered) in the current operation. This regimen is to be
contrasted with the one studied in [2], wherein the present study of bufferless communication
is generalized to allow each PE to be both the source of and the destination for arbitrarily
many messages at once. As an aside, the study in [2] compensates for the generality of its
communication setting - bufferless PEs passing messages in arbitrary ways - by restricting
attention to simple network topologies, specifically, one- and two-dimensional meshes (i.e.,
rings and toroidal meshes).
Porting to General Networks. Our efficient collision-free algorithms can be transported
easily to networks of arbitrary topology via the use of an "efficient" spanning tree of (the
undirected graph underlying) the network in question, rooted at the singular PE for the
scattering or gathering operation. For the operation of scattering, and for the operation of
gathering under a multiport regimen, one would sensibly choose a breadth-first spanning tree,
in order to ensure that every message travels the shortest possible distance to its destination:
the possibility of large node-degrees in breadth-first trees causes no concern, because in a
scattering operation, a PE is receiving or transmitting at most one message at each step, and
in a multiport gathering operation, a PE can service as many ports as it has at each step. For
the operation of gathering under a single-port regimen, the time required to accommodate
large node-degrees in the tree can dominate the time for single-port gathering: broadcasting
is typically part of the synchronization protocol needed for gathering in multi-successor
networks, and high-degree nodes can slow down single-port broadcasts. (As an extreme
example, compare the times for single-port broadcasting in an n-PE network A in which
every pair of nodes is connected by an edge: (a) using a complete binary spanning tree of
A, versus (b) using a single-level degree-(n − 1) spanning tree.) Consequently, in this case,
one might seek a spanning tree whose structure approximates that of a minimum broadcast
tree [9].
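A breadth-first spanning tree of the kind discussed above can be obtained with a standard traversal; the sketch below (ours) returns, for every PE, its tree parent and its distance δ(i) from the root.

```python
from collections import deque

def bfs_spanning_tree(adjacency, root):
    """Breadth-first spanning tree of an arbitrary processor network, rooted
    at the PE that scatters or gathers; each message then travels a shortest
    path in the network.  `adjacency` maps a node to its neighbours."""
    parent = {root: None}
    depth = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in parent:
                parent[v] = u
                depth[v] = depth[u] + 1
                queue.append(v)
    return parent, depth      # parent[] defines the tree, depth[] gives delta(i)

# 5-node ring as a toy network
ring = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(bfs_spanning_tree(ring, 0))
```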
Remark 1. The framework just outlined may represent only the communication subsystem
of a heterogeneous parallel architecture; for instance, the architecture viewed as a whole may
have PEs of differing powers and sizes, which operate asynchronously except during global
communication operations (such as scattering and gathering).
B. Messages and Message Sequences
Each message M i involved in a scattering or gathering operation is a sequence of some
number L i (perhaps zero) of atomic flits: a flit is the largest unit of information that the
network can transmit between adjacent nodes in one communication step (i.e., in one so-called
flit time).
A message is treated as an indivisible unit during a scattering or gathering operation, in
the sense that the L flits of a message are never interrupted by flits from other messages.
Initially, the L flits of the message are all in the originating PE; after the message has begun
to travel through the network, its flits are always in contiguous PEs; the lack of buffering
ensures that each flit is in a separate PE once it leaves the originating PE. A consequence
of the indivisibility of messages is that addressing information needs appear only in the first
flit of the message, thereby lessening both the setup time for messages and the aggregate
length devoted to addressing information.
Let M = ⟨M_1, M_2, ..., M_n⟩ be a sequence of messages (to be scattered or gathered).
Let
    N(M) = ⟨ i_1 < i_2 < ... < i_k ⟩
denote, in increasing order, the subsequence of message indices whose messages are nonnull,
i.e., for which L_{i_j} ≠ 0.
C. The Scattering and Gathering Problems
In a scattering operation, the root-PE P_0 has a message M_i of length L_i to send to each
PE P_i with i > 0. In a gathering operation, each PE P_i, where i > 0, has a message M_i of
length L_i destined for PE P_0. (For both operations, some messages M_i may be null, so that
L_i = 0.) We perform these operations in trees of arbitrary shapes, subject to the following
constraints.
• Once a message has been dispatched by its originating PE, it encounters no interruption
until it is received by its destination PE. In particular,
- each intermediate PE must relay the message with no queuing or buffering;
- messages are treated as indivisible units (in the sense descibed earlier).
• For each i > 0, message M_i will be routed along the unique path ρ_i that connects PE
P_i and PE P_0 in the tree. We let δ(i) denote the length of path ρ_i, i.e., the distance
that message M_i must travel.
D. Problem Complexity
We measure the complexity of a scattering or gathering operation in terms of the time
for delivering all relevant messages. Focussing on a fixed but arbitrary message sequence
time is formalized as follows.
The Time for Scattering. A schedule for scattering message sequence M is a permutation
σ (for "scattering-schedule") of the index-sequence ⟨1, 2, ..., n⟩, viewed as a
function on message indices.
The intended interpretation is that PE P_0 sends out message M_σ(1), then message M_σ(2),
then M_σ(3), and so on, in that order, in a steady stream, with no intervening gaps. Thus,
under schedule σ, given index i with L_i ≠ 0, PE P_0 begins transmitting message M_i at
dispatch time
    τ_σ(i) = Σ_{j ∈ N(M) : σ^{-1}(j) < σ^{-1}(i)} L_j.    (1)
(Note the effect of the single-port regimen.) Message M_i arrives at its destination, PE P_i,
at arrival time
    α_σ(i) = τ_σ(i) + L_i + δ(i) − 1.    (2)
The time for scattering message sequence M under scattering-schedule σ is the time it takes
for every flit of M to reach its final destination; symbolically,
    T_scat(σ) = max_{i ∈ N(M)} { α_σ(i) }.    (3)
Equation (3) implies the following simple result, which delimits the difference between the
best and worst scattering-schedules. The proof is left to the reader.
Proposition 1 Let σ be a scattering-schedule for message sequence M. Assuming that
message M_i of M, for 1 ≤ i ≤ n, has length L_i, the time for σ satisfies the following bounds:
    Σ_{i ∈ N(M)} L_i + min_{i ∈ N(M)} { δ(i) } − 1  ≤  T_scat(σ)  ≤  Σ_{i ∈ N(M)} L_i + max_{i ∈ N(M)} { δ(i) } − 1.
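Given the dispatch and arrival formulas above, the scattering time of any schedule can be evaluated directly; the sketch below follows equations (1)-(3) as reconstructed here, so the exact ±1 convention is an assumption.

```python
def scattering_time(lengths, depths, schedule):
    """Time for scattering under a given schedule: messages leave P_0 back to
    back in schedule order, and message M_i needs delta(i) further steps to
    reach P_i.  `schedule` lists the indices of the nonnull messages in
    dispatch order; `lengths[i]` = L_i and `depths[i]` = delta(i) (dicts)."""
    clock = 0
    finish = 0
    for i in schedule:
        dispatch = clock                    # eq (1): sum of earlier lengths
        clock += lengths[i]
        arrival = dispatch + lengths[i] + depths[i] - 1   # eq (2)
        finish = max(finish, arrival)       # eq (3)
    return finish
```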
The Time for Gathering. A schedule for gathering message sequence M is a sequence of
integers
    γ = ⟨ τ_γ(i) : i ∈ N(M) ⟩
(for "gathering-schedule"), where N(M) = { i : L_i ≠ 0 }. The intended interpretation is
that each τ_γ(i) (where i ∈ N(M)) is the dispatch time for message M_i, i.e., the time when
PE P_i begins transmitting M_i toward P_0. The last flit of message M_i is received by PE P_0 at arrival
time
    α_γ(i) = τ_γ(i) + δ(i) + L_i − 1.
The time for gathering message sequence M under gathering-schedule γ is the time it takes
for every flit of M to reach PE P_0:
    T_gath(γ) = max_{i ∈ N(M)} { α_γ(i) }.
The Challenges. Note that neither the time for scattering, T scat , nor the time for gathering,
gath , allows for any delay of messages at nodes other than the originating node. This means
that our message-scheduling algorithms cannot rely on - so the network need not provide -
any mechanism for buffering or queuing messages in PEs. This lack of buffering provides an
additional challenge in scheduling the gathering operation, which is lacking in the scattering
operation. Namely, the scheduling algorithm must provide - in a distributed manner - for
the dispatching of messages in the network so that messages never collide on their paths to P_0.
Remark 2. Our timing model is somewhat simpler than that of some of the earlier cited
sources. Specifically, we charge L time units to transmit a message containing L flits; some
sources (such as [4]) would charge a message setup time of β time units, plus a per-flit
transmission time of τ time units for this message, for a total cost of β + Lτ time units. This
change of model would not affect our analyses in a material way.
Remark 3. As suggested earlier, our algorithms for scattering and gathering in arbitrary
networks employ spanning trees that are fixed, independent of the message sequence M. For
many networks, there exists no single spanning tree that is simultaneously optimal for the
single-port regimen and for all message sequences, especially because messages can be null.
This means that our algorithms for general networks will often be suboptimal.
1.3 Related Work
Saad and Schultz [14] define the operations of scattering and gathering in full generality
but present algorithms only for a specific repertoire of network topologies and for the case
of equal-length messages. Fraigniaud et al. [4] prove the optimality of the Saad-Schultz
algorithm for scattering on a unidirectional ring of processors. Stout and Wagar [15] and
Johnsson and Ho [8] present optimal algorithms for scattering equal-length messages on
a hypercube, using both the single-port and multiport communication regimens. Li [11]
considers performing several scattering operations at once on a reconfigurable network of
processors. Bhatt et al. [2] study the most general type of communication, wherein each PE
has a distinct message for each other PE, in bufferless rings and toroidal networks. All of
these references, save the last, assess time transmitting an L-flit message.
2 Scattering on Networks of Processors
Say that scattering-schedule σ is optimal for message sequence M on a given tree if, on that
tree,
    T_scat(σ) ≤ T_scat(σ')
for any other scattering-schedule σ' for M.
It is shown in [4] that the unique optimal scattering-schedule for equal-length messages on
a unidirectional ring is given by the permutation defined by sending out messages
according to a farthest-destination-first (FDF) regimen - one in which nonnull messages are
dispatched in decreasing order of the distances to their destinations. We now prove that the
optimality of FDF schedules persists when the lengths of the scattered messages are general
and when the scattering is done from the root-PE of an arbitrary tree. Specifically, we show
that, within this setting, for every message sequence M, every FDF scattering-schedule is
optimal for M (although there may be optimal non-FDF schedules also). It is consistent
with intuition that FDF scattering-schedules need no longer be the unique optimal ones
when one considers messages of arbitrary lengths, because a single enormous message could
so dominate the message transmission time as to mask the order of a collection of small
messages sent out right after it. Since the optimality of all FDF schedules ensures the
optimality of a large family of scattering algorithms, we present the following theorem in
lieu of a specific optimal algorithm.
Theorem 1 Every FDF scattering-schedule for scattering from the root-PE of an arbitrary
tree is optimal.
Proof. Let the tree T with root-PE P 0 be fixed.
The Theorem makes two claims, which we treat in turn. First, we prove that every optimal
scattering-schedule for a given message sequence can be replaced by an FDF scattering-
schedule for the sequence with no increase in scattering time (so the FDF schedule is also
optimal). Second, we prove that every FDF schedule for a message sequence is optimal,
i.e., that messages destined for equidistant PEs can be dispatched in any order. The reader
should note the crucial role of our communicating on a tree in what follows.
Claim 1 For every message sequence M and every scattering-schedule σ for M, there is
an FDF scattering-schedule σ' for M with T_scat(σ') ≤ T_scat(σ).
Moral. Every message sequence has an optimal FDF scattering-schedule.
Claim 1 asserts that one can never decrease the scattering time of a schedule by dispatching
a nonnull message that is destined for a nearby PE before a nonnull message that
is destined for a more distant PE. This is not surprising, as one hopes to use pipelining to
make progress in sending the nearby message while the distant message is in transit.
Proof of Claim. Assume, for contradiction, that there is a message sequence
M = ⟨M_1, M_2, ..., M_n⟩ such that no optimal scattering-schedule for M observes the FDF regimen.
Let σ_1 be any optimal scattering-schedule for M. Because σ_1 does not observe the
FDF regimen, there must exist PE indices i and j, both in N(M), such that σ_1 dispatches M_j
immediately before M_i (among the nonnull messages) even though δ(j) < δ(i).
Let σ_2 be the scattering-schedule for M obtained from σ_1 by interchanging σ_1^{-1}(i) and
σ_1^{-1}(j), i.e., by interchanging the dispatch positions of M_i and M_j. We claim that
    T_scat(σ_2) ≤ T_scat(σ_1).    (4)
By equation (3), inequality (4) will follow from the inequality
    max{ α_σ2(i), α_σ2(j) } ≤ max{ α_σ1(i), α_σ1(j) };
we establish this inequality by analyzing the dispatch and arrival times of messages under
schedules σ_1 and σ_2. We begin by noting that equation (1) implies the following relations
among the dispatch times under schedules σ_1 and σ_2. (All indices referred to are associated
with nonnull messages.)
    τ_σ2(i) = τ_σ1(j),   τ_σ2(j) = τ_σ1(j) + L_i,   τ_σ2(k) = τ_σ1(k) otherwise.
(Note that τ_σ1(i) = τ_σ1(j) + L_j.) By equation (2), therefore, we infer the following
relations among the arrival times under schedules σ_1 and σ_2:
    α_σ2(i) = α_σ1(i) − L_j   and   α_σ2(j) = α_σ1(j) + L_i,
while α_σ2(k) = α_σ1(k) for all k ∉ {i, j}. (These last equations on α_σ1 and α_σ2 hold because
we route messages within a tree.) We can now deduce that
    max{ α_σ2(i), α_σ2(j) } = max{ α_σ1(i) − L_j, τ_σ1(j) + L_i + L_j + δ(j) − 1 } ≤ max{ α_σ1(i), α_σ1(j) },
since δ(j) < δ(i) implies τ_σ1(j) + L_i + L_j + δ(j) − 1 < τ_σ1(j) + L_j + L_i + δ(i) − 1 = α_σ1(i).
It follows from this chain of reasoning that T_scat(σ_2) ≤ T_scat(σ_1), with strict
inequality whenever message M_i is the last message to arrive at its destination under schedule
σ_1. Now, if scattering-schedule σ_2 observes the FDF regimen, then this inequality already
contradicts the assumption that no FDF scattering-schedule is optimal for M. If scattering-schedule
σ_2 does not observe the FDF regimen, then it is "one transposition closer" to
observing the regimen than is schedule σ_1. In particular, we can iterate the operation of
transposing transmission times that violate the FDF regimen a finite number of times (in
no more than n(n − 1)/2 times) to arrive at a scattering-schedule σ that does observe
the FDF regimen and that has scattering time no greater than that of schedule σ_1, thus
contradicting the assumption that no FDF scattering-schedule is optimal for M.   ∎ (Claim 1)

Claim 2 All FDF scattering-schedules take the same time.
Moral. Every scattering-schedule that observes the FDF regimen is optimal.
Proof of Claim. Say that the scattering-schedule σ observes the FDF regimen. The
only way to alter σ without violating the regimen is to rearrange the transmission order of
messages destined for equidistant PEs. We claim that such rearrangement does not alter the
time for the schedule and, hence, must preserve optimality. To wit, equations (1) and (2)
imply the following. If messages M_{j_1}, M_{j_2}, ..., M_{j_r} are all destined for PEs at distance Δ
from PE P_0, and if the earliest dispatch time of any of these messages is τ, then the latest
arrival time of any of these messages is
    τ + L_{j_1} + L_{j_2} + ... + L_{j_r} + Δ − 1,
independent of the specific order of dispatching the messages.   ∎ (Claim 2)
Note that Claim 2 verifies that optimal scattering-schedules for a message sequence M
do not depend on the lengths of the messages in M.
The Theorem follows. 2
Let us focus momentarily on the simplest possible tree, namely, a path having PE P 0
as its root. For notational convenience, say that in this tree: P_{i+1} is the child of P_i, for
0 ≤ i < n; P_{i−1} is the parent of P_i, for 0 < i ≤ n; and P_n is the (sole) leaf. When one is
scattering messages from P 0 in such a tree, the proof of Theorem 1 can be visualized easily.
As one can see in Figure 1, for instance, in this case, each message dispatched by PE P 0
sweeps out a parallelogram in the space-time domain. (The parallelogram associated with
the length-L i message M i destined for PE P i has length-L i sides parallel to the time axis,
corresponding to the path traversed by the L i flits of message M i , and length-i sides at a
45-degree angle to the time axis, corresponding to the progress of the flits along the line of
PEs.) Constructing examples of scattering operations on paths, visualized via space-time
parallelograms, will convince the reader that often a portion of the upper slanted side of the
space-time parallelogram of one message can be "hidden in the shadow" of the space-time
parallelogram of an earlier dispatched message; this corresponds to pipelining the use of the
intermediate PEs to decrease the overall time of the scatter operation. Constructing analogs
of the competing dispatch orders of Figures 1(a) and 1(b) will illustrate what Theorem 1
verifies, namely, that more hiding occurs when the parallelogram of a message destined for
a more distant PE "provides shadow for" the parallelogram of a message destined for a
nearby PE than when the dispatch times of the two messages are reversed. In Figure 1, we
make message M 4 longer than message M 5 to emphasize the independence of the "hiding"
phenomenon from the lengths of messages.
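The FDF regimen itself is trivial to implement: sort the nonnull messages by decreasing distance. The sketch below builds such a schedule and, reusing the scattering_time helper from the earlier sketch, contrasts it with the reverse (nearest-destination-first) order on a hypothetical 5-PE path; the message lengths are illustrative.

```python
def fdf_schedule(lengths, depths):
    """Farthest-destination-first schedule: dispatch the nonnull messages in
    order of decreasing distance delta(i); ties may be broken arbitrarily
    (Claim 2)."""
    nonnull = [i for i, L in lengths.items() if L > 0]
    return sorted(nonnull, key=lambda i: -depths[i])

# a 5-PE path rooted at P_0: distances 1..5, message 4 longer than message 5
lengths = {1: 2, 2: 1, 3: 2, 4: 4, 5: 2}
depths = {i: i for i in lengths}
fdf = fdf_schedule(lengths, depths)
print(scattering_time(lengths, depths, fdf))                   # FDF: optimal
print(scattering_time(lengths, depths, list(reversed(fdf))))   # nearest first: slower
```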
3 Gathering on Networks of Processors
Say that gathering-schedule γ is optimal for message sequence M on a given tree if, on that
tree,
    T_gath(γ) ≤ T_gath(γ')
for any other gathering-schedule γ' for M.
In an ideal world, we would implement the gathering operation by running an FDF scattering
algorithm "backwards;" by reasoning analogous to that in the proof of Theorem 1, an
algorithm that accomplished this would be optimal. Of course, one can not literally run an
FDF scattering algorithm "backwards," because in the scattering operation, PEs other than
are passive, while in the gathering operation, they are active - they must initiate their
message transmissions. To compensate for this fact, any algorithm for a bufferless gathering
operation must precede the transmission of messages by a distributed protocol that schedules
the dispatch times of the messages so that no two collide in transit. A straightforward
synchronization-like protocol suffices to accomplish this scheduling. We begin this section
with a simple version of this protocol, called shoulder tapping (Section 3.1), that implements
the operation of gathering messages to one end of a path by interlacing the synchronization
and scheduling activities. Although shoulder tapping yields an optimal algorithm for gathering
on a path, it is too simple to work on general tree structures. Since altering shoulder
tapping to operate on general trees leads to a cumbersome algorithm, we opt instead for a
version of the protocol which decouples the synchronization and scheduling activities. The
resulting protocol, called transmission certification (Section 3.2), is readily adapted to general
tree structures, but only at the cost of added time for separate synchronization and
scheduling activities.
It is worth stressing here that gathering must in general be more time consuming than
scattering, because of the need for a scheduling protocol that precedes message transmission.
In particular, in a gathering operation, a PE cannot safely begin transmitting its message
until "told to," for fear of interfering with the transit of another PE's message.
3.1 Shoulder Tapping: a Solution for Paths of Processors
The shoulder-tapping protocol we present now exploits the single-child structure of a path
in an essential way; it is this feature that precludes its graceful extension to trees of more
complicated structure. The algorithm that implements shoulder tapping seeks, for a message
sequence
which minimizes each dispatch time - fl (i j ) subject to the requirement
that messages never collide, and subject to the inequalities
Inequalities (5) must hold for any distributed gathering algorithm on a path; they reflect the
following facts, which hold for all PE indices, not just those in N(M).
• Each PE P_i (save, of course, P_0) must receive a wakeup call telling it when to begin
transmitting its message M_i (assuming that the message is nonnull).
ffl The sequence of wakeup calls must be initiated by P 0 (since, in general, it is the best
arbiter of when it is ready to receive the message sequence), hence must take at least
i steps to reach P i .
ffl The single-port communication regimen does not allow P i to overlap dispatching its
message (toward P 0 ) and transmitting a wakeup call to P i+1 .
The algorithm operates as follows. Each PE P_i (i > 0) remains dormant until its
shoulder is tapped by PE P_{i−1} with a wakeup call: the call is a (one-flit) message consisting
of the order
    "begin transmitting your message at time t_i + s_i,"
where s_i is a positive integer. Assume that P_i receives its wakeup call at time t_i. It responds
by serially entering the following operational phases, which embody Algorithm Shoulder-
Tap.
Algorithm Shoulder-Tap:
Phase 0: P_0 transmits to P_1 the wakeup call "begin transmitting your message at time t_1 + s_1."
Phase 1: If i = n, this phase is ignored; else, immediately upon
receiving its wakeup call, P_i transmits
to P_{i+1} a wakeup call of the form "begin transmitting your message at time t_{i+1} + s_{i+1},"
where the positive integer s_{i+1} is computed using the following time-line. (Note the
effect of the single-port communication regimen.)
    time t_i: P_i receives its wakeup call (from P_{i−1}).
    time t_i + 1: P_{i+1} receives its wakeup call from P_i.
    time t_i + s_i: P_{i−1} receives the first flit of M_i from P_i (when L_i ≠ 0).
    time t_i + s_i + L_i − 1: P_{i−1} receives the last (i.e., the L_i-th) flit of M_i from P_i;
    hence, at this time, P_i is ready to relay (passively) any messages it receives from P_{i+1}.
Since P_{i+1} receives its wakeup call at time t_{i+1} = t_i + 1, and s_{i+1} must be positive, P_i
sets s_{i+1} to the smallest positive integer for which the first flit of the message stream headed
by M_{i+1} reaches P_i only after P_i has transmitted the last flit of its own message M_i.
Phase 2: If L_i = 0, this phase is ignored; else, P_i transmits M_i to P_{i−1},
one flit at a time, so that P_{i−1} receives its first flit at time t_i + s_i.
Phase 3: From time t_i + s_i + L_i − 1 on, P_i relays (passively) any messages
it receives from PEs P_j for j > i.

Two small instances of Algorithm Shoulder-Tap appear in Figures 2 and 3. Figure 2
attempts to depict a "typical" message sequence; Figure 3 depicts a somewhat pathological
sequence which illustrates that the dispatch times of messages under the algorithm may not
be monotonic in the indices of the dispatching PEs.
We show now that Algorithm Shoulder-Tap produces an optimal gathering-schedule
for paths.
Theorem 2 Algorithm Shoulder-Tap is an optimal algorithm for gathering on a path.
Proof. Let us consider the behavior of Algorithm Shoulder-Tap on an arbitrary message
sequence
Note first that when all of the messages in sequence M are nonnull, Algorithm Shoulder-
Tap delivers the messages to P 0 in a gap-free fashion. When takes
place from time-step 2 to time-step 1 takes place
from time-step 3 to time-step 2
. In this case, the Algorithm can clearly not be
improved, since the small additive constant in excess of the message-stream length is needed
for synchronization, as in inequality 5.
In order to establish the optimality of Algorithm Shoulder-Tap when some of the
messages in sequence M are null, we introduce the following analogue of FDF scattering-
schedules.
We have already remarked that the ideal gathering-schedule would be one that ran an
FDF scattering-schedule "backwards." From the perspective of PE P 0 , as recipient of the
messages, such a schedule would have messages that originate at nearby PEs arrive before
messages that originate at more distant PEs, i.e., would observe a nearest-received-first
(NRF) regimen. The formal verification that there is an optimal NRF gathering-schedule
satisfying inequality (5) for every message sequence follows the lines of the analogous result
for FDF scattering-schedules (Theorem 1), hence is left to the reader. In common with
Theorem 1, this verification can be visualized geometrically when the underlying tree is a
messages in gathering operations sweep out the same type of parallelograms in the
space-time domain as they do in scattering operations; the main difference is that gathering-
parallelograms slant from the northeast to the southwest, whereas scattering-parallelograms
slant from the northwest to the southeast; cf. Figure 2.
With no loss of generality, we henceforth compare Algorithm Shoulder-Tap only with
gathering-schedules that honor the NRF regimen.
Consider, therefore, an arbitrary NRF gathering-schedule for M,
N(M). For any 2 - j - k, we must have
or else messages M i j
and M i
would either collide or violate the NRF regimen. By
combining inequalities (5) and (6), we obtain:
A straightforward induction establishes that the gathering-schedule produced by Algorithm
Shoulder-Tap satisfies inequality (7) as an equality. It follows that the gathering
time for Algorithm Shoulder-Tap is minimal among algorithms for gathering on a path,
that schedule message deliveries in a distributed fashion, hence obey inequality 5. 2
Generalizing the interlaced synchronization-plus-message passing strategy of Algorithm
Shoulder-Tap to trees whose PEs have multiple children seems to require a rather complicated
protocol: messages must have end-of-message delimiters so that each PE P i can
coordinate the message streams of its children and their descendants. We turn now to an
alternative strategy which accomplishes this coordination in a simpler way, hence extends
gracefully to trees of arbitrary structure.
3.2 Transmission Certification: a Solution for General Trees
We now modify the protocol of Algorithm Shoulder-Tap by decoupling the synchronization
and message passing activities. The resulting Algorithm Transmission-Certification
operates in four phases.
Algorithm Transmission-Certification:
fThe first two phases represent the decoupled synchronization part of the protocol.g
Phase 1: PE P 0 "awakens" all other PEs in the tree by broadcasting a synchronization to-
ken. (This wakeup call lets the PEs know that P 0 is ready to "gather" their messages.)
Phase 2: Each PE P i responds to the synchronization token by sending a (one-flit) transmission
certificate to its parent PE. The certificate indicates how soon P i can initiate
a gap-free transmission of all the messages in the subtree whose root it occupies. The
PEs at the leaves of the tree are the first to send certificates; a nonleaf PE's certificate
is computed using the length of its message, together with the certificates of its
children.
fThe second two phases are reminiscent of Algorithm Shoulder-Tap.g
Phase 3: When P 0 receives its children's certificates, it initiates a wave of transmit-
message orders. Inductively, the orders transmitted by a PE P i to its children schedule
the children's gap-free transmissions: the scheduled dispatch time for each child is
calculated from P i 's own dispatch time, its own message length L i , and the certificates
it received (during Phase 2) from its children.
Phase 4: Finally, the PEs follow the schedule of phase 3, transmitting messages in a gap-free
stream toward P 0 , via their parents.Since P 0 eventually receives the entire set of messages in a gap-free stream (of length
Transmission-Certification is optimal, up to the time required for
the synchronization-and-scheduling protocol. This protocol comprises three phases: two of
the phases (Phases 1 and are essentially broadcasts in the tree; the other (Phase 2) is
essentially a leaf-to-root reverse broadcast, with children's messages being combined into a
single message by each parent. We now describe these phases in detail.
Assume henceforth that each PE P i which is not a leaf in the tree has d i children, denoted
in some arbitrary but fixed order.
Broadcasting and Receiving Messages. Because the single-port communication regimen
allows a PE to communicate with at most two neighbors in a single step (one by sending a
message and one by receiving a message), communications in the various phases of Algorithm
Transmission-Certification must be orchestrated as illustrated in the following scenario.
When PE P i receives a synchronization token "send-certificate" from its parent, it relays
the token in turn to its children, P
. After sending the token to a child, P i
waits to receive that child's transmission certificate before sending the token to the next
child. continues in this fashion, until it has collected transmission certificates from all
children. The reader should note that the Algorithm requires P i to "remember" which
certificate came from which child.
An Overview of Transmission Certificates. During Phase 2 of the Algorithm, each
sends its parent a transmission certificate; this message consists of a pair
of integers is the certified lag time, and n i - 0 is the certified stream
length. The intended interpretation of P i 's transmission certificate is:
c i steps after receiving a transmit-message order, PE P i can start transmitting
toward P 0 a gap-free stream of n i flits, comprising all the messages originating
at PEs in the subtree rooted at P i .
Each PE that is a leaf of the tree can compute its certificate directly from the length of its
message; each nonleaf PE P i computes its certificate from the length of its message, together
with the certificates of its children. (P i needs both the certified lag times and the certified
stream lengths from its children for scheduling purpose, in order to coalesce the children's d i
message streams into a single stream.) When P 0 receives the certificates from its children,
it can proceed to schedule all the transmissions, using transmit-message orders that are
essentially identical to the shoulder taps that characterize Algorithm Shoulder-Tap. The
transmission schedule produced by Algorithm Transmission-Certification differs from
that produced by Algorithm Shoulder-Tap mainly in its avoidance of gaps in message
transmission (such as that observed at Step 8 in Figure 2). We now describe how the
transmission certificates are computed.
Computing Transmission Certificates. Say that PE P i has received the certificates
from its d i children. It uses these certificates, plus the length L i of its message, to compute
its certificate
Length. The computation of P i 's certified stream length n i is straightforward,
since the message stream that P i will transmit is just the concatenation of its message, M i ,
with the message streams of its children; hence,
Lag Time. A PE P i that resides at a leaf of the tree does not have to wait for any
other PE before starting to transmit its message stream - which is just its message M i ;
therefore, it can start transmitting its message stream with no gaps one step after receiving
a transmit-message order, so its certified lag time is just c In contrast, a PE P i
that is not at a leaf of the tree must consider how its message interacts with the message
streams that will come from its children PEs. Specifically, PE P i computes its certified lag
time c i from the certificates c i;1 ; c
of its d i children, via the following reasoning,
which is presented most easily by means of a time-line similar to that used to compute the
wakeup calls in Algorithm Shoulder-Tap. Say that (at some time in the future) P i will
receive the order
transmit in s i steps
at time t. The following actions will ensue.
relay the order to its child P i;j , with an appropriately
modified value s i;j of s i .
as the first stage of transmitting
the message stream from the PEs in the subtree rooted at P i . Note that the
integer s i can be no smaller than d because of the single-port communication
regimen. 3
will begin to relay, without gaps, the message streams sent to it by
its d i children. Note that the integer s i can be no smaller than minfc
because some child of P i must begin its gap-free transmission one step before P i begins
its gap-free relaying. s i may be larger than this lower bound because of the requirement
that message transmission be gap free.
With this time-line in mind, P i computes its certified lag time in four steps, as follows.
1. adopts the preliminary certified lag time c 0
acknowledges the fact that
transmitting its message, M i , until it has dispatched a transmit-message
order to each of its children.
2. P i "adjusts" each of its children's certified lag times, amending the lag time of P i;j , where
acknowledges the fact that P i cannot begin relaying its
children's message streams until it has dispatched a transmit-message order to each of
its children.
3. P i sorts the certified lag times fc i;j of its children, thereby obtaining a
permutation - of the set f1; which orders the children of P i in increasing order of
their certified lag times. (P i will use the permutation - now, in computing its certified lag
time, and later, in computing the transmit-message times for its children.)
4. Finally, P i computes its certified lag time, using a geometrical model. Visualize the
nonnegative x-axis, with the following
ffl X i;0 is a length-L i segment whose left endpoint can be placed anywhere at or to the
right of point c 0
ffl For is a length-n i;j segment whose left endpoint can be placed anywhere
at or to the right of point c 0
3 There is an implicit inductive assumption here that s i
has been assigned a feasible value by P i
's parent.
The intended interpretation is that the x-axis is the time axis, and each line segment represents
the time interval during which the corresponding message stream is being transmitted
by PE P i . Specifically, line segment X i;0 represents the length-L i time interval during which
each other line segment X i;j , for represents the
length-n i;j time interval during which P i relays the message stream it receives from its jth
child P i;j . The restrictions on the placements of the line segments are compatible with this
interpretation: any line segment can be moved to the right, representing a delay in the transmission
time of the corresponding message stream; no line segment can be moved to the left
of its indicated limit (the points c 0
i;k ), for such a move would represent transmitting the
corresponding message stream before the stream is available to it.
now computes its certified lag time by shifting the line segments X i;k along the x-axis
moving segments rightward at will, but never moving any segment X i;k so that its left
endpoint goes to the left of point c 0
i;k - with the goal of combining all d i segments (by
concatenation) into a single line segment of length n i , whose left endpoint is as small, i.e.,
as far to the left, as possible; call this combined line segment X ?
. The left endpoint of
certified lag time c i . Straightforward reasoning allows us to compute c i
explicitly:
Remark 4. (a) Combining the d i +1 line segments into a single line segment X ?
represents
scheduling a gap-free transmission by P i of all messages originating in its subtree.
(b) Placing the line segment X ?
i as far to the left as possible (subject to the constraints of
the points c 0
represents an attempt to schedule P i 's transmission as early as possible.
(c) If we denote by c ?
i;k the left endpoint of line segment X i;k within line segment X ?
the increasing sequence of values of the endpoints c ?
i;k represents a schedule for the gap-free
transmission of the (combined) message streams of P i 's children.
To clarify the connection between moving line segments and scheduling messages, let us
focus on just two segments: for say that line segment X i has length n i and left
constraint c 0
. Say, moreover, that c 0- c 0
. Three cases arise.
1. If c 0
can be positioned in their leftmost legal
positions (namely, c 0
just juxtaposed to form segment X ? .
In this situation, the PEs associated with X 1 and X 2 can both honor their certified lag
times.
2. If c 0
can be positioned in its leftmost possible position (namely,
must be shifted right
positions before being juxtaposed
with segment X 1 in order to form segment X ? . This corresponds to having the PE
associated with time interval X 2 delay its message transmission for
so as not to interfere with the transmission by the PE associated with interval X 1 .
3. If c 0
can be positioned in its leftmost possible position (namely,
must be shifted right c 0\Gamma n 1 positions before being juxtaposed
with segment X 2 in order to form segment X ? . This corresponds to having the PE
associated with time interval X 1 delay its message transmission for c 0\Gamma n 1 time units
so that the final transmission of messages will be free of gaps.
Remark 5. Because line segments start out in their leftmost feasible positions, one can
combine them by moving line segments to the right but never to the left, i.e., by delaying
message streams but never advancing one. This ensures that a single pass over the line
segments, in decreasing order of their indices under the permutation -, suffices to produce
line segment X ?
hence to compute c i .
The Message Scheduling Protocol. After PE P 0 receives a transmission certificate from
its last (i.e., d 0 th) child, it spends the next d 0 steps sending transmit-message orders to
its children. Each order is a one-flit message of the form
after s steps
where the transmission time s is a positive integer; the intended interpretation is that, if a PE
P receives the indicated order at time t, then it begins transmitting its (gap-free) message
stream at time t + s; if P is a nonleaf PE, then it will begin this message transmission
only after it has relayed to its children versions of the order with appropriately modified
times. 4 The issue we must focus on is how a PE (P 0 or any other nonleaf
PE) computes its children's transmission times. This computation can be described more
uniformly if we imagine that P 0 has received the (imaginary) order transmit after 0
steps. Now we can say, uniformly, that nonleaf PE P i receives the order transmit after
steps at time t i , and we can ask, uniformly, how P i computes the transmission times
for its children fP i;j g.
Computing Transmission Times. Say that P i receives its transmission time s i from its
parent at time t i . Earlier, when P i computed its certified lag time c i , it created a tentative
transmission schedule for its children (and itself), which is embodied in the d i +1 start times
g. (Recall that these were computed while constructing the line segment
.) Indeed, c i is just the minimum of these values. The transmission time s i can be viewed
as just an adjustment to this tentative schedule, i.e., as a mandate to adjust the schedule by
When P computed its certified lag time, it included time for relaying orders to its children; hence, we
can safely assume that s has been chosen large enough to allow time for this relaying.
delaying it uniformly by s (equivalently, by shifting X ?
i to the right s
units). Therefore, P i assigns to each of its children P i;j , where 1 - j - d i , the transmission
time s
sends it the order
after s i;j steps .
After dispatching all these orders, P i proceeds to transmit, according to the schedule implicit
in the set fs containing all the messages in its
subtree.
Timing Analysis. The time required by Algorithm Transmission-Certification is divided
into four packets.
1. Broadcasting the synchronization token (Phase 1) and distributing the transmit-
message orders (Phase 3) each takes time essentially equal to the time B for a root-
to-leaf broadcast in the tree.
2. The time C for collecting transmission certificates (Phase 2) is dominated by the
accumulated time for sorting certified lag times at each PE along the leaf-to-root paths
of the tree. This time is estimated as follows. Assign each leaf-PE the weight 0 and
each nonleaf-PE having d children the weight d log 2 d. Assign each root-to-leaf path a
weight that is the sum of the weights of its nodes. Then C is a small multiple of the
maximum weight of a root-to-leaf path.
3. Since message transmission (Phase 4) is gap-free, it requires time
Easily, any gathering algorithm must take time at least max(B;M ); in the worst case, this
bound increases to B+M . To wit, synchronization must take at least B steps, and message
transmission must take at least M steps, yielding the universal lower bound; if there is only
one message in the sequence, and that message resides at a PE at maximum distance from
these activities do not overlap. Summarizing this cost assessment, we arrive at the
following reckoning.
Theorem 3 The time for gathering on a tree using Algorithm Transmission-Certification
is at most 2B . The time for gathering on a tree using any algorithm is at least
max(B;M); in the worst case, this lower bound increases to time B +M .
Figure
4 illustrates the gathering operation of Figure 2 performed using transmission
certificates, rather than shoulder tapping.
As
Figures
2 and 4 indicate, gathering on an n-node ring via transmission certificates is
materially slower (by roughly 2n steps) than gathering on the path via shoulder-tapping, the
extra time being accounted for by the explicit synchronization protocol. Although a portion
of the synchronization time is recovered by the elimination of gaps in the transmission of the
message stream, one would normally choose to use shoulder-taps rather than transmission
certificates when gathering on a path.
4 Algorithms for a Multiport Model
We discuss only briefly how one can extend the gathering algorithms of Section 3.2 to a
multiport communication regimen. Roughly speaking, one can proceed at two levels.
Parallelizing Synchronization. Most simply, in a network with a multiport communication
capability, one can parallelize the three tasks in our algorithm that are dedicated to
synchronization.
Parallelizing the broadcast of the synchronization token requires no modification of the
algorithm.
In contrast, parallelizing the distribution of the transmit-message orders may be
tricky. Specifically, each such order has a transmission time associated with it, and each
child of a given PE must receive a unique such time in order to insure collision-free message
transmission in the absence of message buffers. It is not clear that one can save much time
by parallelizing the transmission of the transmit-message orders if the computation of
the associated transmission times must be sequential.
Finally, parallelizing the computation of certificates is straightforward and, in fact, simplifies
the algorithm by obviating the protocol whereby a PE orchestrates the receipt of
certificates from its children.
Parallelizing Message Transmission. We discuss this topic in the context of scattering
and gathering in arbitrary networks, via the use of spanning trees. There are two compelling
techniques for parallelizing the transmission of messages in a network with a multiport
communication capability. Both techniques involve "covering" the network with trees which
then cooperate in transmitting the messages, using versions of the algorithms presented in
previous sections.
The first technique advocates "covering" the network with mutually edge-disjoint trees
rooted at PE P 0 , which collectively, though not necessarily individually, span the host net-work
Figure
5 depicts two such "coverings:" in Figure 5(a) two trees jointly span the
4 \Theta 4 mesh; in Figure 5(b) two trees each span the 4 \Theta 4 toroidal mesh (i.e., the mesh with
"wraparound" edges). These disjoint trees are then used just as described in previous sec-
tions. The only substantive change in the framework we have been discussing is the role
that PEs play relative to each tree, if they belong to more than one. Most simply, each PE
will be preallocated to one tree in which it will participate actively; the PE will act solely
as a conduit in all other trees. Details can readily be filled in. One attractive feature of
this technique is the availability of research in "covering" certain networks with edge-disjoint
trees (though the requirement that P 0 be the root of all the trees seems to complicate the
problem materially); for instance, one readily shows that the mesh and de Bruijn networks
can be so "covered," as can the hypercube [5, 6].
The second technique modifies the first by dropping the requirement that the "covering"
trees be mutually edge-disjoint. Adapting our algorithms to such a setting may be quite
challenging, as one must schedule the traffic on the shared edges.
--R
A unified approach to global permutation routing on parallel networks.
Complexity of scattering on a ring of processors.
Full Utilization of Communication Resources.
Routing multiple paths in hypercubes.
Optimal algorithms for dissemination of information in some interconnection networks.
Optimal broadcasting and personalized communication in hypercubes.
Approximation algorithms for minimum time broad- cast
Multiscattering on a reconfigurable network of processors.
Data broadcasting in SIMD computers.
Data communication in parallel architectures.
Intensive hypercube communication
"typical"
--TR
Deadlock-free message routing in multiprocessor interconnection networks
Multi-packet-routing on mesh connected arrays
Optimum Broadcasting and Personalized Communication in Hypercubes
All-to-All Broadcast by Flooding in Communications Networks
Intensive hypercube communication. Prearranged communication in link-bound machines
Optimum algorithms for dissemination of information in some interconnection networks
Fully-adaptive minimal deadlock-free packet routing in hypercubes, meshes, and other networks
Full utilization of communication resources
Approximation Algorithms for Minimum Time Broadcast
On Bufferless Routing of Variable-length Message in Leveled Networks (Extended Abstract)
Universal schemes for parallel communication
--CTR
Kevin H. Liu, Performance evaluation of processor allocation algorithms for parallel query execution, Proceedings of the 1997 ACM symposium on Applied computing, p.393-402, April 1997, San Jose, California, United States
Leandros Tassiulas , Jinoo Joung, Performance measures and scheduling policies in ring networks, IEEE/ACM Transactions on Networking (TON), v.3 n.5, p.576-584, Oct. 1995
Sandeep N. Bhatt , Gianfranco Bilardi , Geppino Pucci , Abhiram Ranade , Arnold L. Rosenberg , Eric J. Schwabe, On Bufferless Routing of Variable Length Messages in Leveled Networks, IEEE Transactions on Computers, v.45 n.6, p.714-729, June 1996
Weizhen Mao , Jie Chen , William III Watson, One-to-all personalized communication in torus networks, Proceedings of the 25th conference on Proceedings of the 25th IASTED International Multi-Conference: parallel and distributed computing and networks, p.291-296, February 13-15, 2007, Innsbruck, Austria | multiprocessor interconnection networks;queueing mechanisms;trees of processors;message scattering;distributed processing;networks of processors;buffering;messages gathering;noncolliding paths;scheduling;spanning trees |
626788 | A Data Structure for Circular String Analysis and Visualization. | A csdawg for circular strings, which is obtained by making simple modifications to the compact symmetric directed acyclic word graph (csdawg) for linear strings, is proposed. This data structure does not contain extraneous vertices and, consequently, avoids the disadvantages of previous methods. Using this method, algorithms which make use of the csdawg for linear strings can then be extended to circular strings with trivial modifications. The extended algorithms continue to have the same time and space complexities. Moreover, the extensions take the form of postprocessing or preprocessing steps which are simple to add on to a system built for linear strings, particularly in an object-oriented language. | Introduction
The circular string data type is used to represent a number of objects such as circular
genomes, polygons, and closed curves. Research in molecular biology involves the identification
of recurring patterns in data and hypothesizing about their causes and/or effects
[1, 2]. Research in pattern recognition and computer vision involves detecting similarities
within an object or between objects [3].
Detecting patterns visually is tedious and prone to error. In [4], a model was proposed
to alleviate this problem. The model consists of identifying all recurring patterns in a string
and highlighting identical patterns in the same color.
[4] also listed a number of queries that the model would support. In [5], efficient (mostly
optimal) algorithms were proposed for some of these queries for linear strings. These algorithms
perform operations and traversals on the symmetric compact directed acyclic word
graph (scdawg) [6] of the linear string. The scdawg, which is used to represent a string or a
set of strings, evolved from other string data structures such as position trees, suffix trees,
directed acyclic word graphs, etc [7, 8, 9, 10].
One approach for extending these techniques to circular strings is to arbitrarily break
the circular string at some point so that it becomes a linear string. Techniques for linear
strings may then be applied to it. However, this has the disadvantage that some significant
patterns in the circular string may be lost because the patterns were broken when linearizing
the string. Indeed, this would defeat the purpose of representing objects by circular strings.
[3] defined a polygon structure graph, which is an extension of suffix trees to circular
strings. However, the suffix tree is not as powerful as the scdawg and cannot be used to
solve some of the problems that the scdawg can solve. In this paper, we define an scdawg
for circular strings. Algorithms in [5] and [6] which make use of the scdawg for linear strings
can then be extended to circular strings with minor modifications. The extended algorithms
continue to have the same efficient time and space complexities. Further, the extensions
take the form of postprocessing or preprocessing steps which are simple to add on to a
system built for linear strings, particularly in an object oriented language.
e
c
a
d
c
a
Figure
1: Circular string
Section 2 contains definitions. Section 3 describes the scdawg for linear strings while
Section 4 describes its extension to circular strings. Section 5 deals with the computation
of occurrences of displayable entities. Section 6 introduces the notion of conflicts and
Section 7 lists other queries that are to be implemented. Section 6 also explains how the
algorithms implementing queries for linear strings can be modified so that they work with
circular strings. Finally, Section 8 mentions some applications for the visualization and
analysis of circular strings.
Let s denote a circular string of size n consisting of characters from a fixed alphabet, \Sigma,
of constant size. Figure 1 shows an example circular string of size 8. We shall represent a
circular string by a linear string enclosed in angle brackets "!?" (this distinguishes it from
a linear string) . The linear string is obtained by traversing the circular string in clockwise
order and listing each element as it is traversed. The starting point of the traversal is chosen
arbitrarily. Consequently, there are up to n equivalent representations of s. In the example,
s could be represented as !abcdabce?, !bcdabcea?, etc.
We characterize the relationship between circular strings and linear strings by defining
the functions, linearize and circularize. linearize maps circular strings to linear strings. It
is a one-many mapping as a circular string can, in general, be mapped to more than one
linear string. For example, dabcg. We will assume,
for the purpose of this paper, that linearize arbitrarily chooses one of the linear strings; for
convenience we assume that it chooses the representation obtained by removing the angle
brackets "!?". So, linearize(!abcd?) = abcd. circularize maps linear strings to circular
strings. It is a many-one function and represents the inverse of linearize.
We use lower case letters to represent circular strings and upper case letters to represent
linear strings. Further, if a lower case letter (say, s) is used to represent a particular circular
string, then the corresponding upper case letter (S) is assumed to be linearize(s). A single
character in s or S occurring in the i th position is denoted by s i or S i , respectively. A
substring of S is denoted by S i;j where i - j. S substring of s is denoted
by s i;j , where s j. For example, if
We use the symbol, fl, to
denote either a circular string or a linear string. In the example,
The predecessor, pred(fl,i,j) of a substring fl i;j of fl is defined as
linear
The successor, succ(fl,i,j) of a substring fl i;j of fl is defined as
and fl is linear
and fl is circular
The immediate context, context(fl,i,j) of a substring fl i;j of fl is the ordered pair
The predecessor, pred(fl; ff), and successor, succ(fl; ff), sets of a pattern, ff, in a string fl
are defined as below:
pred(fl; g. succ(fl; g.
The immediate context set, context(fl,ff) of a pattern, ff, in fl is the set
g.
In the example string of Figure 1, succ(s;
A pattern occurring in fl is said to be maximal iff its occurrences are not all preceded
by the same character nor all followed by the same character. So, a pattern ff of length !
n in fl is maximal iff jpred(fl; ff)j - 2 and jsucc(fl; ff)j - 2. This is not necessarily true for
patterns of length greater than or equal to n. For example, S is maximal in S (since it is
neither preceded nor followed by a character), but jpred(S;
A pattern is said to be a displayable entity (or displayable) of fl iff it is maximal and
occurs at least twice in fl. Note that if fl represents a circular string, then a pattern can
be arbitrarily long. In the rest of our discussion, we will assume that displayable entities of
circular strings have length less than n.
3 Scdawgs For Linear Strings
An scdawg, corresponding to a string S is a directed acyclic
graph defined by a set of vertices, V (S), a set, R(S), of labeled directed edges called right
extension (re) edges, and a set of labeled directed edges, L(S) called left extension (le)
edges. Each vertex of V (S) represents a substring of S. Specifically, V (S) consists of a
source (which represents the empty word, -), a sink (which represents S), and a vertex
corresponding to each displayable entity of S.
Let de(v) denote the string represented by vertex, v, v ffl V (S). Define the implication,
imp(S; ff), of a string, ff of S to be the smallest superword of ff in fde(v)j v ffl V (S)g, if
such a superword exists. Otherwise, imp(S; ff) does not exist. Re edges from v 1 (v
are obtained as follows: for each letter, x, in \Sigma, if imp(S; de(v 1 )x) exists and is equal to
there is an re edge from v 1 to v 2 with label xfl. If fi is the empty
string, then the edge is known as a prefix extension edge. Le edges from v 1 (v are
obtained as follows: for each letter, x, in \Sigma, if imp(S; xde(v 1 exists and is equal to de(v 2
gabcde
de
e
de
c
abc
bc
de
gabcde
fabcgabcde
gabcde
fabcgabcde
cde
sink
abc
c
source
Figure
2: SCDAWG for re edges are shown
there is an le edge from v 1 to v 2 with label flx. If fi is the empty string, then
the edge is known as a suffix extension edge. Figure 2 shows (V (S),R(S)) corresponding
to cde, and c are the displayable entities of S. There are two re
edges from the vertex representing abc. These correspond to g. imp(S,abcd)
Consequently, both edges are incident on the sink. There are no edges
corresponding to the other letters of the alphabet as imp(S,abcx) does not exist for x ffl
fg.
Notice that the number of re edges from a vertex, v, equals jsucc(S; de(v)) - f1gj and
the number of le edges equals jpred(S; de(v)) - f1gj. In the example,
So, the number of right edges leaving the vertex corresponding to it is 1.
The space required for SCD(S) is O(n) and the time needed to construct it is O(n) [7, 6].
While we have defined the scdawg data structure for a single string, it can be extended to
represent a set of strings [6].
4 Extension to Circular Strings
In Section 4.1, we present a constructive definition of an scdawg for circular strings. Section
4.2 analyzes the complexity of the algorithm of Section 4.1 to construct the scdawg of
a circular string and Section 4.3 identifies and proves some properties of this scdawg.
4.1 SCDAWGs For Circular Strings
The notion of an scdawg may be extended to circular strings. The scdawg for circular
strings is defined constructively by the algorithm of Figure 3. The scdawg for the circular
string s is obtained by first constructing the scdawg for the linear string
that linearize(s)). A bit is associated with each re edge in R(T ) indicating whether it
is a prefix extension edge or not. Similarly, a bit is associated with each le edge in L(T )
to identify suffix extension edges. Two pointers, a suffix pointer and a prefix pointer are
associated with each vertex, v in V (T ). The suffix (prefix) pointer points to a vertex, w,
in V (T ) such that de(w) is the largest suffix (prefix) of de(v) represented by any vertex
in V (T ). Suffix (prefix) pointers are the reverse of suffix (prefix) extension edges and are
derived from them. Figure 4 shows SCD(T cabcbab. The broken
edge from vertex c to vertex abc is a suffix extension edge, while the solid edge from vertex
ab to vertex abc is a prefix extension edge.
Next, in step 2, suffix and prefix redundant vertices of SCD(T ) are identified. A suffix
(prefix) redundant vertex is a vertex v that satisfies the following properties:
(a) v has exactly one outgoing re (le) edge.
A vertex is said to be redundant if it is either prefix redundant or suffix redundant or both.
In
Figure
4, vertex c is prefix redundant only, while vertex ab is suffix redundant only. No
other vertices in the figure are redundant (in particular, the vertex representing S is not
redundant even though it has one re and one le out edge as n). The fact that step 2
does, in fact, identify all redundant vertices is established later.
Vertices of SCD(T ) are processed in reverse topological order in step 3 and redundant
Algorithm A
Step1: Construct SCD(T ) for
fIdentify Suffix Redundant Verticesg
while v 6= source do
begin
if v has exactly one outgoing re edge
then
then mark v suffix redundant;
else
exit Step 2(a);
fIdentify Prefix Redundant verticesg
fSimilar to Step 2 (a)g
Step3:
while (v !? source) do
begin
case v of
suffix redundant but not prefix redundant: ProcessSuffixRedundant(v);
prefix redundant but not suffix redundant: ProcessPrefixRedundant(v);
suffix redundant and prefix redundant : ProcessBothRedundant(v);
not redundant : fDo nothing
Figure
3: Algorithm for constructing the scdawg for a circular string
T=S.S
c
cabcbab
cabcbab
cabcb
abcbab
c
bab
cabcb
c
a
c
ab
cabc
abcbab
bab
ab
c
c
a
ab
c
abc
ab
Figure
4: SCD(T ) for T=cabcbabcabcbab
Procedure ProcessSuffixRedundant(v)
1. Eliminate all left extension edges leaving v (there are at least two of these).
2. There is exactly one right extension edge, e, leaving v. Let the vertex that it leads to
be w. Let the label on the right extension edge be xfl. Delete the edge.
3. All right edges incident on v are updated so that they point to w. Their labels are
modified so that they represent the concatenation of their original labels with xfl.
4. All left edges incident on v are updated so that they point to w. Their labels are not
modified. However, if any of these were suffix extension edges, the bit which indicates
this should be reset as these edges are no longer suffix extension edges.
5. Delete v.
Figure
5: Algorithm for processing a vertex which is suffix redundant
vertices are eliminated. When a vertex is eliminated, the edges incident to/from it are
redirected and relabeled as described in Figures 5 to 10. The resulting graph is CSCD(s).
The set of vertices of CSCD(s) is denoted by CV (s). The set of right (left) edges of
is denoted by CR(s) (CL(s)). Figure 11 shows CSCD(s) for
Notice that vertices c and ab have been eliminated and that the two incoming edges to c
and the three incoming edges to ab of Figure 4 now point to abc.
uL
uR
uR uL
U s
U s
Figure
Procedure ProcessPrefixRedundant(v)
1. Eliminate all right extension edges leaving v (there are at least two of these).
2. There is exactly one left extension edge, e, leaving v. Let the vertex that it leads to
be w. Let the label on the left extension edge be flx. Delete the edge.
3. All left edges incident on v are updated so that they point to w. Their labels are
modified so that they represent the concatenation of flx with their original labels.
4. All right edges incident on v are updated so that they point to w. Their labels are not
modified. However, if any of these were prefix extension edges, the bit which indicates
this should be reset as these edges are no longer prefix extension edges.
5. Delete v.
Figure
7: Algorithm for processing a vertex which is prefix redundant
uL
uR
uR uL
U s
U s
Figure
8: v is prefix redundant
Procedure ProcessBothRedundant(v)
1. There is exactly one right extension edge, e 1 , leaving v. Let the vertex that it leads
to be w 1 . Let the label on the edge be xfl. Delete the edge.
2. There is exactly one left extension edge, e 2 , leaving v. Let the vertex that it leads to
be w 2 . Let the label on the edge be flx. Delete the edge.
fWe establish later that w 1 and w 2 are, in fact, the same vertex.g
3. All right edges incident on v are updated so that they point to w 1 . Their labels are
modified so that they represent the concatenation with xfl. If any of these edges were
prefix edges, the bit which indicates this should be reset.
4. Similarly, left edges incident on v are updated so that they point to w 2 . Their labels
are modified so that they represent the concatenation with flx. If any of these edges
were suffix extension edges, the bit which indicates this should be reset.
5. Delete v.
Figure
9: Algorithm for processing a vertex which is prefix and suffix redundant
e L (l L fiy)
e R (l R xfl)
e L (l L )
uL
uR
uR uL
U s
U s
Figure
10: v is suffix and prefix redundant
Lemma 1 For every substring s i;j of length ! n of s, there exists a substring, T l;m (= s i;j ),
of T such that context(T ;
Proof
Case
Case
Subcase
context(s;
Subcase
Subcase
possible since the length of s
Corollary 1 For every pattern, ff, of length ! n in s, context(s; ff) ' context(T ; ff).
Corollary 2 For every pattern, ff, of length ! n in s, pred(s; ff) ' pred(T ; ff) and succ(s; ff)
be a substring of T . If i 6= 1, then there is a substring s l;m (=T i;j ) of s
such that pred(s; l; m)
Proof If the result follows from the definition of pred(T ; choose s l;m
so that
the length of T i;j is greater than n, s l;m is assumed to wrap around once). So, pred(s; l; m)
Corollary 3 For every pattern, ff, of length ! n in T , pred(T
Theorem 1 For every pattern ff of length less than n, pred(s;
Proof From Corollary 2 we have pred(s; ff) ' pred(T ; ff). So, pred(s; ff) - f1g '
pred(T ; ff) - f1g and hence pred(s; ff) ' pred(T ; ff) - f1g (since pred(s; ff) does not contain
cabc
ab
T=S.S
source
abcbab
cabcb
cabcbab
cabcbab
bab
c
a
c
c
abc
a
abc
abc
Figure
11: Scdawg for
1). From Corollary 3 we have pred(s; ff) ' pred(T ; ff) - f1g. So, pred(s;
f1g. The proof that succ(s;
Theorem 2 A vertex, v with jde(v)j ! n in V (T ) is non redundant iff de(v) is a displayable
entity of s.
Proof Suppose ff is a displayable entity of s. Then, we have jpred(s; ff)j - 2 and jsucc(s; ff)j
2. From Theorem 1 we have jpred(T 2. So, ff
is a displayable entity in T and the corresponding vertex in V (T ) has at least two le and
two re edges leaving it. Hence, v is not redundant.
Next, suppose there is a non redundant vertex, v, in SCD(T ) with
de(v). Since v is not redundant, jpred(T 2. From
Theorem 1 we have jpred(s; ff)j - 2 and jsucc(s; ff)j - 2. So, ff is a displayable entity of s.Corollary 4 A redundant vertex in V (T ) is not a displayable entity of s.
Lemma 3 (a) A vertex, v, in V (T ) will have exactly one re (le) out edge only if de(v) is
a suffix (prefix) of T .
(b) If a vertex, v, such that de(v) (jde(v)j ! n) is a suffix (prefix) of T has more than one
re (le) out edge, then no vertex, w, such that de(w) is a suffix (prefix) of de(v) can be suffix
(prefix) redundant.
Proof (a) Suppose de(v) is not a suffix of T . Then 1 is not an element of succ(T ; de(v)).
2. So, v has at least two re out edges, which
is a contradiction. Hence, de(v) must be a suffix of T .
(b) Since de(w) is a suffix of de(v), a successor of de(v) must also be a successor of de(w). So,
(de(v) has at least two re out edges). So, w must have at least two re out edges
and cannot be suffix redundant. 2
We can now show that step 2(a) of Algorithm A identifies all suffix redundant vertices in
it is sufficient to examine vertices corresponding to suffixes of T (Lemma 3(a)),
step 2(a) follows the chain of suffix pointers starting from the sink. If a vertex on this chain
representing a displayable entity of length ! n has one re out edge, then it is marked suffix
redundant. The traversal of the chain terminates either when the source is reached or a
vertex with more than one re out edge is encountered (Lemma 3(b)). Similarly, step 2(b)
identifies all prefix redundant vertices in V (T ).
4.2 Complexity Analysis
will in the worst case traverse all the vertices in SCD(T )
spending O(1) time at each. The number of vertices is bounded by O(n) [6]. So, step 2
takes O(n) time. Step 3 traverses SCD(T ). Each vertex is processed once; each edge is
processed at most twice (once when it is an incoming edge to the vertex being currently
processed, and once when it is the out edge from the vertex currently being processed. So,
Step 3 takes O(n) time (note that SCD(T ) has O(n) edges).
4.3 Properties of CSCD(s)
Define the implication, imp(s; ff), of a string, ff, with respect to CSCD(s) to be the smallest
superword, fifffl, of ff represented by a vertex in CV (s), such that there does not exist a
substring fi 1 fffl 1 of T where the length of the least common suffix,
is less than min(jfij; or the length of the least common prefix, lcp(fl;
is less than min(jflj; jfl 1 j), if such a superword exists. Otherwise, imp(s; ff) does not exist.
The additional condition (which is referred to as the uniqueness condition) that is imposed
on imp(s; ff) is guaranteed for imp(T ; ff) by the definition of SCD(T ).
be the smallest set of superword displayable entities
of abc in s such that any superword displayable entity of abc in s is a superword of an
element of R. Then, de(s; abc) must be one of the elements of R. We have jlcs(b;
abc) is neither babcaa nor cabcaa. Further, since jlcs(aaaa; aa)j
abcaaaa.
Lemma 4 Let v be a suffix and prefix redundant vertex in SCDINT (T ), where SCDINT (T )
represents an intermediate configuration between SCD(T ) and CSCD(s) just after the while
statement in Step 3 of Algorithm A. Let the le and re out edges be incident on w 1 and w 2
respectively, where de(w 1
are not redundant, then w
Proof Case 1. jde(w 1
cannot be nil (if it is, then w 1 is prefix redundant since jde(w 1 )j ! n and all occurrences
of de(v) except the prefix of S are preceded by y). Similarly, must be
of the form fi 3 yde(v)xfl 1 , since y is the only letter that precedes de(v). Similarly, de(w 2 )
must be of the form fi 2 yde(v)xfl 3 . We now show that fi Assume that
this is not the case. Since jde(w 1 are not redundant,
are all at least 2.
So, there must exist a displayable entity, fi m yde(v)xfl m , of s where fi m is the largest common
suffix of fi 3 and fi 2 and fl m is the largest common prefix of fl 1 and fl 3 . Further, fi
de(v)
de(v)
de(v)
de(v)642S
Figure
12: Illustration of proof of prefix/suffix redundancy invariant
imp(s; yde(v)), which contradicts statements made above.
Case 2. jde(w 1 )j - n,
cannot be nil, otherwise w 2 is suffix redundant. So, de(w 2 as x is the
only letter that follows de(v). Arguments similar to those in Case 1 show that since jde(w 2 )j
must be a prefix of fl 3 and fi 1 a suffix of fi 2 y. But, then
a contradiction. Hence, Case 2 cannot exist.
Case 3. jde(w 2 )j - n,
Similar to Case 2.
Case 4. jde(w 2 )j - n,
Figure
12 shows that for this case to occur,
this the prefix/suffix redundancy invariant. The figure assumes that jde(w 1
de(v) is a prefix of de(w 1 ), and that jde(v)j ! n=2 and divides n. However, the prefix/suffix
redundancy invariant can be shown to be true in all other cases. Two copies of T are shown
in the figure. The first copy shades the occurrence (n of de(v) and its
ff 2m
ff
Vertices
Other
ff
ff
ff
ff
ff
ff
ff
ff
ff
Figure
13: SCD(ff 2m )
extension to de(w 1 ). The second shades the occurrence (n+1; n+ jde(v)j) and its extension
to de(w 1 ). Since the shaded regions in both strings represent
Next, we assume without loss of generality that there is no fi such that
Call this the smallest repetition assumption.
The only occurrences of ff in T are at ((1; jffj); (jffj+1; 2jffj); :::; ((m \Gamma 1)ff+1; 2n)) (if not,
an argument similar to the one of Figure 12 contradicts the smallest repetition assumption).
takes the form of Figure 13. Each vertex representing ff
exactly one le and one re out edge as shown.
All remaining displayable entities of T are subwords of ff 2 and are of size less than jffj
(if not, an argument identical to the one in Figure 12 contradicts the smallest repetition
assumption). The vertices representing these displayable entities are represented by the box
in
Figure
13.
None of the vertices in the box has out edges incident on vertices representing the displayable
entities g. In particular, no out edges from the vertices in the box
are incident on vertices representing displayable entities of length greater than n. After
2m ) has been processed by Algorithm A, all incoming edges to vertices corresponding
to ff and ff 2 in SCD(ff 2m ) are incident on the vertex corresponding to
CSCD(ff 2m ). It follows that any prefix and suffix redundant vertex in SCD(ff 2m ), when
processed by Step 3 of Algorithm A can have both edges incident on w 1 and w 2 such that
are at least n only if de(w 1
properties P1, P2, and P3 stated below (Theorem 3). These properties
ensure that the algorithms of [5] can be extended to circular strings.
consists of a source and a sink. For each v of CV (s) that is not the source or
sink, the following are true:
(a) jde(v)j ! n iff de(v) is a displayable entity of s.
(b) if jde(v)j - n, then de(v) is a displayable entity of T .
There exists an re out edge corresponding to letter x in \Sigma from vertex v 1 in CV (s) to
vertex exists and is equal to de(v 2 ). If de(v 2
then the label on the re edge is xfl. If then the edge is a prefix extension edge.
3: Similar to P2 but for le edges.
Theorem 3 CSCD(s) satisfies P1, P2, and P3.
Proof Property P1 is established by the knowledge that SCD(T ) contains all displayable
entities of T and that Algorithm A only eliminates those displayable entities of T of length
less than n, which are not displayable entities of s (Corollary 4).
P2 and P3 are proved by induction. The induction hypothesis is:
Let U s be the subset of U T that remains after the vertex set U T ' V (T ) has been processed
by step 3 of Algorithm A.
Let RUs be the set of re edges which are incident on vertices in U s . For any re edge r ffl
RUs from vertex u to w with label xfl, imp(s;
condition holds for le edges.
(II) For each vertex u in U s [ there is an re out edge corresponding to each
letter x in succ(T ; incident on a vertex in U s [
condition holds for le edges.
When U by definition. So, R CV
establishes that these edges are incident on the correct vertices and that their labels are
correct. (II) establishes that CR(s) is complete. So P2 holds. Similarly, P3 holds.
Induction Base: U fg. RUs and LUs are empty so (I) does not apply. (II) is
established from the definition of SCD(T ).
Induction Step: Consider vertex, v (v ffl V (T )), which is about to be processed by step 3
of algorithm A. Let U 0
T and U 0
s denote U T and U s respectively after v has been processed.
We must show that (I) and (II) hold for U 0
s and U 0
T . Since the vertices are processed in
reverse topological order, all out edges from v are incident on vertices in U s and are therefore
elements of RUs or LUs . So, they must satisfy (I).
Case 1: v is not redundant. U 0
since v is not eliminated.
We must show that (I) is true for incoming edges to v as these are the only additions to
RUs and LUs . I.e., R U 0
s
fincoming right edges to vg, L U 0
s
fincoming left
edges to vg.
Let e be an re edge with label xfl from u to v. From the definition of SCD(T ), we have
is the smallest superword
of de(u)x in fde(w)jw ffl V (T )g. Since CV
(s)g. But,
this is true since v ffl CV (s). So,
symmetric argument can be made for incoming le edges to v.
The letter of the alphabet to which an re (le) out edge corresponds is the first (last)
character in its label. Since no out edges are added, deleted, or redirected and the labels
of all out edges are unchanged, each vertex has an re/le out edge corresponding to the
same letter of the alphabet as it had prior to processing vertex v. So, (II) holds (induction
hypothesis).
Case 2: v is redundant. U 0
since v is eliminated.
Subcase (a): v is suffix redundant only. By definition, v consists of a single re
out edge, e, to a vertex w in U s . Let label(e) = xfl. From the induction hypothesis,
imp(s; first establish that (i)
(ii) de(v) is a prefix of de(w).
imp(s; de(v)) 6= de(v) as v is redundant. So, imp(s; de(v)) must correspond to a vertex
on which one of the out edges from v is incident, since there is an out edge corresponding
to each element in pred(s; de(v)) [ succ(s; de(v)) (from (II)). The single re edge is incident
on w, which represents imp(s; de(v)x). The left out edges from v are incident on vertices
which represent imp(s; x i de(v)) for 2. From the definition of
imp(s; de(v)), none of these vertices can possibly represent imp(s; de(v)). For instance, if
imp(s; x i de(v)) is imp(s; de(v)), then the string, imp(s; x j de(v)), i 6= j, would invalidate
the definition.
So, imp(s; de(v)) must be de(w). However, for this to be true, we must show that
nil and therefore that de(v) is a prefix of de(w). All occurrences of de(v) in s are followed
by x. So, jpred(T 2. An argument
similar to the one in the previous paragraph shows that for imp(s; de(v)x) to exist,
We have RU 0
s
re out edge from vg fincoming re edges to vg and LU 0
s
out edges from vg fincoming le edges to vg. (I) and (II) do not apply to the
edges deleted from RUs and LUs . So, we only need to prove (I) and (II) for incoming edges
to v.
Let e R be an re edge incident on v from vertex uR with label yfl 1 so that
must be redirected to imp(s; de(u R )y) for (I) to
hold. imp(T ; de(u R de(v) is the smallest superword of de(u R )y in fde(a)j a ffl V (T )g.
imp(s; de(u R )y) is the smallest superword of de(u R )y that satisfies the uniqueness condition
in fde(a)j a ffl CV (s)g ' fde(a)j a ffl V (T )g. Since v
imp(s; de(u R )y) is the
smallest superword of de(v) that satisfies the uniqueness condition in fde(a)j a ffl CV (s)g.
imp(s; de(u R xfl. The updated re edge, e R ,
is incident on w and has label yfl 1 xfl which was obtained in step 3 of Algorithm A by concatenating
continues to be a prefix extension
edge. e R satisfies (I).
Let e L be an le edge incident on v from uL so that
z). Using the same argument that was used for e R , we have imp(s; zde(u L
is redirected to w and its label remains unchanged. Clearly,
e L is no longer a suffix edge even if fi
Notice that (II) continues to be satisfied as each out edge corresponding to any vertex
in U 0
continues to be associated with the same character (in particular,
label(e R ) continues to begin with y and label(e L ) continues to end with z); and each out
edge continues to leave the same vertex (in particular, e R continues to leave uR , e L continues
to leave uL ).
Subcase (b): v is prefix redundant only. Symmetric to subcase (a).
Subcase (c): v is prefix and suffix redundant. So, v has one re out edge, e 1 , to
vertex w 1 in CV (s). Let label(e 1 Also, v has one le out edge, e 2 , to vertex w 2 in
y.
From the induction hypothesis, de(w 1
imp(s;
The conditions for Lemma 4 are satisfied since w 1 and w 2 are not redundant (otherwise
they would have been eliminated). Thus, de(w 1 imp(s; de(v)) can
either be imp(s; de(v)x) or imp(s; yde(v)). But, both these expressions are equal to de(w).
So, imp(s;
The proof that (I) and (II) are satisfied is similar to that for subcase (a). Note, however,
that any incoming prefix/suffix extension edges to v will no longer remain prefix/suffix
extension edges as xfl and fiy are not nil. 2
Computing Occurrences of Displayable Entities
Procedure LinearOccurrences(S; v) of Figure 14, which is based on the outline in [6], reports
the end position of each occurrence of de(v), v ffl V (S), in the linear string S . However,
invoking LinearOccurrences(T; v), v ffl CV (s), does not immediately yield all occurrences
of de(v) in T . In Section 5.1 we present a modification which obtains all occurrences of
displayable entities of s. In Section 5.2 we show that this modification is correct and that
its time complexity is optimal.
5.1 Algorithm
An auxiliary boolean array , reported[1.n], is used in conjunction with CSCD(s). Initially,
all elements of this array are set to false. Procedure CircOccurrences(s; v) of Figure 15
computes the end positions of each de(v) (v ffl CV (s)) in s. LinearOccurrences(T; v) of
line 1 will not necessarily compute all occurrences of de(v) in T , since it is being executed
on CSCD(s) and not on SCD(T ). Note, also, that an occurrence of de(v) ending at
position i (i - n) in T has an identical occurrence ending at position n
occurrences correspond to the same occurrence of de(v) in s. So,
reports both occurrences, then only the single corresponding
occurrence of de(v) in s must eventually be reported.
Lines 4-7 transform the occurrence l, if necessary, so that it represents a value between
1 and n. If this occurrence has not already been listed, then it is added to the list of
occurrences and the corresponding element of reported is set to true. If the occurrence has
been listed then it is a duplicate (lines 8-12). After all occurrences have been computed,
all elements of reported are reset to false (lines 14,15) so that reported can subsequently be
reused to compute the occurrences of some other displayable entity in s.
In the example of Figure 11, LinearOccurrences(T ; v), where v represents abc, does report
the end positions of all occurrences of abc in T (i.e., 4, 8, and 11). Lines 2 to 12 transform
this into the list of end positions of abc in s (i.e., 1 and 4) corresponding to s 6;1 and s 2;4
respectively.
Figure 16 shows the de(v)'s, de(w)'s, and de(x)'s for a hypothetical string.
Figure 17 shows some fragments of its scdawg. v is suffix redundant in SCD(T) and its
single re out edge is incident on w. There is an re edge from x to v and x is not redundant.
By construction, the re edge from x to v in SCD(T ) becomes an re edge from x to w
in CSCD(s). Procedure LinearOccurrences(T; x), x in CSCD(s) will fail to yield the
rightmost occurrence of de(x) in T , since that occurrence is neither a subword of de(w)
Procedure LinearOccurrences(S:string, v:vertex)
{Obtain all occurrences of de(v), v ∈ V(S), in S}
Procedure Occurrences(S:linear string, v:vertex, i:integer)
begin
if de(v) is a suffix of S
for each re out edge, e, from v in SCD(S) do
begin
let w be the vertex on which e is incident;
Figure 14: Obtaining all occurrences of a displayable entity in a linear string
Procedure CircOccurrences(s:circular string, v:vertex)
{v is a vertex in CSCD(s)}
2 for each reported occurrence l of de(v) do
6 else
8 if not reported[k] then
9 begin
add k to final list of occurrences
14 for each occurrence, l, of de(v) in s do
Figure 15: Obtaining all occurrences of a displayable entity in a circular string
Figure 16: Example string
nor a suffix of T . In the next section, we show that CircOccurrences(s; x) computes all
occurrences of de(x) in s in spite of the fact that LinearOccurrences(T; v) does not compute
all occurrences of de(x) in T .
5.2 Proof of Correctness
substring of T. Assume that T_{i,j} is not a suffix of T;
define the immediate
right extension IRE(SCD(T), T_{i,j}) of T_{i,j} in SCD(T) to be the occurrence T_{i-|β|, j+|γ|+1} of
displayable entity y.
Let T_{i,j}, v ∈ CV(s), be a substring of T. Assume that T_{i,j} is not a suffix of T;
define the immediate
right extension IRE(CSCD(s), T_{i,j}) of T_{i,j} in CSCD(s) to be the occurrence T_{i-|β|, j+|γ|+1}
of displayable entity y.
So, if in Figure 16,
which is the occurrence of de(v) corresponding to the suffix of T . However,
which does not represent a valid
substring of T .
Figure 17: Fragments of scdawgs corresponding to Figure 16
An occurrence T_{i,j} is said to be Right Retrievable (RR) in SCD(T)
iff one of the following is true:
RR in SCD(T).
Similarly, an occurrence T_{i,j} is said to be Right Retrievable (RR) in CSCD(s)
iff one of the following is true:
RR in CSCD(s).
defined for any occurrence, T_{i,j}. T_{i,j} is not RR in CSCD(s) only if (i) IRE(CSCD(s), T_{i,j}) does not represent
a substring of T or (ii) IRE(CSCD(s), T_{i,j}) is a valid substring of T, but is not RR in
CSCD(s).
In the example of Figure 16, T_{2n-|de(x)α|+1, 2n-|α|} is RR in SCD(T), but not RR in
CSCD(s).
Notice that (i is not a substring of T iff
Lemma 5 For k - 1, if IRE
substrings of T and if (i
then
Proof Assume that there exists a pair of substrings
and
of T , such that
are assuming that their IRE's are defined).
By symmetry, both occurrences represent the same displayable entity (say, de(v)). Further,
), then from the definition
of IRE, we have n. Applying this argument repeatedly
proves the lemma 2
Lemma 6 The RR occurrences of de(v), v in V (T ) (CV (s)) in SCD(T ) (CSCD(s)) are
exactly those occurrences of de(v) which are obtained by LinearOccurrences(T ,v).
Proof Follows from the definition of RR occurrences. □
Corollary 5 All occurrences of a pattern de(v) (v ∈ V(T)) in T are obtained by LinearOccurrences(T, v).
Lemma 7 All occurrences of de(v) in T, v ∈ CV(s), where |de(v)| ≤ n, are obtained by LinearOccurrences(T, v).
Proof This follows from Corollary 5 and the construction of CSCD(s) in which no right
out edges from vertices representing displayable entities of size - n were modified.
are RR in CSCD(s).
Proof Assume that the lemma is false and that there exists an occurrence, T i;j , of de(v)
with which is not RR in CSCD(s).
Clearly, j ≠ 2n, otherwise T_{i,j} would be RR in CSCD(s). Let last denote the smallest
value of k for which IRE^k(CSCD(s), T_{i,j}) is not a substring of T. Such a last ≥ 1 must
exist since T_{i,j} is not RR. Let (i_last, j_last) denote IRE^last(CSCD(s), T_{i,j}). Let z be the
vertex in CV(s) to which T_{i_last, j_last} corresponds.
Case 1. last
last < 2n. Consider the string T_{1, j_last} in T. Its length is greater than n. If there
were two occurrences of this string in T, then it would be a displayable entity of length > n
(because (i) T_{i, j_last} does not have a predecessor and (ii) de(z) is maximal and its occurrences
are not all followed by the same letter). A vertex corresponding to this displayable entity
would not have been eliminated by Algorithm A since its length would be ≥ n, and T_{1, j_last}
would be RR in CSCD(s) (Lemma 7). So, there must exist only one occurrence of the
string represented by T_{i, j_last}. But, this string is a proper suffix of de(z), which means that
one of its occurrences is preceded by a character. So, there are two occurrences of this
string. This leads to a contradiction.
Case 2. last > 2n.
The proof is similar to the one for Case 1. □
Lemma 9 At least one of the two occurrences, T i;j and T i+n;j+n , of de(v),
ffl CV (s), with is RR in CSCD(s).
Proof Assume that the lemma is false. Let last be the smallest value of k for which either
not a substring of T . Let (i
last (CSCD(s); T i;j ) and (i q last (CSCD(s); T i+n;j+n ).
Case 1. IRE last (CSCD(s); T i;j ) is not a substring of T; IRE last (CSCD(s); T i+n;j+n ) is a
substring of T.
I.e, RR in CSCD(s),
since (i q ; j q ) satisfies the conditions of Lemma 8.
Case 2. IRE last (CSCD(s); T i;j ) is a substring of T ; IRE last (CSCD(s); T i+n;j+n ) is not a
substring of T .
Symmetric to Case 1.
Case 3. IRE last (CSCD(s); T i;j ) is not a substring of T ; IRE last (CSCD(s); T i+n;j+n ) is
not a substring of T .
5). This is shown to cause a
contradiction by an argument similar to the one in Lemma 8. 2
Theorem 4 Procedure CircOccurrences(s, v) correctly obtains all occurrences of de(v) in s.
Proof Lemma 6 shows that LinearOccurrences(T, v) computes all RR occurrences of de(v)
in CSCD(s). Lemmas 8 and 9 show that each occurrence of de(v) in s has at least one
corresponding occurrence in T , which is RR in CSCD(s). CircOccurrences computes these
occurrences in T and transforms them so that they represent occurrences in s, removing
duplicates if any. So, the output is a list of all occurrences of de(v) in s. 2
Theorem 5 Procedure CircOccurrences is optimal.
Proof Procedure CircOccurrences(s, v) takes O(|occ(T, v)|) time, where |occ(T, v)| is the
number of occurrences of de(v) in T. Each for loop takes O(|occ(T, v)|) time. Since each
occurrence of de(v) in s corresponds to at most two occurrences in T, |occ(T, v)| ≤ 2|occ(s, v)|,
where |occ(s, v)| is the number of occurrences of de(v) in s.
So, the complexity is O(|occ(s, v)|). |occ(s, v)| is the size of the output, so the algorithm
is optimal. □
6 Computing Conflicts Efficiently
[4] defines the concept of conflicts and explains its importance in the analysis and visualization
of strings. Formally,
(i) A subword conflict between two displayable entities, D 1 and D 2 , in S exists iff D 1 is a
substring of D 2 .
(ii) A prefix-suffix conflict between two displayable entities, D 1 and D 2 , in S exists iff there
exist substrings, S p , Sm, and S s , in S such that S p SmS s occurs in S and S p Sm = D 1 , SmS s =
D 2 . The string, Sm, is known as the intersection of the conflict; the conflict is said to
occur between D 1 and D 2 with respect to Sm .
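To make the two definitions concrete, here is a hedged Python sketch that tests them directly on plain strings (a brute-force illustration only; it is not the scdawg-based algorithm of [5], and the function names are ours):

def subword_conflict(d1, d2):
    # D1 has a subword conflict with D2 iff D1 is a (proper) substring of D2.
    return d1 != d2 and d1 in d2

def prefix_suffix_conflicts(d1, d2, s):
    # Yield every intersection Sm of a prefix-suffix conflict between D1 and D2:
    # Sm is a suffix of D1 and a prefix of D2, and Sp Sm Ss (= D1 followed by
    # the remainder of D2) actually occurs in s.
    for m in range(min(len(d1), len(d2)) - 1, 0, -1):
        sm = d1[-m:]
        if d2.startswith(sm) and (d1 + d2[m:]) in s:
            yield sm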
[4] also identified a number of problems relating to the computation of conflicts in a linear
string, while [5] presented efficient algorithms for most of these problems (some of which
are listed in the next section). These algorithms typically involve sophisticated traversals
or operations on the scdawg for linear strings. Our extension of scdawgs to circular strings
makes it possible to use the same algorithms to solve the corresponding problems for circular
strings with some minor modifications which are outlined below.
There are conceptually two kinds of traversals that the algorithms of [5] perform on an
scdawg corresponding to a linear string:
(i) Traversal of displayable entities of the string. In these traversals, a vertex is traversed
specifically because it represents a displayable entity of the string.
(ii) Incidental traversals. In these traversals, a vertex is not traversed because it is a
displayable entity, but because it performs some other function. For example, this includes
vertices traversed by LinearOccurrences(T; v).
Traversals of type (i) in CSCD(s) are not required to traverse vertices which represent
displayable entities of size greater than or equal to n. This may be achieved simply by
disabling edges in CSCD(s) which leave a vertex representing a displayable entity of size
less than n and are incident on a vertex representing a displayable entity of size greater
than or equal to n. Traversals of type (ii), however, may be required to traverse vertices
representing displayable entities of size greater than or equal to n. This is achieved by
associating a bit for each edge which is set to 1 if it represents an edge from a vertex whose
displayable entity is of size less than n to a vertex whose displayable entity is of size greater
than or equal to n. Otherwise, it is set to 0. Type (i) traversals check the bit, while type
(ii) traversals ignore it.
Finally, all calls to LinearOccurrences are replaced by calls to CircOccurrences.
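The edge-marking scheme above can be sketched as follows (a Python sketch under assumed names; vertices(), out_edges(), target and de_size() are illustrative accessors, not the data structure of [5]):

def mark_edges(cscd, n):
    # Set the bit on every edge that goes from a vertex whose displayable
    # entity has size < n to one whose displayable entity has size >= n.
    for v in cscd.vertices():
        for e in v.out_edges():
            e.flag = 1 if de_size(v) < n and de_size(e.target) >= n else 0

def type1_successors(v):
    # Type (i) traversals check the bit and skip marked edges.
    return [e.target for e in v.out_edges() if e.flag == 0]

def type2_successors(v):
    # Type (ii) traversals ignore the bit.
    return [e.target for e in v.out_edges()]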
7 Other Queries
In this section, we list queries that a system for the visualization and analysis of circular
strings would support. [5] contains algorithms for these same queries for linear strings. In
the previous section, we showed how these algorithms could be modified to support these
queries.
Size Restricted Queries: Experimental data show that random strings contain a large
number of displayable entities whose lengths are small. In most applications, small displayable
entities are uninteresting. Hence, it is useful to list only those displayable entities
whose lengths are greater than some integer, k. Similarly, it is useful to report exactly those
conflicts in which the conflicting displayable entities have length greater than k. This gives
rise to the following problems:
(1) List all occurrences of displayable entities whose length is greater than k.
(2) Compute all prefix suffix conflicts involving displayable entities of length greater than
k.
(3) Compute all subword conflicts involving displayable entities of length greater than k.
An alternative formulation of the problem which also seeks to achieve the goal outlined
above is based on reporting only those conflicts whose size is greater than k. The size of a
conflict is defined below:
The overlap of a conflict is defined as the string common to the conflicting displayable
entities. The overlap of a subword conflict is the subword displayable entity. The overlap of
a prefix-suffix conflict is its intersection. The size of a conflict is the length of the overlap.
This formulation of the problem is particularly relevant when the conflicts are of more
interest than the displayable entities. It also ensures that all conflicting displayable entities
reported have size greater than k. We have the following problems:
(4) Obtain all prefix-suffix conflicts of size greater than some integer k.
(5) Obtain all subword conflicts of size greater than some integer k.
Pattern Restricted Queries: These queries are useful in applications where the fact
that two patterns have a conflict is more important than the number or location of the
conflicts. The following problems arise as a result:
List all pairs of displayable entities which have subword conflicts.
List all triplets of displayable entities (D 1 ,D 2 ,Dm ) such that there is a prefix suffix
conflict between D 1 and D 2 with respect to Dm .
Same as 6, but size restricted as in 5.
Same as 7, but size restricted as in 4.
Statistical Queries: These queries are useful when conclusions are to be drawn from
the data based on statistical facts.
(10) For each pair of displayable entities, D 1 and D 2 , involved in a subword conflict (D 1
is the subword of D 2 ), obtain (number of occurrences of D 1 which occur as
subwords of D 2 )=(number of occurrences of D 1 ).
(11) For each pair of displayable entities, D 1 and D 2 , involved in a prefix-suffix conflict,
(number of occurrences of D 1 which have prefix-suffix conflicts with
/(number of occurrences of D 1 ).
If this ratio is greater than a statistically determined threshold, then the following
could be said with some confidence: presence of D 1 implies presence of D 2 .
8 Applications
Circular strings may be used to represent circular genomes [1] such as G4 and φX174. The
detection and analysis of patterns in genomes helps to provide insights into the evolution,
structure, and function of organisms. [1] analyzes G4 and φX174 by linearizing and then
constructing their scdawg. Our work improves upon [1] by :
(i) analyzing circular strings without risking the "loss" of patterns.
(ii) extending the analysis and visualization techniques of [5] for linear strings to circular
strings.
Circular strings in the form of chain codes are also used to represent closed curves in
computer vision [11]. The objects of Figure 18(a) are represented in chain code as follows:
(1) Arbitrarily choose a pixel through which the curve passes. In the diagram, the starting
pixels for the chain code representation of objects 1 and 2 are marked by arrows.
(2) Traverse the curve in the clockwise direction. At each move from one pixel to the next,
the direction of the move is recorded according to the convention shown in Figure 18(b).
Objects 1 and 2 are represented by 1122102243244666666666 and 666666661122002242242446
respectively. The alphabet is {0, 1, 2, 3, 4, 5, 6, 7}, which is fixed and of constant size (8) and
therefore satisfies the condition of Section 2. We may now use the visualization techniques of
[5] to compare the two objects. For example, our methods would show that objects 1 and 2
share the segments S1 and S2 (Figure 18(c)) corresponding to 0224 and 2446666666661122
respectively. Information on other common segments would also be available. The techniques
of this paper make it possible to detect all patterns irrespective of the starting pixels
chosen for the two objects.
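The effect of linearization by doubling can be illustrated directly on these chain codes (a brute-force Python sketch of the idea only, not the scdawg-based method; common_circular_segments is our name):

def common_circular_segments(code1, code2, min_len=4):
    # Doubling each chain code (T = ss) makes every circular substring appear
    # as an ordinary substring, so shared segments are found regardless of the
    # starting pixels chosen for the two objects.
    t1, t2 = code1 + code1, code2 + code2
    n1, n2 = len(code1), len(code2)
    found = set()
    for length in range(min(n1, n2), min_len - 1, -1):
        for i in range(n1):
            seg = t1[i:i + length]
            if seg in t2:
                found.add(seg)
    return found

Applied to the codes of objects 1 and 2 above, the returned set contains, among others, 0224 and 2446666666661122, i.e., the segments S1 and S2 of Figure 18(c).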
Circular strings may also be used to represent polygons in computer graphics and computational
geometry [3]. Figure 19 shows a polygon which is represented by the following
alternating sequence of lines and angles: bβaαeαeαcβcβeβaαeαeαcβcβbαcαdαcα, where α
denotes a 90 degree angle and β a 270 degree angle.
The techniques of this paper would point out all instances of self similarity in the polygon,
such as aαeαeαcβc. Note, however, that for the methods to work efficiently, the number of
lines and angles that are used to represent the polygons must be small and fixed.
9 Conclusions
In this paper, we have defined the scdawg for circular strings and shown how it can be used
to solve problems in the visualization and analysis of patterns in circular strings. We expect
that it can also be used for other string matching applications involving circular strings.
An important feature of the scdawg for circular strings is that it is easy to implement and
use when corresponding techniques for scdawgs for linear strings are already available.
Figure 18: Representing closed curves by circular strings. ((a) Objects 1 and 2 with their starting positions marked; (b) chain code representation of directions; (c) shared segments S1 and S2.)
Figure 19: Representing polygons by circular strings
Acknowledgement
We are grateful to Professor Gerhard Ritter for pointing out the application of circular
strings to the representation of closed curves.
--R
"Sequence Landscapes,"
"The Matching of Protein Sequences using Color Intrasequence Homology Displays,"
"A method for detecting structure in polygons,"
"String Visualization,"
"Computing Display Conflicts in String Visualization,"
"Complete Inverted Files for Efficient Text Retrieval and Analysis,"
"The Smallest Automaton Recognizing the Subwords of a Text,"
"Efficient on-line construction and correction of position trees,"
"A space-economical suffix tree construction algorithm,"
"Efficient and elegant subword tree construction,"
Digital Image Processing, 2nd Edition.
--TR
Digital image processing (2nd ed.)
Complete inverted files for efficient text retrieval and analysis
The matching of protein sequences using color intrasequence homology displays
Computer graphics: principles and practice (2nd ed.)
Models and techniques for the visualization of labeled discrete objects
A Space-Economical Suffix Tree Construction Algorithm
--CTR
D. P. Mehta , S. Sahni, Computing Display Conflicts in String Visualization, IEEE Transactions on Computers, v.43 n.3, p.350-361, March 1994 | space complexities;data structure;linear strings;computational geometry;visualization;circular string analysis;compact symmetric directed acyclic word graph;time complexity;data structures;data visualisation;object-oriented language;computational complexity;directed graphs |
626842 | Performance Evaluation of Hierarchical Ring-Based Shared Memory Multiprocessors. | Investigates the performance of word-packet, slotted unidirectional ring-based hierarchical direct networks in the context of large-scale shared memory multiprocessors. Slotted unidirectional rings are attractive because their electrical characteristics and simple interfaces allow for fast cycle times and large bandwidths. For large-scale systems, it is necessary to use multiple rings for increased aggregate bandwidth. Hierarchies are attractive because the topology ensures unique paths between nodes, simple node interfaces and simple inter-ring connections. To ensure that a realistic region of the design space is examined, the architecture of the network used in the Hector prototype is adopted as the initial design point. A simulator of that architecture has been developed and validated with measurements from the prototype. The system and workload parameterization reflects conditions expected in the near future. The results of this study shows the importance of system balance on performance. | Introduction
In this paper we study the performance of large-scale, shared-memory multiprocessors that use a
word-packet, ring-based hierarchical network. This class of architectures is of interest for several
reasons. First, by distributing the shared memory among the processor modules, associating caches
with each processor module, and locating the processor modules at the nodes of the network (that is,
using a direct network), communication locality is exploited to reduce network traffic and memory
latency.
Second, bit-parallel, unidirectional, slotted rings have been found to be effective at maximizing
link bandwidth in direct networks [26]. The advantages of unidirectional rings include: 1) with
their point-to-point connections, they can run at high clock speeds, 2) it is easy to make full use of
their bandwidth, they provide a natural broadcast mechanism, and 4) they allow easy addition
of extra nodes.
Third, a single slotted ring does not scale well, so multiple rings need to be interconnected. A
hierarchical ring interconnection is attractive since it allows simple node interfaces and inter-ring
connections. A node need only interface with its two ring neighbors. All inter-ring connections
(regardless of system size) can be implemented using a two-by-two crossbar switch. Moreover, a
hierarchy provides a unique path between any two nodes, which can be useful in the implementation
of some cache consistency protocols [13]. A disadvantage of a hierarchy is the limited bandwidth
near the root. However, this disadvantage is mitigated when there is sufficient communication
locality.
Our interest lies in the performance of this type of network within the context of a shared memory
multiprocessor, not just in isolation. Consequently, in evaluating overall system performance,
the effects of memory cycle time, memory utilization, and aspects of the processor design that effect
the request rate need to be considered. Specific issues of interest are the effectiveness of techniques
to hide memory latency such as multiple outstanding transactions and non-blocking reads, the use
of memory banks, ring topology, communication locality, hot spots, and the relative speeds of the
processors, memories, and rings. The transactions we consider are memory reads and writes of
single words and block transfers.
Our approach in evaluating these issues is to accurately simulate an existing system using a
detailed, packet-level simulator that can be validated against the existing system. For this purpose,
we use the Hector prototype which is a multiprocessor of this class of architecture. Since we are
interested in the performance of systems with on the order of 1024 processors, and since it is not
clear how to extrapolate results from small systems, we have found it necessary to use synthetic
workloads. Simulating instruction execution (as with Tango [12]) for a large system would take
prohibitively long, and using address traces from other systems is highly questionable.
The results of our study show the importance of system balance on performance. Large-scale
systems inherently have large communication delays for distant accesses, so processor efficiency will
be low, unless the processors can operate with multiple outstanding transactions using techniques
such as prefetching, asynchronous writes and multiple hardware contexts. However with multiple
outstanding transactions and only one memory bank per processing module, memory quickly
saturates. Memory saturation can be alleviated by having multiple memory banks per processing
module, but this shifts the bottleneck to the ring subsystem. While the topology of the ring hierarchy
affects performance - we show that topologies with a similar branching factor at all levels,
except, possibly, for slightly smaller rings at the root of the hierarchy tend to perform best - the
ring subsystem will inherently limit the throughput of the system. Hence increasing the number of
outstanding transactions per processor beyond a certain point only has a limiting effect on perfor-
mance, since it causes some of the rings to become congested. An adaptive maximum number of
outstanding transactions appears necessary to adjust for the appropriate tradeoff between concurrency
and contention as the communication locality changes. We show the relationships between
processor, ring and memory speeds, and their effects on performance.
In the next section we describe in more detail the systems we are examining, the simulation
methodology, the system parameters, and the workload parameters. The experimental results are
reported in Section three, and we conclude in Section four, together with a discussion of related
work.
2 System and Workload Description
We have chosen the Hector architecture, developed at the University of Toronto, as the initial
design point for our study. because it was designed specifically for ring-base hierarchies and was
implemented successfully. We choose to start from a design that has actually been implemented for
two reasons. First, basing the study on an implementation helps to ensure that the performance
of all system modules and their interactions are correctly captured; a more abstract system model
might miss some of these. Second, by restricting the study to designs related to a carefully thought
out implementation, we are focusing our attention on a realistic section of the design space and one
we believe to be relatively promising with respect to scalability. We briefly describe Hector below;
a more detailed presentation is given in [29, 27]. We then comment on the simulator, the system
parameters, and the workload parameters.
2.1 The Hector Architecture
Hector is a shared-memory multiprocessor consisting of a set of stations that are interconnected
by a hierarchy of unidirectional rings. Each station contains a collection of processor modules
containing a processor, a local cache, and part of the main memory. A station is connected to
the lowest level (or local) ring. Hector provides a flat, global (physical) address space, and each
station is assigned a unique contiguous portion of that address space, determined by its location.
All processors can transparently access all memory locations in the system. Information transfer
takes place by means of fixed-size packets transferred in a bit-parallel format along a unique path
through the ring hierarchy.
Two types of ring interfaces control packet transfers, both of which are simple to realize. Station
controllers control on-station traffic as well as local ring traffic at the station. They gate incoming
packets from the ring onto the station, outgoing packets from the station onto the ring, and
continuing packets from the previous ring interface on to the next ring interface. Packets on the
ring have priority over packets from the station to minimize the time packets are buffered at the
station controllers. Inter-ring interfaces control traffic between two rings. Logically, an inter-ring
interface corresponds to a 2 \Theta 2 crossbar switch with FIFO buffers. FIFO's are needed in order to
be able to store packets if collisions occur, which can happen when, in a given cycle, input packets
from both rings are to be routed to the same output. In order to minimize the remaining delay
of packets that are descending the hierarchy, packets from the higher-level ring have priority over
packets from the lower-level ring.
Figure 1 depicts a Hector configuration with two levels in the ring hierarchy, where a global
ring connects several local rings that in turn connect multiple stations. In this example, which
corresponds to a prototype, a bus connects several processing modules to the local ring interface.
All communication in Hector occurs synchronously. During a given clock cycle, a packet can
be transferred between two adjacent ring interfaces, from a station controller onto the station, or
from the station onto the ring. A request for access to a non-local memory location initiates a
Figure 1: Structure of Hector with two levels of ring hierarchy.
packet transfer across the network. Following the terminology of the Scalable Coherent Interface
protocol [22], each transaction involves a request and response subtransaction. A subtransaction
typically entails the transmission of a single packet, but in the event of collisions and timeouts,
several packets may be used.
For example, to access a remote memory location, a request packet containing the address of the
target memory is formed in a processing module and transferred to the ring via the local station
controller. The packet then travels around the ring visiting one segment in each ring cycle. When
the packet reaches the first inter-ring interface on its path, it is either switched onto a higher level
ring, or passed on to the next station controller on the same ring, depending on the destination
address. The packet first travels up the hierarchy to the level needed to reach the target station,
and then descends the hierarchy to the target station where it is removed from the local ring.
The target station sends a response packet back to the requesting PM, along a similar path. In
the case of a read transaction, the response packet contains the requested data. In the case of a
write transaction, the request packet contains data in addition to the addressing information, and
the response packet contains an acknowledgment. For writes, the response packet is sent back to
the requesting station as soon as the write is queued at the target memory, so the latency of the
actual memory operation is hidden.
It is possible that a request packet cannot be successfully delivered to the target memory. This
can happen, for example, when there is congestion at the target memory and it cannot accept
a further request when the packet arrives. In this case, the target station generates a negative
acknowledgement packet, which is sent back to the requesting station so that it can retry the
operation at a later time by retransmitting the request packet.
2.2 Simulator
We constructed a simulator that reflects the behavior of the packets on a cycle-by-cycle basis. The
simulator was written using the smpl [19] simulation library. The batch means method of output
analysis was used with the first batch discarded to account for initialization bias. In the batch
means method a single long run is divided into subruns called batches. A separate sample mean
is computed for each batch. These batch means are then used to compute the grand mean and
confidence interval [19]. The batch termination criterion was that each processor had to complete
at least some minimum number of requests. Early experiments showed that using a total number
of requests completed over the entire system as the batch termination criterion can substantially
underestimate mean response times since requests with long response times are underrepresented.
Using several workloads we validated the simulator from measurements collected on the Hector
prototype. The base simulator was then extended to model features not present in the prototype,
such as an arbitrary number of ring levels and ring cycle times different than the processor cycle
time.
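For concreteness, the batch means computation can be sketched as follows (a Python sketch under the usual assumptions of equal-size batches and a Student-t interval; this is not the smpl code used in the simulator, and the t quantile must be supplied by the caller):

import math, statistics

def batch_means(per_batch_means, t_value):
    # Discard the first batch to reduce initialization bias, then form the
    # grand mean and a confidence-interval halfwidth from the remaining means.
    kept = per_batch_means[1:]
    k = len(kept)
    grand_mean = statistics.mean(kept)
    std_err = statistics.stdev(kept) / math.sqrt(k)
    halfwidth = t_value * std_err   # t_value: Student-t quantile, k-1 degrees of freedom
    return grand_mean, halfwidth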
2.3 System Parameters
A hierarchical ring-based system can be characterized by the following parameters: system size (in
processors), the relative processor, memory, and ring cycle times, the maximum number and type
of transactions that a processor may have outstanding, whether a processors blocks until a read
completes, ring topology, and the number of banks in each memory module.
Transaction latency refers to the entire time from when the request packet of a transaction is
issued by a processor until the transaction completes (that is, until the response packet returns to
the processor 1 ). There is a base, or contention-free, fraction of the transaction latency which is the
number of cycles required to traverse the network twice and, for reads, the time to actually execute
the memory operation in the absence of contention. The remaining fraction of the transaction
latency is the number of additional cycles due to contention.
Exposed transaction latency is the fraction of the transaction latency during which the processor
is blocked waiting for the transaction to complete. Thus, a memory cycle time of
may imply a transaction latency of 100 processor cycles in a large system when the target memory
is far from the source processor. Processor efficiency is the fraction of time a processor spends
doing useful work averaged over all processors. Useful work includes the delay for references that
are cache hits (we assume each processor has a local cache), but does not include the additional
delay for cache misses.
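In symbols (our formalization, not notation taken from the paper), if processor p is busy with useful work for W_p of its C_p simulated cycles, the processor efficiency reported below is

E = \frac{1}{N} \sum_{p=1}^{N} \frac{W_p}{C_p},

where N is the number of processors; exposed transaction latency contributes to C_p but not to W_p.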
If the processor blocks until each read or write completes, then the large transaction latency
relative to processor cycle time implies a low processor efficiency. A number of techniques have been
proposed to increase processor utilization over this base case, including relaxed memory consistency
models, prefetching, and multiple hardware contexts [16]. Instead of assuming one technique versus
another, we characterize their effects by varying the maximum number of outstanding transactions
a processor may have before blocking and by considering whether or not reads block. Thus, a
processor does not block until either the number of its outstanding transactions has exceeded the
maximum or, if reads block, a read cache miss occurs. The goal of both multiple outstanding transactions
and non-blocking reads is to hide exposed transaction latency. One of the most effective
methods of allowing non-blocking reads and multiple outstanding transactions is to use multiple
hardware contexts. When we consider the case of multiple outstanding transactions, we intentionally
do not take into account any cycles lost due to hardware context switches. This assumption is
based on recent work that suggests such context switches can be scheduled so as to avoid any lost
cycles [7].
To prevent network saturation we consider the implications of alternative ring topologies. The
system topology can be specified by the branching factor at each level of the hierarchy starting at
1 For writes the response packet returns upon queueing of the request at the target memory so it is possible for
the target memory to be still processing a write request after the transaction completes in the above sense.
the number of stations on each local (or level 1) ring and with the last branching factor being the
number of rings directly attached to the root ring. Thus, a (2; 3; 4) topology refers to a topology
with 2 stations per level 1 ring, 3 level 1 rings per level 2 ring, and 4 level 2 rings connected by the
root ring. Throughout the paper we assume one processor per station.
To prevent memory saturation we consider the use of multiple memory banks per memory
module. We assume that a transaction for a particular target memory is equally likely to access
any of the banks at that memory.
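A small Python sketch of how a topology tuple is interpreted (illustrative only; the function names are ours): it computes the system size implied by the branching factors and the lowest ring level shared by two stations, which is the level a request packet must climb to.

from math import prod

def num_stations(L):
    # L = (L1, ..., Ln): stations per level-1 ring, level-1 rings per level-2 ring, ...
    return prod(L)

def common_level(L, a, b):
    # Lowest level whose ring is an ancestor of both stations a and b
    # (stations are numbered left to right, starting at 0).
    group = 1
    for level, branch in enumerate(L, start=1):
        group *= branch
        if a // group == b // group:
            return level
    return len(L)  # defensive; the loop always returns at the root level

For example, num_stations((16, 4, 4, 2, 2)) is 1024, and two stations on the same 16-station local ring have common_level 1.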
2.4 Workload Parameters
A detailed system simulator is needed in order to ensure that the important features of the architecture
being studied are captured. Simulating a large system is also important since extrapolating
from the results of a small system is questionable. The key to satisfying both of these concerns is
to use a synthetic workload model. Simulating instruction execution for a large system (as with
Tango) would take prohibitively long, and using address traces from other, smaller systems is highly
questionable. In contrast, with a synthetic workload model, the number of transactions that need
to be issued by each processor in order to obtain system performance measures is dramatically
smaller than the number generated when simulating the execution of actual application programs.
Moreover, the use of a synthetic workload sometimes allows clearer understanding of the significance
of different workload parameters. Of course, the concern that has to be addressed is the
realism of the workload model.
Our approach is to characterize the workload by the mean time between cache misses given a
non-blocked processor (or equivalently by the request rate which is the inverse of the mean time), the
probability that the cache miss is a read, and the communication locality. For the read cache miss
probability we assume 0.7 throughout the study which is consistent with empirical statistics [15].
For the request rate we consider a rate of 0.01 to 0.05 cache misses per processor cycle (equiv-
alently, 20 to 100 cycles between cache misses). This range choice is supported by a recent study
of a number of application programs that observed a mean number of processor cycles of between
6 and 137 for shared data reads [7]. We assume that code and private data references (such as to
a stack) always hit in the local cache. Factoring in shared data writes and shared data cache hits,
yields a more realistic mean number of processor cycles between cache misses of at least 20.
We chose for our workload characterization to avoid accounting (at least explicitly) for cache
coherence traffic. Cache coherence traffic could be included within a low-level workload model
such as ours by providing a translation from a high-level workload model to a low-level workload
model. We did a preliminary study of the effect of such a translation for the case of software cache
coherence using the approach developed by Adve, et.al. [1]. The resulting ranges for the low-level
workload parameters were consistent with the ranges we consider 2 .
Most of the results presented in this paper assume one word transfers. Transfers larger than
a word can arise from several sources: page migrations and replications, cache line transfers, and
prefetching. With regard to page migrations and replications, parameter values greatly effect the
results. Since there are no clear value ranges to use, we chose to ignore them. For cache line
transfers and prefetching of adjacent words on a cache miss we consider the use of page-mode
DRAM access to transfer blocks of words. Thus, for example, upon a memory access, the first
word is provided by the memory after, say, processor cycles, and successive words might be
provided at intervals of between 5 to 10 processor cycles.
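Under this page-mode model the service time for a block of k words can be written (our notation; the paper leaves the first-word latency unspecified) as

t_{block}(k) = t_{first} + (k - 1)\,\Delta,

with \Delta on the order of 5 to 10 processor cycles per additional word.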
2 The low-level software cache coherence model includes traffic due to posts and invalidates of cache lines. Assuming
single word cache lines, it is possible to include this traffic within the read and write parameters.
2.5 Communication Locality and Hot Spots
In a direct network, communication locality and hot spots can greatly affect performance. Our
clusters of locality model attempts to model communication locality. Intuitively, this model, for
each processor, logically organizes all processors into clusters around that processor independent
of the network topology. The first cluster typically only contains the processor itself; the second
cluster contains "near-by" processors; additional clusters contain processors further away. In our
case, we view the processors as the leaves of the tree defined by the ring hierarchy and number
them left to right. The clusters are then defined in terms of distance between two processors, which
is the absolute difference (modulo the size of the system) between the two processor numbers. For
example, in the two cluster case, cluster 0 may be the source processor module itself and cluster
1 all of the remaining processor modules. In the three cluster case, cluster 0 may be the source
processor module itself, cluster 1 the source processor module's set of ``closest neighbors'', and
cluster 2 the remaining processors modules. Defining clusters in this manner is reasonable since it
is likely that applications will be programmed in a manner independent of the particular branching
factors present in a certain ring topology.
The probability that the target memory of a transaction is in a processor module of a particular
cluster depends on the cluster. Given that the target memory is in a particular cluster, the
probability of a processor module within that cluster containing the target memory is uniformly
distributed.
The communication locality model specifies the number of clusters, the number in each cluster,
and the probability of a transaction's target being in each cluster. Thus, if there are 1024 processors
in the system, then specifying cluster sizes (1; 4; 1019) and cluster probabilities (0.9; 0.8; 1.0) means that there are three clusters:
cluster 1 has size 1 and probability 0.9 of being the target, cluster 2 has the 4 closest processing
modules and has probability 0.8 of being the target given that the target is not in cluster 1, and
cluster 3 has the remaining 1019 processing modules and has probability 1.0 of containing the target
given that the target is not in cluster 1 or cluster 2.
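A sketch of how a synthetic reference could be generated under this model (Python; random is used for illustration only, the simulator itself is built on smpl, and the rank-to-offset mapping is one reasonable reading of "closest" under the distance measure above):

import random

def pick_target(source, n, sizes, probs):
    # sizes = (C1, ..., Cm) with C1 = 1; probs = (P1, ..., Pm) with Pm = 1.0.
    # Cluster i is selected with probability Pi given that clusters 1..i-1
    # were rejected; within a cluster the target is uniformly distributed.
    start = 0                      # closeness rank of the cluster's first member
    for size, p in zip(sizes, probs):
        if random.random() < p:
            rank = start + random.randrange(size)
            # Rank 0 is the source itself; rank r > 0 alternates +1, -1, +2, -2, ...
            offset = (rank + 1) // 2 if rank % 2 else -(rank // 2)
            return (source + offset) % n
        start += size
    return source                  # not reached when Pm = 1.0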
The clusters of locality model has been adopted because similar but simpler models have been
shown to be effective in studies of direct networks [25, 2, 3] and because the model exhibits memory
access patterns similar to scientific applications that have been examined on Hector. This is especially
true for applications that have been optimized for NUMA systems and migrate and replicate
data objects to improve locality (including when the migration and replication is done by the operating
system transparent to the application). Further study of the extent to which real shared
memory programs conform to this model (allowing for application restructuring and hardware and
software dynamic page (or cache block) placement) is of interest, but outside of the scope of this
paper.
A major type of non-uniform traffic is a hot spot; that is, a single memory that has an unusually
high probability of being accessed by all or many of the processors. Early papers [23, 30] identified
hot spot memory modules as a major cause of performance degradation in shared-memory interconnection
networks. The degradation is exacerbated by "tree saturation" [23] which even obstructs
memory traffic to non-hot spots locations. Significant progress has been made in reducing hot spot
traffic, especially hot spot traffic due to synchronization. Techniques include separate synchronization
networks (possibly with combining) [18] and hot-spot-free software algorithms that use
distributed data structures [20]. Furthermore, flow control mechanisms may be useful, especially
when hot senders (processors with usually high request rates or a high favoritism to the hot spot)
are a factor. Evaluating alternative techniques for reducing hot spot traffic is outside the scope of
this paper. Instead, we investigate the significance of the effects due to hot spots on performance
for this type of architecture.
Parameter Value Description
N 1024 number of processors (memories) in the system
B number of memory banks per memory module
L i branching factor at each of the n levels
RXMY R1M10 to R4M60 ratio of ring and memory cycles to processor cycles
T 1-8 maximum number of outstanding transactions
0.01-0.05 request rate
R 0.7 probability that a cache miss is a read
P to (0.9; 1.0) cluster probability for each of the m clusters
to (1; 1023) cluster size for each of the m clusters
F 0.0-0.025 favorite memory probability
Table 1: System and workload parameters used in the simulations and their value ranges.
In this section we present the results of our experiments. The primary performance metric used
is processor efficiency since it reflects overall system performance. To further understand the
differences identified by processor efficiency, a number of secondary metrics are examined. These
include mean transaction latency and mean remote transaction latency (for accesses directed to
non-local memory). Mean local transaction latency is relatively constant, because network traversal
and contention is avoided and because local transactions have priority over remote transactions.
We also consider memory and ring utilizations. For ring utilization we report only the average
utilization of the rings at the level with the largest average utilization since that ring level has the
dominant performance effect.
The ranges of the input parameter values are shown in Table 1. For all systems under con-
sideration, the system size is 1024 processors, the probability that a transaction is a read is 0.7,
and the size of cluster 1 is 1. The system parameters studied are the system topology,
the maximum number of outstanding transactions per processor, the use of blocking versus non-blocking
reads, the number of memory banks per memory module, and the relative speed of the
processor, ring, and memory. We refer to the latter as the cycle ratio and specify it by RXMY
which means that each ring cycle is X times as slow as a processor cycle and the memory requires
Y processor cycles to service one memory access. The workload parameters studied are the request
rate, the communication locality, and the presence of hot spots. All simulation results reported in
the section have confidence interval halfwidths of 1% or less at a 90% confidence level, except near
saturation where the confidence interval halfwidths sometimes increase to a few percent.
Section 4.1 describes the base system and its performance under different request rates and
degrees of communication locality. In the following sections we examine the issues mentioned in
Section 1. In Section 4.2 we consider the effects of increasing the maximum number of outstanding
transactions and whether reads block. Section 4.3 presents results on the use of multiple memory
banks. The effect of variations in ring topology is considered in Section 4.4. Section 4.5 considers
hot spot traffic. Finally, Section 4.6 considers changes in the relative speeds of the processors,
memories, and rings.
3.1 Base System Performance
For the base system, some variable parameters are fixed as follows: a system topology of
(16; 4; 4; 2; 2), one memory bank per memory module (B = 1), at most one outstanding transaction
per processor (T = 1, so that all transactions, including reads, block), and a cycle ratio of R2M30
(that is, the ring has a cycle time twice as long as that of the processor and the memory requires
30 processor cycles per access).
The cycle ratio chosen is based on near-term expected timings. In particular, high-performance
processors are now obtaining cycles times on the order of 5ns (such as in the DEC Alpha [11]).
Although the IEEE SCI standard specifies a ring cycle time of 2ns, we assume a ring cycle time
of 10ns. We do so because we define ring cycle time as the time required for a packet to move
from the input of one station to the input of the next station. Such a transfer need not occur in a
single ring cycle. For example, a recent performance study of the SCI ring [26] assumes (with no
contention) four ring cycles for a packet to traverse a station and the link to the next station. The
assumption that a memory cycle takes cycles follows values used in recent studies [10].
Figure 2(a) shows how efficiency varies with the request rate for the base system as the cluster
probabilities are varied. As the request rate increases, efficiency drops sharply to less than 40% at a
request rate of 0.05, even for a cluster 1 probability of 0.95 (where 95% of all accesses go to local memory). When there
is a high degree of locality (that is, a cluster 1 probability of 90% or higher), memory utilization
and maximum ring utilization are far from saturation. Hence, the non-contention component of
the transaction latency is the primary cause of the decline in efficiency. In other words, efficiency
is low because all of the transaction latency is exposed. For cluster 1 probabilities less than 90%,
ring contention (for the level 4 rings, one level below the root ring) becomes substantial enough to
effect the latency and thus further decrease efficiency.
Next we considered the effect of cluster size for the base system for several different cluster 1
probabilities, including 0.8. Figure 2(b) plots efficiency as a function of the request rate for the
case of a cluster 1 probability of 0.9. Increasing the cluster 2 size from 4 to 16 or 32 (thus, spreading non-local accesses
over a wider range) has minimal effect on efficiency for any of the cluster 1 probabilities considered.
One reason for this invariance in the base case is that the level 1 ring contains 16 stations:
a transaction to a target memory on the same level 1 ring as the source processor imposes the
same load regardless of the logical distance between the target memory's processor and the source
processor. Consequently, the primary cause of increased load by increasing the cluster 2 size from
4 to 16 is due to increased traffic to adjacent level 1 rings.
On the other hand, increasing the cluster 2 size to 1023 (that is, causing all non-local transactions
to be uniformly distributed across the machine), has a major effect on efficiency due to ring
saturation (primarily the level 3 rings). Even for high locality (that is, request rate of
causes a maximum ring utilization of over 90%. For 0:80, the maximum ring utilization
is 100% even at a request rate of 0.01.
We conclude that for the base system, processor efficiency can be quite low at high request rates.
At cluster 1 probabilities of 90% or above and cluster 2 sizes on the order of 4 to 32 processors, the
low efficiency is due to the long contention-free latency being exposed, which stems from the limit
of one outstanding transaction per processor. At cluster 1 probabilities below 90% or cluster 2 size
on the order of the system size for any considered cluster 1 probability, ring contention becomes a
factor in increasing transaction latency. We examine below techniques to address both causes of
low efficiency.
Figure 2: Base system experiments, plotting processor efficiency against the request rate. (a) Varying the cluster probability; (b) varying the cluster-2 size.
3.2 Maximum Outstanding Transactions and Non-Blocking Reads
We next examine two techniques for increasing efficiency in the presence of long exposed latency:
increasing the maximum number of outstanding transactions and allowing non-blocking reads.
Our assumptions are the same as the base system with cluster sizes cluster
probabilities 1:0). The effects of allowing the maximum number of outstanding
transactions, T , to be 1, 2, 4, 6, and 8 with and without reads blocking are described below.
The first experiment (not shown) varied the request rate for different T values assuming blocking
reads. This might reflect, for example, a single context that is using a relaxed memory consistency
model. Increasing T from 1 to 2 causes a small increase in efficiency (1% absolute change) and
further increases in T have no significant effect. Given that 70% of all transactions are reads, it
is not surprising that increasing T has limited effect. The remainder of this study, therefore, only
considers non-blocking reads.
Figure 3(a) plots efficiency versus the request rate for different T values assuming non-blocking
reads. Increasing T is effective at substantially improving efficiency (to approximately 75% for T=4
and a request rate of 0.05). However, this increases traffic and hence contention, which in turn,
increases latency, thus reducing the effectiveness of increasing T to reduce exposed latency. For the
cases of T=6 and T=8, the simulations did not complete under high request rates because of an
excessive number of packets in the system arising from severe contention. Figure 3(b) shows how
mean transaction latency (in processor cycles) increases for different T values as the request rate
increases.
Figure 4 shows that excessive memory utilization is the primary cause of the contention. The
increase in ring utilizations at higher request rates is an indirect result of memory contention.
Increased packet traffic arises from an increase in packet retries, which in turn is due to requests
Figure 3: Varying T. Non-blocking reads. ((a) Efficiency and (b) mean transaction latency in processor cycles, versus request rate.)
being turned back at saturated memories. Were it not for the retries, a doubling of the request rate
from should only increase ring traffic by 10%. We conclude that increasing the
maximum number of outstanding transactions, given non-blocking reads, is effective at increasing
efficiency for the communication locality considered, but that memory saturation limits further
efficiency improvements.
3.3 Multiple Memory Banks
We next consider dividing each memory module into multiple memory banks as a means of decreasing
memory utilization. The system considered initially is the same as that considered in the
previous subsection, but varying the number
of memory banks from 8. The first series of experiments varies the number of
memory banks for the case of fixed the number of outstanding
4, and varied the communication locality by considering
for the case, and by considering for the
For the case of shown), going from 1 to 2 memory banks has some effect on processor
efficiency (the efficiency at request rate 0.05 increases from 56% to 64%). Adding more memory
banks has little effect on efficiency, since the exposed part of the contention-free transaction latency
is the limiting factor on efficiency, not memory contention.
With T = 4, on the other hand, there is enough concurrency so that the contention-free transaction
latency is hidden and memory contention is the limiting factor for the ranges considered.
This is shown in Figure 5(a), which displays efficiency versus request rate for different numbers of
memory banks. Efficiency is now increased, at a request rate of 0.05, to over 90% for 4 memory
banks (T = 4). The cause for this improvement is the reduction in memory utilization, as shown
Figure 4: Varying T. Non-blocking reads. ((a) Memory utilization and (b) maximum ring utilization, versus request rate.)
in Figure 5(b).
The results for are similar to those for that the efficiency is somewhat higher
(over 95% at request rate memory and maximum ring utilizations change by
a few percent.
Figure
5(c) plots the maximum ring utilization as a function of the request rate. The ring
utilization is still low enough so that it has little effect on transaction latency, but at the higher
request rates it is approaching levels where it will have effects. With only 1 or 2 memory banks,
ring utilization is constrained at higher request rates by memory contention. A higher number of
memory banks increases the processor efficiency by removing the memory bottleneck, which, in
turn, increases the offered load to the network and thus the ring utilization. However, the increase
in ring utilization due to packet retries as seen in Figure 4(b) is no longer present.
The increase in ring utilization shown in Figure 5(c) identifies a fundamental tradeoff between
maintaining a small value for T and improving processor efficiency. Increasing T has the potential
for increasing contention by increasing memory and ring utilization. Consequently, increasing T
in order to reduce the exposed latency can, after a point, provide minimal improvement. Thus,
the tradeoff is to choose a T value that is as small as possible while still achieving almost all of
the possible improvement in processor efficiency. The above experiments indicate that for
P = (0.95; 0.8; 1.0), a value of T = 4 would be a good balance in this tradeoff.
To further investigate this tradeoff point, its sensitivity to the degree of communication locality
was examined. The results (not shown) indicate that the tradeoff point is highly sensitive. For
request rates of 0.04 (0.02) or higher cause almost 100%
maximum ring utilization. For in the request rates of 0.02
(0.01) or higher cause 100% maximum ring utilization. In all cases, the level 4 rings always have
Figure 5: Varying B. ((a) Efficiency, (b) memory utilization, and (c) maximum ring utilization, versus request rate.)
the highest utilization. Clearly, for these degrees of communication locality, a lower T is needed.
Given the sensitivity of T relative to communication locality, an adaptive T level algorithm
is clearly desirable. One decentralized approach is for each processor to decrease T when it observes
latency of its transactions higher than expected, an indication of high network or memory
contention. Likewise, each processor can increase its T level when the actual latency of its transactions
is as low as the contention-free latency would be. An alternative approach, using a centralized
scheme that allows the T level to be adjusted system-wide, avoids potential fairness problems with
the decentralized approach while introducing coordination overhead.
The above discussion shows that memory saturation can be a major limiting factor to increased
processor efficiency and that the use of multiple memory banks is an effective technique to remove
this bottleneck. The increase in the offered load due to multiple outstanding transactions and the
removal of the memory saturation causes ring utilization to increase and to saturate for some reasonable
traffic patterns. The appropriate number of outstanding transactions is a tradeoff between
concurrency and contention, which is highly sensitive to the degree of communication locality. An
adaptive maximum number of outstanding transactions may be useful for adjusting to this tradeoff
as the communication locality changes.
3.4 Ring Topology
Ring utilization is also affected by the topology of the ring hierarchy. To evaluate the significance
of the ring topology, we first analyzed the maximum transaction latency in a ring hierarchy when
each access goes to the most distant memory; that is, when the request and response packets must
traverse all of the levels in going from the source to the target memory and back. If both request and
response subtransactions are considered together, then each traversed ring is completely traversed
exactly once. A complete traversal of a ring with a branching factor L i takes L i +1 ring cycles if the
ring is not the root ring (one cycle for each L i child inter-ring interface or station plus one for the
parent inter-ring interface). A complete traversal of the root ring with a branching factor L i takes
L i ring cycles. Thus, the contention-free maximum transaction latency, ML, for a configuration (L 1 , ..., L n ) is obtained by summing these per-ring traversal costs over the request and response subtransactions and adding the memory access time.
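The formula itself is lost in the extraction; a reconstruction that is consistent with the surrounding derivation and with the single-level value of 1054 cycles quoted in Figure 6 (1024 + mem, with mem = 30) is

ML(L_1, \dots, L_n) = 2 \sum_{i=1}^{n-1} (L_i + 1) + L_n + mem,

where the factor of two reflects that the source side and the target side each contribute one fully traversed ring at every non-root level, while the root ring is traversed only once, and mem is the memory access time in ring cycles. Under this reading the optimal five-level topology (4; 4; 4; 4; 4) yields 2(4)(5) + 4 + 30 = 74 ring cycles.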
Figure 6: Contention-free maximum transaction latency ML, in ring cycles, for the optimal topology at each number of ring levels (1 level is 1054). mem = 30.
Figure 7: Comparison of normalized contention-free maximum transaction latency with normalized mean remote transaction latency, plotted against the number of levels; mem is the memory cycle time in ring cycles.
We computed ML for a number of different topologies. For all the topologies that have the
same number of levels we chose the one with the lowest ML value and plotted them in Figure 6.
Although a memory latency of 30 is assumed in order to be consistent with the R2M30 cycle
ratios used in the other simulations, mem simply causes a constant shift. The optimum topology
has 5 levels and is 4). For a given number of levels, topologies with the best ML
values minimize large branching factors except possibly for a larger branching factor at the root
since the root ring is only traversed once. The relative performance of topologies with different
numbers of levels is determined by the balancing of two factors: 1) larger branching factors have
the disadvantage of requiring the packet to traverse extra links to reach the next level, and 2) larger
number of levels has the disadvantage of increasing the total number of inter-ring interfaces in the
system.
Although the above analysis shows that balanced topologies are best, the analysis ignores more
realistic traffic patterns and contention. The above analysis represents the worst case, in which all
accesses are to memories for which the contention-free transaction latency is maximum. This traffic
pattern will cause ring saturation unless the request rate is extremely low and is unrealistic. For the
next set of experiments we consider traffic patterns that are more realistic and that our previous
experiments indicate will probably cause the network to be heavily utilized, but not saturated. The
preliminary comparison considered the topologies with the best ML value for each level and used
Table
2: Ring topologies considered for different communication localities. The index denotes the
topology in Figure 8.
the traffic pattern 4. For each request rate,
we normalized the results against the worst mean remote transaction latency for that topology and
plotted them in Figure 7.
The qualitative behavior of the alternative topologies is similar to that of the normalized ML
values in that topologies with 4 to 6 levels tend to be superior. As Figure 7 shows, under high
traffic loads, having too few levels can hurt performance substantially and, under low traffic loads,
having too many levels also hurts performance. For request rate 0.05, a large number of levels
does not degrade performance. This invariance is due to root ring contention. Some of the best
topologies by ML value have a large branching factor at the root. For these topologies the root
ring contention masks any other performance changes due to a large number of levels.
Because topologies with 4 to 6 levels appear to perform better, we then examined in more detail
the performance of alternative topologies with that many levels under three traffic patterns. The
three traffic patterns we considered are:
1.
2.
3.
Besides the topologies that the ML analysis indicated were optimal we considered several others
that had somewhat larger branching factors at the lower ring levels. The rationale is that the
previous experiments indicated that the higher ring levels had the highest utilization. Consequently,
reducing the branching factor at the higher ring levels might be advantageous.
Table
2 lists the topologies considered. The index associated with each topology in Table 2
denotes that topology in Figure 8. The topologies with indices 1-3 are the best for their number of
levels by ML values. Figure 8 plots the maximum ring utilization for the investigated topologies
for the three traffic patterns. The best topologies, as measured by ML, do not perform especially
well. The baseline topology (16; 4; 4; 2; 2) that we have been using in earlier experiments performs
well, but the relative performance depends on the traffic pattern. Any of the topologies numbered
seem to be an acceptable choice with none of them clearly superior.
M a x
R
U
l
Topologies
(a)2060100
M a
x
R
U
l
Topologies
(b)2060100
M a x
R
U
l
Topologies
(c)
Figure
8: Comparison of the maximum ring utilizations of alternative topologies for three traffic
patterns. The plots are for the topologies patterns listed in Table 2.
We conclude that the best contention-free topology, as measured by maximum transaction
latency, and the one least sensitive to contention has between 4 and 6 levels. The best of these
topologies tend to have uniform branching factors at the different levels except that somewhat larger
branching factors at the lower levels also do well. The relative performance of the best topologies
depends on the traffic patterns and also on the performance measure (mean remote transaction
latency or maximum ring utilization).
3.5 Hot Spot Traffic
The results presented so far are based on a communication locality model where the traffic within
a cluster is uniformly distributed. In this subsection we examine the behavior of the system in
the presence of a hot spot. To perform the analysis, we assume that a transaction from any
processor has probability F of addressing the hot spot memory. The remaining transactions are
distributed according to the standard communication locality model. Moreover, we assume
(0:95; 0:8; 1:0); In all of the experiments reported
so far, the simulated system has had a memory queue length of 9. When there is no hot spot traffic
and four memory banks, this queue length is sufficient so that queue overflow is extremely unlikely
at traffic loads which do not saturate the memory itself. It is possible that with with a hot spot a
longer memory queue is needed. Consequently, in this section we consider queue lengths of 9 and
36.
Figure
9(a) evaluates the effect of the hot spot on overall system performance by plotting
mean remote transaction latency (for all remote transactions in the system) versus request rate for
different favorite memory probabilities for a memory queue of length 9. The request rate needed
to cause latency to significantly increase depends greatly on the favorite memory probability. It is
clear that favority memory probabilities of 1% or more are not supportable at reasonable request
rates with respect to overall system performance. Figure 9(b) uses a queue length of 36. The
change in queue length has some effect, but the qualitative behavior is the same.
The degradation in performance in Figure 9 is large once a critical request rate is reached,
because of the negative feedback effect of packet retries. The favoritism to the hot spot memory
imposes additional load on three resources: the hot spot memory, the queue for the hot spot memory,
and the network near the hot spot memory. As contention for the hot-spot memory increases, some
of the access request packets destined for the hot spot memory will be negatively acknowledged
and then resent. The above experiments indicate that all three resources contribute to performance
degradation. The hot spot memory itself is clearly a factor due to its high utilization, but we can
not actually saturate it. For both queue lengths of 9 and 36, workloads that cause mean remote
latencies to be in the hundreds, have hot memory utilizations between 85% and 98%. Memory queue
overflow is a factor since a memory queue of 36 has lower mean remote latencies and higher hot
memory utilizations than a memory queue of 36. Network saturation of rings near the hot memory
is a factor; measurements of the queues at the IRIs near the hot memory show long queues.
As mentioned in Section 2, our goal, with respect to hot spots, is to understand their significance
within this class of systems, instead of evaluating alternative solutions. Without techniques
to alleviate hot spots, our experiments indicate that such systems can become unstable at favorite
memory probabilities on the order of 1% to 2% under reasonable request rates. If the memory
queues are of inadequate length, significantly lower favorite memory probabilities can cause insta-
bility. The techniques proposed in the synchronization literature (such as separate synchronization
networks with combining or software algorithms using distributed data structures [20, 18]) may
well reduce the likelihood of hot spots. The simulated system does provide flow control in that the
number of cycles before a source processing module submits a retry is a function of the number
of retries previously sent [27]. More sophisticated flow control mechanisms could be considered.
One possibility is that the destination module could return with the negative acknowledgement an
indication of how congested it is. The source module could use this information to choose a wait
period.
3.6 Relative Speed
So far we have assumed a fixed ratio of the processor, memory, and ring speeds, namely R2M30.
In this section, we examine the effect of varying this ratio on the conclusions drawn in the previous
sections. Since processor speeds seem to be increasing faster than memory speeds, three cases
are considered for which the processor speed is increased by 100% and the memory cycle remains
unchanged (and thus is now cycles). The first case assumes that the ring cycle
time remains unchanged at 4 processor cycles, the second case assumes a 50% reduction and the
third case, assumes a 75% reduction in ring cycle time. The cases are denoted R4M60, R2M60,
and R1M60, respectively. We assume
Figure
10(a) plots efficiency versus request rate for the three cases as well as for R2M30, our
base case. The faster processor causes efficiency to drop somewhat in comparison to R2M30,
but, surprisingly, the drop is essentially identical for all three cases. (In noting the high efficiency
of R2M30 it is important to remember that, relatively speaking, the processor in this case is
only executing instructions at half the speed of the other cases.) Figure 10(b) plots maximum ring
utilization versus request rate for all of the cases and shows that the similar behavior with respect to
efficiency masks major differences with respect to ring utilization. (There is little difference among
the cases with respect to memory utilization.) In fact the ring saturation of R4M60 suggests that
processor efficiency for that case should start dropping if the offered load is increased further. To test
this hypothesis we redid the experiment with Figure 11(a) and Figure 11(b) plots
efficiency and maximum ring utilization, respectively, versus request rate for this communication
L a
y
y
c
l
e
Request Rate
(a)
F=1.0% 22
F=1.5% \Theta
L a
y
y
c
l e
Request Rate
(b)
F=1.0% 22
F=1.5% \Theta
\Theta
Figure
9: Hot spot. How large of a request rate can be supported for different favorite memory
probabilities before overall system performance degrades as measured by mean remote transaction
latency. Figure 9(a) has a memory queue 9 deep. Figure 9(b) has a memory queue 36 deep.10305070900
f
f
c
e
Request Rate
(a)
\Theta \Theta \Theta \Theta10305070900 0.01
M a x
R
U
l
Request Rate
(b)33
\Theta
\Theta
\Theta
\Theta
Figure
10: Effect of different processor and ring speeds.
\Theta \Theta \Theta \Theta10305070900 0.01
a x
R
U
l
Request Rate
(b)33
\Theta
\Theta
\Theta
\Theta
Figure
11: Effect of different processor and ring speeds. 1:0). The simulation for
R4M60 at aborts due to saturation. The efficiency and maximum ring utilization points
reported for that case is the average of the batches that completed.
locality. The results confirm our hypothesis by the sharp drop in processor efficiency for R4M60
when at request rate 0.03. The collapse in performance when increasing request rate beyond 0.02 is
so sharp, that the request rate 0.03 simulation does not complete. The point reported is the mean
of the batches before the simulation aborts.
We have not explicitly considered a faster ring cycle time in our experiments. The R1M60
results in the Figure 10 and Figure 11 plots make clear that a faster ring cycle time would have
little effect since at R1, the memory is the limiting factor, not the ring utilization 3 .
Returning to our base case, we then considered the effect of our assumption that a memory
cycle equals cycles. One of our conclusions had been that as we increased T to hide the
exposed transaction latency, memory saturation became a limiting factor. We then used multiple
memory banks to allow higher processor efficiency. Now we return to one memory bank to see how
sensitive our results are to memory cycle time. We assume
(0:95; 0:8; 1:0); and that a ring cycle equals 2 processor cycles and vary memory
cycle time from 10 to 40 processor cycles.
Figure
12(a) plots efficiency versus request rate as memory cycle time, M , is varied. As M
increases, efficiency drops which is partially due to an increase in the base transaction latency and
partially due to increased contention. To understand the degree to which contention contributes
to the drop in efficiency, we plot memory and maximum ring utilization versus request rate in
Figure
12(b) and Figure 12(c), respectively. As memory cycle time increases, memory utilization
increases and maximum ring utilization decreases. For the longer memory cycle times and high
3 Some ring-based systems (such as the KSR1 [8, 6]) have a ring cycle time faster than the processor cycle time.
Most often these systems (as does the KSR1) require several ring cycles for a transaction to pass through a ring node.
M40 \Theta
\Theta
\Theta
\Theta
U
l
z a
Request Rate
(b)33
\Theta
\Theta
\Theta
M a x
R
U
l
Request Rate
\Theta
\Theta
\Theta
Figure
12: Effect of different memory cycle times.
request rates, memory saturates and the maximum ring utilization is constrained by the memory
saturation. The case of significantly differs from larger M values in that for
memory utilization is less than maximum ring utilization. It appears that there is a significant
transition as M increases above 10 that involves whether memory utilization becomes a bottleneck
before the rings become a bottleneck. For higher M values, multiple memory banks are useful for
increasing efficiency. For lower M values, multiple memory banks will not be useful for increasing
efficiency, since ring saturation dominates and since there is little room for improvement in efficiency
given that T is raised adequately (efficiency is close to 100% with Figure 12).
3.7 Block Transfers
The increase in the ratio of memory cycle time to processor cycle time has motivated the development
of innovative memory technologies [24, 21]. The one approach we examine here is page-mode
access. With page-mode access, fetching the first word from memory requires raising both
the row-access strobe line and the column-access strobe line. However, subsequent words on the
same row in memory can be retrieved by just changing the column address and reraising the column-
access strobe line. Thus, the access time for the first word is unchanged, but the additional access
time for subsequent words on the same row is sharply reduced. Page-mode access is a natural
mechanism to support block transfer.
The result of page-mode access is to increase the memory bandwidth and to reduce the average
latency (as measured over all words fetched from memory). If we assume that the first word fetched
is a cache miss and the subsequent words are prefetches, then the use of page-mode access is only
advantageous to the extent that the cache hit ratio is increased by spatial locality through the
prefetching. Page-mode access had disadvantages in that memory and network utilization increase
and in that there is increased potential for cache pollution (to the extent that the prefetched words
are not referenced and cause the replacement of words that will be referenced). In addition, if the
processor stalls until the last word fetched reaches the processor (instead of just until the first word
is fetched), the stall time on cache misses further increases.
The extent to which the use of page-mode access increases the cache hit ratio is highly program-
dependent and out of the scope of this study. However, we can determine the extent to which the
cache hit ratio needs to increase in order to compensate for the disadvantages of page-mode access.
We conducted several experiments to determine the needed hit ratio increase at request rates 0.01%
through 0.05%.
In all the experiments we fixed the ring topology and the ratio of processor, ring, and memory
speeds to the base case: respectively. We assumed four memory
banks, no hot spot traffic, and that each word of a block after the first word takes 5 processor
cycles. We considered four cases of communication locality:
0:8. For all four communication localities we
considered having at most four outstanding transactions. For
we also considered having at most one outstanding transaction. We considered four block sizes: 1,
2, 4, and 8 words. For all block sizes we assumed that the processor stall ends when the first word
reaches the processor.
For all communication localities and maximum numbers of outstanding transactions consid-
ered, increasing the block size degrades performance at all request rates by all of the measures of
processor efficiency, mean request latency (measured by when the processor stall ends), memory
utilization, and maximum ring utilization. The percentage degradation depends on the measure.
The memory and maximum ring utilizations are more fundamental in the sense that their proximity
to saturation determine the contention components of processor efficiency and request latency.
Since the maximum ring utilizations are substantially higher than the memory utilizations in all
the experiments, we chose to report the change to the maximum ring utilization. We characterize
this change by the percentage the mean number of processor cycles between cache misses must
increase in order for the maximum ring utilization for a block size of b to equal the maximum ring
utilization for a block size of 1. Thus, implicitly we are identifying the necessary change in the
cache hit ratio.
The results for a communication locality of are
plotted in Figure 13. The behavior is surprising. The needed increase in the number of processor
cycle between cache misses is increasingly close to linearly independent of the request rate and
the maximum number of outstanding transactions. The behavior (not shown) for the other three
communication localities is similar. The percentage increases are also quite large. For example,
over all the experiments there is one case where a block size of two words requires an increase of
13%. For all other cases a block size of two words requires an increase of between 33% and 90%.
Whether an increase in the block size to two words would cause such an improvement depends on
the program and cache characteristics, but seems doubtful. Increasing the block size to more than
two words would require much additional improvement, which seems even more doubtful.
The above experiments assume that the processor stall ends after the first word of the block
reaches the processor. For the 8 word block size we repeated the above experiments considering
the case in which the processor stall ends after the last word of the block reaches the processor. In
comparison with the 8 word block size with the processor stall ending after the first word returns,
the change is minor. The memory and maximum ring utilizations decrease by a few percentage
points. As expected, the mean remote request latency increases by the extra length of the remaining
words of the block plus a contention factor, and the processor efficiency drops be a few percent.
We conclude that using block transfers through page-mode DRAM access does not appear
promising. Across many communication localities, request rates, maximum numbers of outstanding
transactions, and block sizes, the extra traffic resulting from fetching the extra words significantly
raises the network utilization as measured by the maximum ring utilization. The improvement in
cache hit ratio due to a larger block size is program and cache dependent. However, the improvement
c
l
e
I
e a s
e
Block Size
(a) T=4
\Theta
\Theta
\Theta
c
l
e
I
e a s
e
Block Size
(b) T=1
\Theta
\Theta
\Theta
Figure
13: Effect of increasing the block size through page-mode DRAM access. The effect is
indicated by the percentage that the mean number of processor cycles between cache misses must
increase in order for the maximum ring utilization with a block size of b to equal the maximum ring
utilization with a block size of 1. Communication locality is
in cache hit ratio needed to compensate for the increase in maximum ring utilization is large enough
to be doubtful that the improvement can be achieved. This result is specific to very large systems
with this type of network and might not apply to smaller systems or systems with other types of
networks.
Conclusions
4.1 Related Work
Beside Hector, several architectures based on slotted rings have been proposed including the CDC
CYBERPLUS [14], the Express Ring [5], the IEEE SCI (Scalable Coherent Interface) standard [17,
22], and the KSR-1 from Kendall Square Research [8, 6]. There have been performance studies
of single slotted rings, but not of ring-based hierarchies. Previous studies of other shared-memory
architectures with system models at the level of detail of our simulator have tended to only examine
small systems (100 processors or less) [31, 9].
The performance of branching factor topologies has been considered for bus hierarchies. The
experiments by Vernon, J-og, and Sohi [28] indicate that the best topologies had large branching
factors at the low levels and small branching factors near the root. In contrast, our contention-based
experiments indicate that for ring-based systems, topologies with close to balanced branching
factors are best. Somewhat larger branching factors at the lower ring levels sometimes do well, but
very large branching factors at low ring levels perform poorly.
A recent analytical study by Agarwal examines the effectiveness of multithreading on increasing
processor efficiency [4]. This is relevant since multithreading is one of the main approaches for
allowing multiple outstanding transactions. However, the system and workload characteristics
Agarwal considers are significantly different than ours. The network is a k-ary, n-cube, memory
access time is 10 processor cycles, traffic is uniform (any memory is equally likely to be the target
memory of a cache miss), the cache miss ratio is a function of the number of threads, and the
context switch overhead is nonzero. The largest difference is in the role of memory. The Agarwal
model does not take memory contention into account in determining transaction latency. Even
if the model did, the significance of memory saturation occurring before network saturation (and
thus, the importance of techniques to reduce memory utilization) would not be observed due to
memory access time being 10 cycles (see our Figure 12).
4.2
Summary
In this paper, we presented the results of a simulation study to assess the performance of a shared-memory
multiprocessor, based on a hierarchical network of rings. Ring based systems are of interest
because they can be run at very fast rates due to their use of short, unidirectional and point-to-
point connections, and because of the use of simple node interfaces and inter-ring connections.
Hierarchical systems are of interest because they are scalable. To ensure that the systems we
simulated were realistic and realizable, we based them on a specific system, Hector. To ensure that
the simulator is correct and captures subtle system interactions, the simulator was validated using
measurements of this prototype system.
Trace-driven simulation would produce questionable results and execution-driven simulation
using complete applications would take prohibitively long when simulating a system this large at
the level of detail desired for realism. Consequently, we introduced a synthetic workload model
using parameter value ranges based on experience with applications. With this workload model we
have shown how systems on the order of 1024 processors can be evaluated even with network and
memory simulation at a detailed level.
The main results of this study are:
ffl Without a high degree of locality in the data accesses of the applications running on this
type of system, ring contention will cause the memory access latency to increase significantly.
Locality can be achieved by proper data placement and by migrating and replicating data
objects.
ffl If processors do not have support multiple outstanding transactions (such as, prefetching,
multiple hardware contexts, release consistency), then processor efficiency will be low due to
the long memory access transaction latency even if there is no contention.
ffl With multiple outstanding transactions, memory will quickly saturate if there is only one
memory bank per processing module. Using multiple memory banks per processing module
is an effective way to reduce memory contention.
ffl It is necessary to limit the number of multiple outstanding transactions per processor in order
to limit network contention. The appropriate number is a tradeoff between concurrency and
contention, and we have found it to be sensitive to the degree of communication locality. We
have proposed an adaptive approach for adjust for this tradeoff as the communication locality
changes.
ffl With respect to topology, we have found that 1024 processor systems with between 4 and 6
levels in the hierarchy tend to perform better, although no one topology is best. We have
also found that well-balanced systems (with a similar branching factor at all levels) tend to
perform best. Slightly smaller rings at the root of the hierarchy reduce the potential for
congestion at that point.
ffl Saturation of hot spot memories causes substantial degradation of overall system performance
if 1% or more of memory traffic is targeted to a single memory.
ffl Doubling the processor speed and keeping the memory cycle time constant causes a drop in
processor efficiency due to the increased relative transaction latency. At the traffic patterns
considered, changes in the ring speed have a major effect on maximum ring utilization. The
effect on processor efficiency is again sensitive to communication locality.
ffl Varying memory cycle time significantly effects processor efficiency, memory utilization, and
maximum ring utilization. Most notably, with one memory bank, the presence of memory
saturation at offered loads below those that cause ring saturation only occurs when the
memory cycle time is 20 processor cycles or longer.
ffl Prefetching through the use of block transfers imposes an additional load on the network.
The improvement in cache hit ratio needed to compensate for the increase in maximum ring
utilization is large enough to make it doubtful such prefetching is advantageous.
Other issues that warrant further investigation include the effect of synchronization and flow
control mechanisms on hot spot traffic, the effectiveness of the proposed adaptive maximum number
of outstanding transactions, the use of other DRAM techniques to compensate for long memory
access times, and including in the traffic pattern, page migration and replication transactions.
Acknowledgements
We thank Darrell Kindred for his work on an early version of the simulator and Keith Farkas for
his thorough comments on a draft of the paper.
--R
Comparison of hardware and software cache coherence schemes.
Performance analysis of multiprocessor mesh interconnection networks with wormhole routing.
Limits on interconnection network performance.
Performance tradeoffs in multithreaded processors.
Cache coherence on a slotted ring.
Ultracomputers: a teraflop before its time.
Improved multithreaded techniques for hiding communication latency in multiprocessors.
Overview of the KSR 1 computer system.
The impact of synchronization and granularity on parallel systems.
Reducing memory latency via non-blocking and prefetching caches
How DEC developed Alpha.
Multiprocessor simulation and tracing using Tango.
Cache consistency in hierarchical-ring-based multi- processors
CYBERPLUS and MAP V interprocessor communications for parallel and array processor systems.
Hiding memory latency using dynamic scheduling in shared-memory multiprocessors
Comparative evaluation of latency reducing and tolerating techniques.
The scalable coherent interface and related standards projects.
An effective synchronization network for hot spot accesses.
Simulating Computer Systems: Techniques and Tools.
Synchronization without contention.
Fast computer memories.
Ballot Review Committee of the IEEE Microprocessor Standards Committee.
"Hot spot"
Semiconductor Memories
Multicomputer Networks: Message-Based Parallel Processing
Performance of the SCI ring.
Experiences with the Hector multiprocessor.
Performance analysis of hierarchical cache-consistent multiprocessors
Hector: A hierarchically structured shared-memory multiprocessor
Distributing hot-spot addressing in large-scale multipro- cessors
A performance study of memory consistency models.
--TR
Simulating computer systems: techniques and tools
Distributing hot-spot addressing in large-scale multiprocessors
Multicomputer networks: message-based parallel processing
Performance analysis of hierarchical cache-consistent multiprocessors
Hector
Synchronization without contention
Comparative evaluation of latency reducing and tolerating techniques
Comparison of hardware and software cache coherence schemes
Ultracomputers: a teraflop before its time
How DEC developed Alpha
A performance study of memory consistency models
Hiding memory latency using dynamic scheduling in shared-memory multiprocessors
Improved multithreading techniques for hiding communication latency in multiprocessors
Performance of the SCI ring
Reducing memory latency via non-blocking and prefetching caches
An effective synchronization network for hot-spot accesses
Cache consistency in hierarchical-ring-based multiprocessors
The impact of synchronization and granularity on parallel systems
The Scalable Coherent Interface and Related Standards Projects
Limits on Interconnection Network Performance
Performance Tradeoffs in Multithreaded Processors
--CTR
V. Carl Hamacher , Hong Jiang, Hierarchical Ring Network Configuration and Performance Modeling, IEEE Transactions on Computers, v.50 n.1, p.1-12, January 2001
Jong Wook Kwak , Chu Shik Jhon, Torus Ring: improving performance of interconnection network by modifying hierarchical ring, Parallel Computing, v.33 n.1, p.2-20, February, 2007
Karim Harzallah , Kenneth C. Sevcik, Predicting application behavior in large scale shared-memory multiprocessors, Proceedings of the 1995 ACM/IEEE conference on Supercomputing (CDROM), p.53-es, December 04-08, 1995, San Diego, California, United States
Fadi N. Sibai, Performance of the hyper-ring multicomputer, Proceedings of the 1998 ACM symposium on Applied Computing, p.598-606, February 27-March 01, 1998, Atlanta, Georgia, United States
Gang Han , Robert H. Klenke , James H. Aylor, Performance Modeling of Hierarchical Crossbar-Based Multicomputer Systems, IEEE Transactions on Computers, v.50 n.9, p.877-890, September 2001
Fadi N. Sibai, Optimal Clustering of Hierarchical Hyper-Ring Multicomputers, The Journal of Supercomputing, v.14 n.1, p.53-76, July 1999 | shared memory multiprocessors;parallel architectures;hector prototype;performance;hot spots;communication locality;prefetching;slotted unidirectional ring;memory banks;performance evaluation;shared memory systems;large-scale systems;ring-based;large scale parallel systems |
626879 | Computing Display Conflicts in String Visualization. | Strings are used to represent a variety of objects such as DNA sequences, text, and numerical sequences. A model for a system for the visualization and analysis of strings was proposed by D. Mehta and S. Sahni (1992). The problem of display conflicts that arise in this model was identified and methods to overcome it were suggested. These methods require the computation of display conflicts. We present efficient algorithms to compute display conflicts. | Introduction
The string data type is used to represent a number of objects such as text strings, DNA
or protein sequences in molecular biology, numerical sequences, etc. Research in molecular
biology, text analysis, and interpretation of numerical data involves the identification of
recurring patterns in data and hypothesizing about their causes and/or effects [2, 3]. Detecting
patterns visually in long strings is tedious and prone to error. In [1], a model was
proposed to alleviate this problem. The model consists of identifying all recurring patterns
in a string and highlighting identical patterns in the same color.
We first discuss the notion of maximal patterns. Let abc be a pattern occurring m times
in a string S. Let the only occurrences of ab be those which occur in abc. Then, the
pattern ab is not maximal in S as it is always followed by c. The notion of maximality is
motivated by the assumption that in most applications, longer patterns are more significant
than shorter ones. Maximal patterns that occur at least twice are known as displayable
entities.
The problem of identifying all displayable entities and their occurrences in S can be solved
from the results in [4]. Once all displayable entities and their occurrences are obtained, we
are confronted with the problem of color coding them. In the string,
abc and def are the only displayable entities. So, S would be displayed by highlighting abc
in one color and def in another as shown in Figure 1.
In most strings, we encounter the problem of conflicts: Consider the string
cdefcdegabchabcde and its displayable entities, abc and cde (both are maximal and occur
thrice). So, they must be highlighted in different colors. Notice, however, that abc and cde
both occur in the substring abcde, which occurs as a suffix of S. Clearly, both displayable
entities cannot be highlighted in different colors in abcde as required by the model. This is
Figure
2: Alternative display model
a consequence of the fact that the letter c occurs in both displayable entities. This situation
is known as a prefix-suffix conflict (because a prefix of one displayable entity is a suffix of
the other). Note, also, that c is a displayable entity in S. Consequently, all occurrences
of c must be highlighted in a color different from those used for abc and cde. But this is
impossible as c is a subword of both abc and cde. This situation is referred to as a subword
conflict. The problem of subword conflicts may be partially alleviated by employing more
sophisticated display models as in Figure 2.
Irrespective of the display model used, it is usually not possible to display all occurrences
of all displayable entities. We are therefore forced into having to choose which ones
to display. There are three ways of achieving this:
Interactive : The user selects occurrences interactively by using his/her judgement. Typi-
cally, this would be done by examining the occurrences which are involved in a conflict and
choosing one that is the most meaningful.
weight is assigned to each occurrence. The higher the weight, the
greater the desirability of displaying the corresponding occurrence. Criteria that could be
used in assigning weights to occurrences include: length, position, number of occurrences
of the pattern, semantic value of the displayable entity, information on conflicts, etc. The
information is then fed to a routine which selects a set of occurrences so that the sum of
their weights is maximized (algorithms for these are discussed in [1]).
In a practical environment, the most appropriate method would be a
hybrid of the interactive and automatic approaches described above. The user could select
some occurrences that he/she wants included in the final display. The selection of the
remaining occurrences can then be performed by a routine which maximizes the display
information.
All the methods described above require knowledge about the conflicts, either to choose
which occurrences to display (interactive) or to assign weights to the occurrences (au-
tomatic). Automatic methods would require a list of all the conflicts, while interactive
methods require information about conflicts local to a particular segment of the string.
Since prefix suffix and subword conflicts are handled differently by different display models,
separate lists for each are required.
In this paper we identify a family of problems relating to the identification of conflicts
at various levels of detail. Problems relating to statistical information about conflicts are
also identified. Efficient algorithms for these problems are presented. All algorithms make
use of the symmetric compact directed acyclic word graph (scdawg) data structure [4] and
may be thought of as operations or traversals of the scdawg. The scdawg, which is used
to represent strings and sets of strings evolved from other string data structures such as
position trees, suffix trees, and directed acyclic word graphs [5, 6, 7, 8].
Section 2 contains preliminaries including definitions of displayable entities, conflicts,
and scdawgs. Section 3 presents optimal algorithms to determine whether a string has
conflicts and to compute subword and prefix suffix conflicts in a string. Sections 4, 5, and
6 discuss related size restricted, pattern restricted, and statistical problems and show how
to implement these by modifying the algorithms of Section 3. Finally, Section 7 presents
experimental data on the run times of some of these algorithms.
Preliminaries
2.1 Definitions
string of length n, whose characters are chosen from a fixed alphabet,
\Sigma, of constant size. A pattern in S is said to be maximal iff its occurrences are not all
preceded by the same letter, nor all followed by the same letter. Consider the string
abczdefydefxabc. Here, abc and def are the only maximal patterns. The occurrences of def
are preceded by different letters (z and y) and followed by different letters (y and x). The
occurrences of abc are not preceded by the same letter (the first occurrence does not have
a predecessor) nor followed by the same letter. However, de is not maximal because all its
occurrences in S are followed by f .
A pattern is said to be a displayable entity (or displayable) iff it is maximal and occurs
more than once in S (all maximal patterns are displayable entities with the exception of S,
which occurs once in itself).
(i) A subword conflict between two displayable entities, D 1 and D 2 , in S exists iff D 1 is a
substring of D 2 .
(ii) A prefix-suffix conflict between two displayable entities, D 1 and D 2 , in S exists iff there
exist substrings, S s in S such that S p SmS s occurs in S, S p
. The string, Sm is known as the intersection of the conflict; the conflict is said to occur
between D 1 and D 2 with respect to Sm .
2.2 Symmetric Compact Directed Acyclic Word Graphs (SCDAWGs)
An scdawg, SCD(S), corresponding to a string S is a directed acyclic graph defined by
a set of vertices, V (S), a set, R(S), of labeled directed edges called right extension (re)
edges, and a set, L(S), of labeled directed edges called left extension (le) edges . Each
vertex of V (S) represents a substring of S. Specifically, V (S) consists of a source (which
represents the empty word, ), a sink (which represents S), and a vertex corresponding to
each displayable entity of S.
Let de(v) denote the string represented by vertex, v (v ffl V (S)). Define the implication,
imp(S; ff), of a string ff in S to be the smallest superword of ff in fde(v): v ffl V (S)g, if such
a superword exists. Otherwise, imp(S; ff) does not exist.
Re edges from a vertex, v 1 , are obtained as follows: for each letter, x, in \Sigma, if imp(S; de(v 1 )x)
exists and is equal to de(v 2 there exists an re edge from v 1 to v 2 with
label xfl. If fi is the empty string, then the edge is known as a prefix extension edge. Le
edges from a vertex, v 1 , are obtained as follows: for each letter, x, in \Sigma, if imp(S; xde(v 1
exists and is equal to de(v 2 there exists an le edge from v 1 to v 2 with
label flx. If fi is the empty string, then the edge is known as a suffix extension edge.
Figure
3 shows V (S) and R(S) corresponding to are
gabcde
de
e
de
c
abc
bc
de
gabcde
fabcgabcde
gabcde
fabcgabcde
cde
sink
abc
c
source
Figure
3: Scdawg for
the displayable entities of S. There are two outgoing re edges from the vertex representing
abc. These edges correspond to g. imp(S;
Consequently, both edges are incident on the sink. There are no edges corresponding to the
other letters of the alphabet as imp(S; abcx) does not exist for x ffl fa; b; c; e; fg.
The space required for SCD(S) is O(n) and the time needed to construct it is O(n)
[5, 4]. While we have defined the scdawg data structure for a single string, S, it can be
extended to represent a set of strings.
2.3 Computing Occurrences of Displayable Entities
Figure
4 presents an algorithm for computing the end positions of all the occurrences of
de(v) in S. This is based on the outline provided in [4]. The complexity of Occurrences(S; v;
is proportional to the number of occurrences of de(v) in S.
Algorithm A
Procedure Occurrences(S:string,u:vertex,i:integer)
begin
if de(u) is a suffix of S
for each right out edge, e, from u do
begin
Let w be the vertex on which e is incident;
Figure
4: Algorithm for obtaining occurrences of displayable entities
2.4 Prefix and Suffix Extension Trees
The prefix extension tree, PET (S; v), at vertex v in V (S) is a subgraph of SCD(S) consisting
of (i) the root, v, (ii) PET (S; w) defined recursively for each vertex w in V (S) such
that there exists a prefix extension edge from v to w, and (iii) the prefix extension edges
leaving v. The suffix extension tree, SET (S; v), at v is defined analogously.
In
Figure
consists of the vertices representing c and cde, and
the sink. It also includes the prefix extension edges from c to cde and from cde to the sink.
Similarly, SET (S; v), consists of the vertices representing c and abc and the suffix
extension edge from c to abc (not shown in the figure).
contains a directed path from v to a vertex, w, in V (S)
iff de(v) is a prefix (suffix) of de(w).
Proof If there is a directed path in PET (S; v), from v to some vertex, w, then from the
definition of a prefix extension edge and the transitivity of the "prefix of " relation, de(v)
must be a prefix of de(w).
If de(v) is a prefix of de(w), then there exists a series of re edges from v to w, such that
de(v), when concatenated with the labels on these edges yields de(w). But, each of these
re edges must be a prefix extension edge. So a directed path from v to w exists in the
The proof for SET (S; v) is analogous. 2
Computing Conflicts
3.1 Algorithm to determine whether a string is conflict free
Before describing our algorithm to determine if a string is free of conflicts, we establish
some properties of conflict free strings that will be used in this algorithm.
Lemma 2 If a prefix-suffix conflict occurs in a string S, then a subword conflict must occur
in S.
Proof If a prefix-suffix conflict occurs between two displayable entities, W 1 and W 2 then
there exists W p WmW s such that W p are
followed by the same letter and W 2 isn't always preceded by the
same letter. I.e., Wm isn't always followed by the same letter and Wm isn't always preceded
by the same letter. So, Wm is maximal. But, W 1 occurs at least twice in S (since W 1 is a
displayable entity). So Wm occurs at least twice (since Wm is a subword of W 1 ) and is a
displayable entity. But, Wm is a subword of W 1 . So a subword conflict occurs between Wm
and W 1 . 2
Corollary 1 If string S is free of subword conflicts, then it is free of conflicts.
Lemma 3 de(w) is a subword of de(v) in S iff there is a path comprising right extension
and suffix extension edges from w to v.
Proof From the definition of SCD(S), if there exists an re edge from u to v, then de(u)
is a subword of de(v). If there exists a suffix extension edge from u to v, then de(u) is a
suffix (and therefore a subword) of de(v). If there exists a path comprising right and suffix
extension edges from w to v, then by transitivity, de(w) is a subword of de(v).
Algorithm NoConflicts(S)
1. Construct SCD(S).
2. Compute
3. Scan all right and suffix extension out edges from each element of V source . If any edge points to
a vertex other than the sink, then a conflict exists. Otherwise, S is conflict free.
Figure
5: Algorithm to determine whether a string is Conflict Free
If de(w) is a suffix of de(v), then there is a path (Lemma 1) of suffix extension edges
from w to v. If de(w) is a subword, but not a suffix of de(v), then from the definition of an
scdawg, there is a path of re edges from w to a vertex representing a suffix of de(v). 2
source denote all vertices in V (S) such that an re or suffix extension edge exists
between the source vertex of SCD(S) and each element of V source .
Lemma 4 String S is conflict free iff all right extension or suffix extension edges leaving
vertices in V source end at the sink vertex of SCD(S).
Proof A string, S is conflict free iff there does not exist a right or suffix extension edge
between two vertices, neither of which is the source or sink of SCD(S) (Corollary 1 and
Lemma 3).
Assume that S is conflict free. Consider a vertex, v, in V source . If v has right or suffix
extension out edge ! v; w ?, then v 6= sink. If w 6= sink, then de(v) is a subword of de(w)
and the string is not conflict free. This contradicts the assumption on S.
Next, assume that all right and suffix extension edges leaving vertices in V source end at
the sink vertex. Clearly, there cannot exist right or suffix extension edges between any two
vertices, v and w (v 6= sink, w 6= sink) in V source . Further, there cannot exist a vertex, x,
in V (S) (x 6= source, x 6= sink) such that x
source . For such a vertex to exist, there
must exist a path consisting of right and suffix extension edges from a vertex in V source to
x. Clearly, this is not true. So, S is conflict free. 2
The preceding development leads to algorithm NoConflicts (Figure 5).
Theorem 1 Algorithm NoConflicts is both correct and optimal.
Proof Correctness is an immediate consequence of Lemma 4. Step 1 takes O(n) time
[4]. Step 2 takes O(1) time since jV source j ! 2j\Sigmaj. Step 3 takes O(1) time since the number
of out edges leaving V source is less than 4j\Sigma 2 j. So, NoConflicts takes O(n) time, which is
optimal. Actually, steps 2 and 3 can be merged into step 1 and the construction of SCD(S)
aborted as soon as an edge that violates Lemma 4 is created. 2
3.2 Subword Conflicts
Consider the problem of finding all subword conflicts in string S. Let k s be the number
of subword conflicts in S. Any algorithm to solve this problem requires (i) O(n) time to
read in the input string and (ii) O(k s ) time to output all subword conflicts. So, O(n
is a lower bound on the time complexity for this problem. For the string
This is an upper bound on the number of
conflicts as the maximum number of substring occurrences is O(n 2 ) and in the worst case,
all occurrences conflict with each other. In this section, a compact method for representing
conflicts is presented. Let k sc be the size of this representation. k sc is n 3 =6
or O(n 3 ), for a n . Compaction never increases the size of the output and may yield up to a
factor of n reduction, as in the example. The compaction method is described below.
Consider S= abcdbcgabcdbchbc. The displayable entities are D
The ending positions of D 1 are 6 and 13 while those of D 2 are 3, 6, 10, 13, and 16. A
list of the subword conflicts between D 1 and D 2 can be written as: f(6,3), (6,6), (13,10),
(13,13)g. The first element of each ordered pair is the last position of the instance of the
superstring (here, D 1 ) involved in the conflict; the second element of each ordered pair is
the last position of the instance of the substring (here, D 2 ) involved in the conflict.
The cardinality of the set is the number of subword conflicts between D 1 and D 2 . This
is given by: frequency(D 1 )\Lambdanumber of occurrences of D 2 in D 1 . Since each conflict is represented
by an ordered pair, the size of the output is 2(frequency(D 1 )\Lambdanumber of occurrences
of D 2 in D 1 ).
Observe that the occurrences of D 2 in D 1 are in the same relative positions in all instances
of D 1 . It is therefore possible to write the list of subword conflicts between D 1 and D 2 as:
(6,13):(0,-3). The first list gives all the occurrences in S of the superstring (D 1 ), and the
second gives the relative positions of all the occurrences of the substring (D 2 ) in the superstring
(D 1 ) from the right end of D 1 . The size of the output is now: frequency(D 1 )+number
of occurrences of D 2 in D 1 . This is more economical than our earlier representation.
In general, a substring, D i , of S will have conflicts with many instances of a number of
displayable entities (say, D z ) of which it (D i ) is the superword. We would then
write the conflicts of D i as:
(l 1
z ).
Here, the l i 's represent all the occurrences of D i in S; the l 0
z s represent the
relative positions of all the occurrences of D z in D i . One such list will be required
for each displayable entity that contains other displayable entities as subwords. The
following equalities are easily obtained:
Size of Compact Representation =
Size of Original Representation
f i is the frequency of D i (only D i 's that have conflicts are considered). r ij is the frequency
of D j in one instance of D i . D represents the set of all displayable entities of S. D s
represents
the set of all displayable entities that are subwords of D i .
defined as the subgraph of SCD(S) which consists of the set of
vertices, SV (S; v) ae V (S) which represents displayable entities that are subwords of de(v)
and the set SE(S; v) of all re and suffix extension edges that connect any pair of vertices
in SV (S; v). Define SGR (S; v) as SG(S; v) with the directions of all the edges in SE(S; v)
reversed.
Lemma 5 SG(S; v) consists of all vertices, w, such that a path comprising right or suffix
extension edges joins w to v in SCD(S).
Proof Follows from Lemma 3. 2
2 for each vertex, v, in SCD(S) do
5 for all vertices, u, such that a right or suffix extension edge, ! u; v ?, is incident on v do
6 if u 6= source then
8 for each vertex, v, in SCD(S) such that v 6= sink and v:subword is true do
9 GetSubwords(v);
Procedure GetSubwords(v)
7 for each vertex, x (6= source), in reverse topological order of SG(S; v) do
9 if de(x) is a suffix of de(v) then x:sublist = f0g else x:sublist = fg;
for each vertex, w, in SG(S; v) on which an re edge, e from x is incident do
12 for each element, l, in w:sublist do
x:sublist
14 end;
Figure
Optimal algorithm to compute all subword conflicts
Algorithm B of Figure 6 computes the subword conflicts of S. The subword conflicts are
computed for precisely those displayable entities which have subword displayable entities.
Lines 4 to 6 of Algorithm B determine whether de(v) has subword displayable entities. Each
incoming right or suffix extension edge to v is checked to see whether it originates at the
source. If any incoming edge originates at a vertex other than source, then v.subword is set
to true (Lemma 3). If all incoming edges originate from source, then v.subword is set to
false. Procedure Getsubwords(v), which computes the subword conflicts of de(v) is invoked
if v:subword is true.
Procedure Occurrences(S; v; (line 2 of GetSubwords) computes the occurrences of de(v)
in S and places them in v.list. Procedure SetUp in line 5 traverses SGR (S; v) and initializes
fields in each vertex of SGR (S; v) so that a reverse topological traversal of SG(S; v) may be
subsequently performed. Procedure SetSuffixes in line 6 marks vertices whose displayable
entities are suffixes of de(v). This is accomplished by following the chain of reverse suffix
extension pointers starting at v and marking the vertices encountered as suffixes of v.
A list of relative occurrences, sublist, is associated with each vertex, x, in SG(S; v).
x.sublist represents the relative positions of de(x) in an occurrence of de(v). Each relative
occurence is denoted by its position relative to the last position of de(v) which is represented
by 0. If de(x) is a suffix of de(v) then x.sublist is initialized with the element, 0.
The remaining elements of x.sublist are computed from the sublist fields of vertices, w, in
v) such that a right extension edge goes from x to w. Consequently, w.sublist must
be computed before x.sublist. This is achieved by traversing SG(S; v) in reverse topological
order [9].
Lemma 6 x:sublist for vertex, x, in SG(S; v) contains all relative occurrences of de(x) in
de(v) on completion of GetSubwords(v).
Proof The correctness of this lemma follows from the correctness of procedure Occurrences(S; v;
of Section 2.3 and the observation that lines 7 to 15 of procedure GetSubwords achieve the
same effect as Occurrences(S; v; 0) in SG(S; v). 2
Theorem space and is therefore optimal.
Proof Computing v:subword for each vertex, v, in V (S) takes O(n) time as constant time
is spent at each vertex and edge in SCD(S). Consider the complexity of GetSubwords(v).
Lines 2 and 3 take O(jv:listj) time. Let the number of vertices in SG(S; v) be m. Then the
number of edges in SG(S; v) is O(m). Line 5 traverses SG(S; v) and therefore consumes
O(m) time. Line 6, in the worst case, could involve traversing SG(S; v) which takes O(m)
time. Computing the relative occurrences of de(x) in de(v) (lines 9-15) takes O(jx:sublistj)
time for each vertex, x, in SG(S; v). So, the total complexity of GetSubwords(v) is O(jv:listj+
However, m is O(
xfflSV (S;v);x6=v jx:sublistj), since jx:sublistj 1 for each x ffl SG(S; v).
xfflSV (S;v);x6=v jx:sublistj is the size of the output for GetSubwords(v).
So, the over all complexity of algorithm B is O(n
for
3.3 Prefix Suffix Conflicts
As with subword conflicts, the lower bound for the problem of computing prefix-suffix
conflicts is O(n+ k p ), where k p is the number of prefix-suffix conflicts in S. For
is which is also the upper bound on k p .
Unlike subword conflicts, it is not possible to compact the output representation.
Let w and x, respectively, be vertices in SET (S; v) and PET (S; v). Let
to be the vertex representing
imp(S; WwW v W x ), if such a vertex exists. Otherwise, Pshadow(w; v;
define Pimage(w; v;
W a WwW v W x for some (possibly empty) string, W a . Otherwise, Pimage(w; v;
For each vertex, w in SET (S; v), a shadow prefix dag, SPD(w; v), rooted at vertex w is
comprised of the set of vertices fPshadow(w; v; x)j x on PET (S; v), Pshadow(w; v; x) 6=
nilg.
Figure
7 illustrates these concepts. Broken lines represent suffix extension edges, dotted
lines represent right extension edges, and solid lines represent prefix extension edges.
ff
l
c
a
s
r
z
x
Figure
7: Illustration of prefix and suffix trees and a shadow prefix dag
SET (S; v), PET (S; v), and SPD(w; v) have been enclosed by dashed, solid, and dotted
lines respectively. We have: Pshadow(w; v;
Lemma 7 A prefix-suffix conflict occurs between two displayable entities, W
and with respect to a third displayable entity
in SET (S; v) and x occurs in PET (S; v), and (ii) Pshadow(w; v; x) 6= nil. The number of
conflicts between de(w) and de(x) with respect to de(v) is equal to the number of occurrences
of de(Pshadow(w; v; x)) in S.
Proof By definition, a prefix-suffix conflict occurs between displayable entities W 1 and
with respect to Wm iff there exists W p WmW s in S, where W
WmW s .
Clearly, Wm is a suffix of W 1 and Wm a prefix of W 2 iff w occurs in SET (S; v) and x
occurs in PET (S; v). W p WmW s occurs in S iff imp(S; W p WmW s
nil. The number of conflicts between de(w) and de(x) is equal to the number of occurrences
of imp(S; W p WmW s
Lemma 8 If a prefix-suffix conflict does not occur between de(w) and de(x) with respect
to de(v), where w occurs in SET (S; v) and x occurs in PET (S; v), then there are no
prefix-suffix conflicts between any displayable entity which represents a descendant of w
in SET (S; v) and any displayable entity which represents a descendant of x in PET (S; v)
with respect to de(v).
Proof Since w is in SET (S; v) and x is in PET (S; v), we can represent de(w) by W p de(v)
and de(x) by de(v)W s . If no conflicts occur, then W p de(v)W s does not occur in S. The
descendants of w in SET (S; v) will represent displayable entities of the form W a
W a W p de(v), while the descendants of x in PET (S; v) will represent displayable entities
of the form de(x)W are substrings of S. For a prefix-suffix
e
f
r
y
z
x
Figure
8: Illustration of conditions for Lemma 9
conflict to occur between W a de(w) and de(x)W b with respect to de(v), W a W p de(v)W s W b
must exist in S. However, this is not possible as W p de(v)W s does not occur in S and the
result follows. 2
Lemma 9 In SCD(S), if (i) there is a prefix extension edge, e, from x to z with label aα,
(ii) Pshadow(w, v, x) = y, and (iii) there is a right extension edge, f, from y to u with label aβ,
then Pshadow(w, v, z) = u.
Proof Let de(w) = Ww, de(x) = de(v)Wx, and de(y) = Wa Ww de(v)Wx for some possibly empty string Wa.
Let de(u) = Wb Wa Ww de(v)Wx aβ for some string Wb.
To prove the lemma, we must show that Pshadow(w, v, z) = u, i.e., that (i) Ww de(v)Wx aα is a subword of de(u) and (ii) de(u) is
the smallest superword of Ww de(v)Wx aα represented by a vertex in SCD(S).
(i) Assume that Ww de(v)Wx aα is not a subword of de(u). Then α is not a prefix of β.
Case 1: β is a proper prefix of α.
Since Wb Wa Ww de(v)Wx aβ is maximal, its occurrences are not all followed by the same
letter. This is true for any of its suffixes. In particular, all occurrences of de(v)Wx aβ cannot
be followed by the same letter. Similarly, all occurrences of de(v)Wx aβ cannot be preceded
by the same letter, as it is a prefix of de(v)Wx aα = de(z). So, de(v)Wx aβ is a displayable
entity of S. Consequently, the prefix extension edge from x corresponding to the letter a
must be directed to the vertex representing de(v)Wx aβ. This is a contradiction.
Case 2: aβ matches aα in the first k characters, but not in the (k+1)th character (1 ≤ k < min(|aα|, |aβ|)).
Let aγ be the common prefix of length k, and let α1 and β1 be the differing (k+1)th characters. Clearly, the strings de(v)Wx aγα1 and
Wb Wa Ww de(v)Wx aγβ1 occur in S. I.e., all occurrences of de(v)Wx aγ cannot be followed
by the same letter. Further, all occurrences of de(v)Wx aγ cannot be preceded by the same
letter, as it is a prefix of de(v)Wx aα = de(z). So, it is a displayable
entity of S. Consequently, the prefix extension edge from x corresponding to the letter a must be directed to the vertex
representing de(v)Wx aγ. This results in a contradiction. Thus, α is a prefix of β.
(ii) From (i), α is a prefix of β. Assume that Wb Wa Ww de(v)Wx aβ is not the smallest
superword of Ww de(v)Wx aα. Since de(y) = Wa Ww de(v)Wx is the smallest
superword of Ww de(v)Wx, the smallest superword of Ww de(v)Wx aα must be of the form
Wb1 Wa Ww de(v)Wx aγ, where α is a prefix of γ which is a proper prefix of β and/or Wb1 is a
proper suffix of Wb. But, the right out edge, f, from y points to the smallest superword of
Wa Ww de(v)Wx a (from the definition of SCD(S)), which is Wb Wa Ww de(v)Wx aβ. So, Wb1 = Wb and γ = β,
which is a contradiction. 2
Lemma 10 In SCD(S), if (i) there is a path of prefix extension
edges from x to x1 (let the concatenation of their labels be aα), (ii) Pshadow(w, v, x) = y, (iii) there is a prefix extension
edge from x1 to z with label bγ, and (iv) there is a right extension edge, f, from y to u with
label aαbβ, then Pshadow(w, v, z) = u.
Proof Similar to proof of Lemma 9. 2
Figure
9: Illustration of conditions for Lemmas 10 and 11
Algorithm C
1 Construct SCD(S).
2 for each vertex, v, in SCD(S) do
3    NextSuffix(v, v);
Procedure NextSuffix(current, v);
1 for each suffix extension edge <current, w> do
2    {there can only be one suffix extension edge from current to w}
3    begin
4       exist := false;
5       ShadowSearch(v, w, v, w);
6       if exist then NextSuffix(w, v);
7    end
Figure
10: Optimal algorithm to compute all prefix-suffix conflicts
Lemma 11 In Lemma 9 or Lemma 10, if |label(f)| ≤ the sum of the lengths of the labels
of the edges on the prefix extension edge path P from x to z, then label(f) = the concatenation
of the labels on P and Pshadow(w, v, z) = u.
Proof From Lemma 10, the concatenation of the labels of the edges of P is a prefix of
label(f). But, |label(f)| ≤ the sum of the lengths of the labels of the edges on P. I.e., label(f)
= the concatenation of the labels of the edges on P. 2
of x in PET (S; v).
Proof Follows from Lemmas 7 and 8. 2
Algorithm C in Figure 10 computes all prefix-suffix conflicts of S. Line 1 constructs
SCD(S). Lines 2 and 3 compute all prefix-suffix conflicts in S by separately computing for
each displayable entity, de(v), all the prefix-suffix conflicts of which it is the intersection.
Procedure NextSuffix(current,v) computes all prefix-suffix conflicts between displayable
entities represented by descendants of current in SET (S; v) and displayable entities represented
by descendants of v in PET (S; v) with respect to de(v) (so the call to NextSuffix(v,v)
in line 3 of Algorithm C computes all prefix-suffix conflicts with respect to de(v)). It does
so by identifying SPD(w; v) for each child, w, of current in SET (S; v). The call to Shad-
owSearch(v,w,v,w) in line 5 identifies SPD(w; v) and computes all prefix-suffix conflicts
between de(w) and displayable entities represented by descendants of v in PET (S; v) with
respect to de(v). If ShadowSearch(v,w,v,w) does not report any prefix-suffix conflicts then
the global variable exist is unchanged by ShadowSearch(v,w,v,w) (i.e., exist = false, from
line 4). Otherwise, it is set to true by ShadowSearch. Line 6 ensures that NextSuffix(w,v)
is called only if ShadowSearch(v,w,v,w) detected prefix suffix conflicts between de(w) and
displayable entities represented by descendants of v in PET (S; v) with respect to de(v)
(Lemma 8).
For each descendant, q, of vertex x in PET (S; v), procedure ShadowSearch(v,w,x,y)
computes all prefix suffix conflicts between de(w) and de(q) with respect to de(v). y represents
Pshadow(w; v; x). We will show that all calls to ShadowSearch maintain the invariant
(which is referred to as the image invariant hereafter) that y = Pshadow(w, v, x) ≠
nil. Notice that the invariant holds when ShadowSearch is called from NextSuffix, as
Pshadow(w, v, v) = w (so y = w when x = v). The for statement in line 1 examines each prefix out edge from x. Lines 3
to 28 compute all prefix suffix conflicts between de(w) and displayable entities represented
by vertices in PET (S; z), where z is the vertex on which the prefix extension edge from x
is incident. The truth of the condition in the for statement of line 1, line 4 and the truth
of the condition inside the if statement of line 5 establish that the conditions of Lemma 9
are satisfied prior to the execution of lines 8 and 9. The truth of the comment in line 8
and the correctness of line 9 are established by Lemma 9. Procedure ListConflicts of line 9
lists all prefix suffix conflicts between de(w) and de(z) with respect to de(v). Similarly, the
truth of the condition inside the while statement of line 11, lines 13 and 14, and the truth
of the condition inside the if statement of line 15 establish that the conditions of Lemma
are satisfied prior to the execution of lines 18-20. Again, the correctness of lines 18-20 are
established by Lemma 10. If done remains false on exiting the while loop, the condition
of the if statement of line 15 must have evaluated to true. Consequently, the conditions of
Lemma 10 apply. Further, since the while loop of line 11 terminated, the additional condition
of Lemma 11 is also satisfied. Hence, from Lemma 11, Pshadow(w, v, z) = u, and the
Procedure ShadowSearch(v; w; x; y);
1 for each prefix extension edge <x, z> do
2    {There can only be one prefix extension edge from x to z}
first character in label(e);
5 if there is a right extension edge, whose label starts with fc
6 then
9
11 while (not done) and (|label(f)| > |label(e)|) do
14 th character in label(f).
15 if there is a prefix extension edge starting with nc
22 else
26 ShadowSearch(v,w,z,u);
28 end
29 end
Figure
11: Algorithm for shadow search
image invariant for the recursive call to ShadowSearch(v; w; z; u) is maintained. Line 27 sets
the global variable exist to true since the execution of the then clause of the if statement of
line 5 ensures that at least one prefix-suffix conflict is reported by ShadowSearch(v; w; v; w)
(Lemmas 7 and 9). exist remains false only if the then clause of the if statement (line 5)
is never executed.
Theorem 3 Algorithm C computes all prefix-suffix conflicts of S in O(n + k_p)
time, which is optimal.
Proof Line 1 of Algorithm C takes O(n) time [4]. The cost of lines 2 and 3 without
including the execution time of NextSuffix(v; v) is O(n).
Next, we show that NextSuffix(v, v) takes O(k_v) time, where k_v is the number of prefix-suffix
conflicts with respect to v (i.e., k_v represents the size of the output of NextSuffix(v, v)).
Assume that NextSuffix is invoked p times in the computation. Let S_T be the set of invocations
of NextSuffix which do not call NextSuffix recursively. Let p_T = |S_T|. Let S_F be the
set of invocations of NextSuffix which do call NextSuffix recursively. Let p_F = |S_F|. Each
element of S_F can directly call at most |Σ| elements of S_T. So, p_T ≤ p_F |Σ|. From lines
4-6 in NextSuffix(current, v), each element of S_F yields at least one distinct conflict from
its call to ShadowSearch. Thus, p_F ≤ k_v. So, p = p_T + p_F ≤ (|Σ| + 1)k_v. The
cost of execution of NextSuffix without including the costs of recursive calls to NextSuffix
and ShadowSearch is O(|Σ|) (= O(1)) as there are at most |Σ| suffix edges leaving a vertex.
So, the total cost of execution of all invocations of NextSuffix spawned by NextSuffix(v, v),
without including the cost of recursive calls to ShadowSearch, is O(p|Σ|) = O(k_v).
Next, we consider the calls to ShadowSearch that were spawned by NextSuffix(v, v). Let
T_A be the set of invocations of ShadowSearch which do not call ShadowSearch recursively.
Let q_A = |T_A|. Let T_B be the set of invocations of ShadowSearch which do call ShadowSearch
recursively. Let q_B = |T_B|. We have q_A ≤
(|Σ| + 1)q_B + |Σ|p. From the algorithm, each element of T_B yields a distinct conflict. So, q_B ≤ k_v.
The cost of execution of a single call to ShadowSearch
without including the cost of executing recursive calls to ShadowSearch is O(1) +
O(complexity of ListConflicts of line 9) + O(Σ_{i=1..w}(complexity of ListConflicts of line 20
in the ith iteration of the while loop)), where w denotes the number of iterations of the
while loop. The complexity of ListConflicts is proportional to the number of conflicts
it reports. Since ListConflicts always yields at least one distinct conflict, the complexity
of ShadowSearch is O(1 + the number of conflicts it reports). Summing over all calls to ShadowSearch spawned by
NextSuffix(v, v), we obtain O(q_A + q_B + k_v) = O(k_v). Thus, the total complexity of Algorithm C
is O(n + Σ_v k_v) = O(n + k_p). 2
3.4 Alternative Algorithms
In this section, an algorithm for computing all conflicts (i.e., both subword and prefix-suffix
conflicts) is presented. This solution is relatively simple and has competitive run times.
However, it lacks the flexibility required to efficiently solve many of the problems listed
in Sections 4, 5, and 6 . The algorithm (Algorithm D) is presented in Figure 12. Step 1
computes a list of all occurrences of all displayable entities in S. This list is obtained by first
computing the lists of occurrences corresponding to each vertex of V (S) (except the source
and the sink) and then concatenating these lists. Each occurrence is represented by its start
and end positions. Step 2 sorts the list of occurrences obtained in step 1 in increasing order
of their start positions. Occurrences with the same start positions are sorted in decreasing
order of their end positions. This is done using radix sort. Step 3 computes for the i'th
occurrence, occ i , all its prefix suffix conflicts with occurrences whose starting positions are
greater than its own, and all its subword conflicts with its subwords. occ i is checked against
occ i+1 , occ i+2 , . . . for conflict. Here, c is the smallest integer for which there is no
conflict between occ i and occ i+c . The start position of occ i+c is greater than the ending
position of occ i . The start position of occ j (j > i + c) will also be greater than the
end position of occ i , since the list of occurrences was sorted on increasing order of start
positions. The start positions of occ i+1 , ..., occ i+c-1 are greater than or equal to the start
positions of occ i but are less than or equal to its end position. Those occurrences among
occ i+1 , ..., occ i+c-1 whose start positions are equal to that of occ i have end positions that
are smaller (since occurrences with the same start position are sorted in decreasing order
of their end positions). The remaining conflicts of occ i (i.e., subword conflicts with its
superwords, prefix suffix conflicts with occurrences whose start positions are less than that
of occ i ) have already been computed in earlier iterations of the for statement in Algorithm
D.
For example, let the input to step 3 be the following list of ordered pairs:((1,6), (1,3),
(1,1), (2,2), (3,8), (3,5), (4,6), (5,8), (6,10)), where the first element of the ordered pair
denotes the start position and the second element denotes the end position of the occurrence.
Consider the occurrence (3,5). Its conflicts with (1,6), (1,3), and (3,8) are computed in
iterations 1, 2, and 5 of the for loop. Its conflicts with (4,6) and (5,8) are computed in
iteration 6 of the for loop.
Theorem 4 Algorithm D takes O(n + k) time, where k is the total number of conflicts in S.
Proof Step 1 takes O(n + o) time, where o is the number of occurrences of displayable
entities of S. Step 2 also takes O(n + o) time, since o elements are to be sorted using radix
sort with n buckets. Step 3 takes O(o + k) time: the for loop executes O(o) times; each
iteration of the while loop yields a distinct conflict. So, the total complexity is O(n+o+k).
We now show that o = O(n + k). Let o1 be the number of occurrences not involved in a
conflict. Then o1 = O(n). Let o2 be the number of occurrences involved in at least one conflict.
A single conflict occurs between two occurrences. So 2k ≥ o2, and o = o1 + o2 = O(n + k). 2
Algorithm D can be modified so that the size of the output matches the compacted representation of subword conflicts. This may be
achieved by checking whether an occurrence is the first representative of its pattern in the
for loop of step 3. The subword conflicts are only reported for the first occurrence of the
pattern. However, the time complexity of Algorithm D remains O(n + k). In this sense, it
is suboptimal.
Algorithm D
Step 1: Obtain a list of all occurrences of all displayable entities in the string. This list is obtained
by first computing the lists of occurrences corresponding to each vertex of the scdawg (except the
source and the sink) and then concatenating these lists.
Step 2: Sort the list of occurrences using the start positions of the occurrences as the primary key
(increasing order) and the end position as the secondary key (decreasing order). This is done using radix sort.
Step 3:
for i := 1 to (number of occurrences) do
begin
   j := i + 1;
   while (j <= number of occurrences) and (start(occ j) <= end(occ i)) do
   begin
      if end(occ j) <= end(occ i)
         then occ i is a superword of occ j
         else occ i and occ j have a prefix-suffix conflict;
      j := j + 1;
   end
end
Figure
12: A simple algorithm for computing conflicts
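To make the scan of step 3 concrete, a small Python sketch is given below. It is only an illustration of the idea, not the paper's own code: the occurrence list and the two reporting routines (report_subword, report_prefix_suffix) are placeholder names introduced here.

# Sketch of Algorithm D, steps 2-3.  occs is a list of (start, end, pattern)
# triples produced by step 1; report_* are placeholder callbacks.
def classify_conflicts(occs, report_subword, report_prefix_suffix):
    occs.sort(key=lambda o: (o[0], -o[1]))          # the paper uses radix sort here
    for i in range(len(occs)):
        s_i, e_i, _ = occs[i]
        j = i + 1
        while j < len(occs) and occs[j][0] <= e_i:  # overlapping occurrences
            s_j, e_j, _ = occs[j]
            if e_j <= e_i:
                report_subword(occs[i], occs[j])        # occ_j lies inside occ_i
            else:
                report_prefix_suffix(occs[i], occs[j])  # proper overlap
            j += 1

The scan stops at the first occurrence whose start position exceeds end(occ_i), which corresponds to the constant c in the text.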
4 Size Restricted Queries
Experimental data show that random strings contain a large number of displayable entities
of small length. In most applications, small displayable entities are less interesting than
large ones. Hence, it is useful to list only those displayable entities whose lengths are greater
than some integer, k. Similarly, it is useful to report exactly those conflicts in which the
conflicting displayable entities have length greater than k. This gives rise to the following
problems:
P1: List all occurrences of displayable entities whose lengths are greater than k.
P2: Compute all prefix suffix conflicts involving displayable entities of length greater than
k.
P3: Compute all subword conflicts involving displayable entities of length greater than k.
The overlap of a conflict is defined as the string common to the conflicting displayable
entities. The overlap of a subword conflict is the subword displayable entity. The overlap of
a prefix-suffix conflict is its intersection. The size of a conflict is the length of the overlap.
An alternative formulation of the size restricted problem which also seeks to achieve the
goal outlined above is based on reporting only those conflicts whose size is greater than
k. This formulation of the problem is particularly relevant when the conflicts are of more
interest than the displayable entities. It also establishes that all conflicting displayable
entities reported have size greater than k. We have the following problems:
P4: Obtain all prefix-suffix conflicts of size greater than some integer k.
P5: Obtain all subword conflicts of size greater than some integer k.
P1 is solved optimally by invoking Occurrences(S, v, 0) for each vertex, v, in V(S), where
|de(v)| > k. A combined solution to P2 and P3 uses the approach of Section 3.4. The only
modification to the algorithm of Figure 12 is in step 1 which now becomes:
Obtain all occurrences of displayable entities whose lengths are greater than k.
The resulting algorithm is optimal with respect to the expanded representation of subword
conflicts. However, as with the general problem, it is not possible to obtain separate optimal
solutions to P2 and P3 by using the techniques of Section 3.4. An optimal solution to P4 is
obtained by executing line 3 of Algorithm C of Figure 10 for only those vertices, v, in V (S)
which have jde(v)j ? k. An optimal solution to P5 is obtaind by the following modification
to Algorithm B of Figure 6:
(i) Right extension or suffix extension edges incident from vertices representing displayable
entities of length at most k are marked "disabled".
(ii) The definition of SG(S, v) is modified so that SG(S, v), v ∈ V(S), is defined as the
subgraph of SCD(S) which consists of the set of vertices, SV(S, v) ⊆ V(S), which represent
displayable entities of length greater than k that are subwords of de(v), and the set of all right extension
and suffix extension edges that connect any pair of vertices in SV(S, v).
Algorithm B is modified. The modified algorithm is shown in Figure 13.
We note that P3 and P5 are identical, since the overlap of a subword conflict is the same
as the subword displayable entity.
Algorithm B
2 for each vertex, v, in SCD(S) do
4 for each vertex, v, in SCD(S) such that |de(v)| > k do
5 for all vertices, u, such that a non-disabled right or suffix extension edge, <u, v>, exists do
7 for each vertex, v, in SCD(S) such that v ≠ sink and v.subword is true do
9 end
Figure
13: Modified version of algorithm B
5 Pattern Oriented Queries
These queries are useful in applications where the fact that two patterns have a conflict is
more important than the number and location of conflicts. The following problems arise as
a result:
P6: List all pairs of displayable entities which have subword conflicts.
P7: List all triplets of displayable entities (D 1 ,D 2 ,Dm ) such that there is a prefix suffix
conflict between D 1 and D 2 with respect to Dm .
P8: Same as P6, but size restricted as in P5.
P9: Same as P7, but size restricted as in P4.
P6 may be solved optimally by reporting for each vertex v in V (S), where v does not
represent the sink of SCD(S), the subword displayable entities of de(v), if any. This is
accomplished by reporting de(w), for each vertex w, w 6= source, in SG(S; v). P7 may also
be solved optimally by modifying procedure ListConflicts of Figure 11 so that it reports
the conflicting displayable entities and their intersection. P8 and P9 may also be solved by
making similar modifications to the algorithms of the previous section.
6 Statistical Queries
These queries are useful when conclusions are to be drawn from the data based on statistical
facts. Let f(D) denote the frequency (number of occurrences) of D in the string and
rf(D 1 , D 2 ) the number of occurrences of displayable entity D 1 in displayable entity D 2 .
The following queries may then be defined.
P10: For each pair of displayable entities, D 1 and D 2 , involved in a subword conflict (D 1
is the subword of D 2 ), obtain p(D 1 , D 2 ) = (number of occurrences of D 1 which occur as
subwords of D 2 )/f(D 1 ).
P11: For each pair of displayable entities, D 1 and D 2 , involved in a prefix-suffix conflict,
obtain q(D 1 , D 2 ) = (number of occurrences of D 1 which have prefix-suffix conflicts with D 2 )/f(D 1 ).
If p(D 1 , D 2 ) (or q(D 1 , D 2 )) is greater than a statistically determined threshold, then the
following could be said with some confidence: Presence of D 1 implies Presence of D 2 .
Let psf(D 1 , D 2 , Dm ) denote the number of prefix-suffix conflicts between D 1 and D 2 with
respect to Dm and psf(D 1 , D 2 ), the number of prefix-suffix conflicts between D 1 and D 2 .
We can approximate p(D 1 , D 2 ) by rf(D 1 , D 2 )/f(D 1 ). The two quantities are
identical unless a single occurrence of D 1 is a subword of two or more distinct occurrences
of D 2 . Similarly, we can approximate q(D 1 , D 2 ) by psf(D 1 , D 2 )/f(D 1 ). The two quantities
are identical unless a single occurrence of D 1 has prefix-suffix conflicts with two or more
distinct occurrences of D 2 . f(D 1 ) can be computed for all displayable entities in SCD(S)
in O(n) time by a single traversal of SCD(S) in reverse topological order.
rf(D 1 , D 2 ) can be computed optimally for all D 1 , D 2 , by modifying procedure GetSubwords(v) as shown
in
Figure
14.
psf(D 1 , D 2 , Dm ) can be computed optimally, for all D 1 , D 2 , and Dm , where D 1 has a prefix-suffix
conflict with D 2 with respect to Dm , by modifying ListConflicts(u, z, w) of Figure 11
so that it returns f(de(u)), since this is the number of conflicts between de(w) and de(z)
with respect to de(v). psf(D 1 , D 2 ) may be calculated by summing psf(D 1 , D 2 , Dm ) over all
intersections, Dm , of prefix-suffix conflicts between D 1 and D 2 . p(D 1 , D 2 ) and q(D 1 , D 2 )
may be computed by simple modifications to the algorithms used to compute rf(D 1 , D 2 )
Procedure GetSubwords(v)
5 for each vertex, x (6= source), in reverse topological order of SG(S; v) do
6 begin
7 if de(x) is a suffix of de(v) then rf(de(x);
8 for each vertex, w,in SG(S; v) on which an re edge, e from x is incident do
9
Figure
14: Modification to GetSubwords(v) for computing relative frequencies
and psf(D 1 , D 2 ). These problems may be solved under the size restrictions of P4 and P5
by modifications similar to those made in Section 4.
7 Experimental Results
Algorithms B (Section 3.2), C (Section 3.3), and D (Section 3.4) were programmed in GNU
C++ and run on a SUN SPARCstation 1. For test data we used 120 randomly generated
strings. The alphabet size was chosen to be one of f5, 15, 25, 35g and the string length was
500, 1000, or 2000. The test set of strings consisted of 10 different strings for each of the
possible combinations of input size and alphabet size. For each of these combinations,
the average run times for the 10 strings is given in Figures 15-18.
Figure
15 gives the average times for computing all conflicts by combining algorithms
B and C.
Figure
gives the average times for computing all prefix-suffix conflicts using
Algorithm C. Figure 17 gives the average times for computing all the pattern restricted
prefix-suffix conflicts (problem P7 of Section 5) by modifying Algorithm C as described in
Section 5. Figure 18 represents the average times for Algorithm D.
Figures
15 to 17 represent the theoretically superior solutions to the corresponding prob-
lems, while Figure represents Algorithm D which provides a simpler, but suboptimal,
Size of Alphabet \ Size of String:   500   1000   2000
Figure
15: Time in ms for computing all conflicts using the optimal algorithm
solution to the three problems. In all cases the time for constructing scdawgs and writing
the results to a file were not included as these steps are common to all the solutions.
The results show that the suboptimal Algorithm D is superior to the optimal solution for
computing all conflicts or all prefix-suffix conflicts for a randomly generated string. This is
due to the simplicity of Algorithm D and the fact that the number of conflicts in a randomly
generated string is small. However, on a string such as a 100 which represents the worst case
scenario in terms of the number of conflicts reported, the following run times were obtained:
All conflicts, optimal algorithm: 14,190 ms
All prefix-suffix conflicts, optimal algorithm: 10,840 ms
All pattern restricted prefix-suffix conflicts, optimal algorithm: 5,000 ms
Algorithm D: 26,942 ms
The experimental results using random strings also show that, as expected, the optimal
algorithm fares better than Algorithm D for the more restricted problem of computing
pattern oriented prefix-suffix conflicts.
We conclude that Algorithm D should be used for the more general problems of computing
conflicts while the optimal solutions should be used for the restricted versions. Hence,
Algorithm D should be used in an automatic environment, while the optimal solutions
should be used in interactive or semi-automatic environments.
Size of Alphabet \ Size of String:   500   1000   2000
Figure
Time in ms for computing all prefix suffix conflicts using the optimal algorithm
Size of Alphabet \ Size of String:   500   1000   2000
Figure
17: Time in ms for computing all pattern restricted prefix suffix conflicts using the
optimal algorithm
Size of Alphabet \ Size of String:   500   1000   2000
Figure
18: Time in ms for algorithm D
8 Conclusions
In this paper, we have described efficient algorithms for the analysis and visualization of
patterns in strings. We are currently extending these to other discrete objects such as
circular strings and graphs. Extending these techniques to the domain of approximate
string matching would be useful, but appears to be difficult.
--R
"String Visualization,"
"Sequence Landscapes,"
"The Matching of Protein Sequences using Color Intrasequence Homology Displays,"
"Complete Inverted Files for Efficient Text Retrieval and Analysis,"
"The Smallest Automaton Recognizing the Subwords of a Text,"
"Efficient on-line construction and correction of position trees,"
"A space-economical suffix tree construction algorithm,"
"Efficient and elegant subword tree construction,"
Fundamentals of Data Structures in Pascal
--TR
Transducers and repetitions
Complete inverted files for efficient text retrieval and analysis
The matching of protein sequences using color intrasequence homology displays
Models and techniques for the visualization of labeled discrete objects
A Space-Economical Suffix Tree Construction Algorithm
Fundamentals of Data Structures in Pascal
A Data Structure for Circular String Analysis and Visualization
--CTR
Hsuan T. Chang , Neng-Wen Lo , Wei C. Lu , Chung J. Kuo, Visualization and comparison of DNA sequences by use of three-dimensional trajectories, Proceedings of the First Asia-Pacific bioinformatics conference on Bioinformatics 2003, p.81-85, February 01, 2003, Adelaide, Australia | DNA sequences;display conflicts;data visualisation;directed graphs;numerical sequences;string visualization |
626911 | Assignment of Task Modules in Hypercube Multicomputers with Component Failures for Communication Efficiency. | The problem of assigning task modules within a hypercube multicomputer with possible link failures is investigated. A concept of indirect optimization is introduced and a function, called communication traffic, is proposed as the objective of optimization. The assignments obtained from optimizing this function are shown to significantly improve the actual communication performance measure, called communication turnaround time, over random assignments. | Introduction
While the abundance of nodes in a hypercube multicomputer allows for executing tasks
that require a large number of nodes, inter-node communication is still a major bottleneck
in achieving the overall speedup. To achieve communication efficiency, considerable efforts
have been made to improve routing algorithms and switching mechanisms, which are basically
concerned with system-level implementations. Communication efficiency must also be
improved on a per-task basis by exploiting the communication locality among task modules.
To assign task modules for an "optimal" performance, the run-time behavior of these
modules must be known a priori to some extent. However, as stated in the Halting Problem
in computing theory, there is no way to predict the exact run-time behavior of a program
before it is actually executed. In case of distributed computation, it is also very difficult
to predict the timing of communication events before a set of task modules are actually
executed.
In the graph-mapping approach (e.g., [3]) the timing aspects of module communication
are ignored, and a simple objective function is proposed for optimization. It is generally
The work reported in this paper was supported in part by the Office of Naval Research under grants N00014-
85-K-0122 and N00014-91-J-1115. Any opinions, findings, and recommendations in this paper are those of
the authors and do not reflect the views of the ONR.
difficult to relate this objective to any of well-known performance measures, such as task
execution time. By contrast, any more complicated approach (e.g., [5]) requires a substantial
amount of knowledge of the run-time behavior of task modules, which may not be available
unless the task is tested thoroughly beforehand.
Our primary goal in this paper is to optimize communication performance. We use a relatively
simple objective function and verify (with simulations) that optimizing this function
actually leads to better communication performance, especially for assigning communication-
bound tasks. Focusing on communication performance differentiates our work from others'
related to more generic aspects of task assignment. Taking a communication-oriented approach
to the task assignment problem is hardly a limitation, since inter-node communication
is of the utmost importance to the performance and fault-tolerance of any distributed
system.
This paper is organized as follows. In Section 2, we present the basic system model
and assumptions used. Our problem is also formally stated there. In Section 3, the NP-hardness
of minimizing communication traffic is stated first in order to justify the use of
heuristic algorithms. Several heuristic algorithms are then used to find good suboptimal
solutions. These algorithms are tested extensively for various inputs to assess the quality
of the assignments obtained from them. We then simulate these algorithms to verify the
actual quality of the assignments found by minimizing communication traffic. The effects
of inaccuracy in describing the task behavior are also discussed there. Section 4 deals with
the case where an alternative fault-tolerant routing scheme is used. The paper concludes
with Section 5.
Preliminaries
The communication volume between each pair of modules is expressed in number of
packets to be exchanged between them. A message may be composed of a number of
packets. Inter-module communications are assumed to be accomplished via message passing.
A message is routed from the source to the destination via a fault-free shortest path under
circuit or message switching.
Since most existing hypercubes do not support a per-node multi-programming environ-
ment, it is assumed that at most one module is assigned to a node, i.e., the mapping between
nodes and modules is one-to-one. For a task with M modules such that 2^(n-1) < M ≤ 2^n for
some integer n, one can add some "dummy" modules and make it a task with 2^n modules.
So, we will henceforth assume M = 2^n, where n is the dimension of the subcube allocated
by the host to execute the task, and thus, the mapping of modules into subcube nodes is
one-to-one and onto.
For a network of nodes, we define a communication event between modules (CEBM)
as an instance that a module needs to send a message to another module, while defining
a communication event between nodes (CEBN) as an instance of a node needing to send
a message to some other node. In circuit switching, these two are indistinguishable. In
message switching, however, a single CEBM can become several CEBNs. For example, when
a pair of modules reside in two different nodes which are 2 hops apart, in circuit switching a
CEBM from one module to the other is just a CEBN from one node to the other node. For
message switching, however, this CEBM becomes two CEBNs: one from the source to the
intermediate node, and the other from the intermediate node to the destination. We said
there is an outstanding CEBN if a message is to be sent by a node. An outstanding CEBN
is said to be processed if it is sent from the source node to a neighboring destination node.
An outstanding CEBN may not be processed immediately due to the limited link resources
available. A CEBN is said to be blocked if it is not processed immediately.
The goodness of a task assignment for hypercubes is measured by the communication
turnaround time (CTT), which is the time span from the the first CEBN becoming outstanding
to all CEBNs being processed. As an illustrative example, in Fig. 1 we have a
simple network of 4 nodes with 3 CEBMs. The status of each link during the execution
under both circuit and message switching is shown in this figure. Note that the computation
time needed is invariant among different assignments, since at most one module is assigned
to each node. Therefore, CTT is the main source of difference in the completion time of a
task.
CTT cannot be easily described as a mathematical function, and the exact value of CTT
depends largely on the timing of communication events, thus making it impractical to use
any direct optimization of CTT. However, as we shall see, for communication-bound tasks,
minimizing a certain simple function can usually minimize CTT.
The communication cost in executing a set of task modules is defined as the sum of
time units during which links are kept busy with the messages among these modules. In
other words, it is a measure of the total communication resources used by an instance of
task execution measured in time units.
Suppose c(h) is the number of time units links are kept busy with a packet sent over a
path of h hops. The sum of time units links are kept busy for related purposes other than
packet transmission - such as establishing a connection - is assumed to be negligible.
For message-switched hypercubes, c(h) = h · c(1); this relation may not be accurate
for circuit-switched hypercubes. However, if the "call request" signal to hunt for a free
path occupies each link only for a very short time, then this expression would be a good
approximation even for circuit-switched hypercubes.
By defining c(1) as a unit of communication traffic (i.e., the link usage by one packet
traversing one link), the communication traffic resulting from executing a task under assignment
a becomes:
k(a) = Σ i · N i (a), where the sum is over 1 ≤ i ≤ n and N i (a) is the number of packets traversing over i
links. One can easily see that cost com (a) ∝ k(a).
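As an illustration, the following Python sketch (with hypothetical names) computes k(a) directly from the communication volume matrix and a precomputed distance table; it assumes the distances already reflect the fault-free routing paths used for the task.

# Sketch: communication traffic k(a) of an assignment, assuming
# U[i][j] = packets sent from module i to module j, and
# D[x][y] = length of the routing path from node x to node y.
def communication_traffic(assignment, U, D):
    k = 0
    M = len(U)
    for i in range(M):
        for j in range(M):
            if i != j and U[i][j] > 0:
                k += U[i][j] * D[assignment[i]][assignment[j]]
    return k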
In both type of switching, communication traffic is proportional to the total link occupation
time, two communicating modules placed far apart will require more communication
resources, and there is a higher possibility that some other instances of communication will
be blocked and/or delayed, which in turn leads to an increase of CTT. Therefore, reduction
of communication traffic is crucial to the CTT associated with communication-bound tasks.
When introducing the notion of communication cost and communication traffic, we deliberately
avoided the low-level timing details. We only consider the total number of packets
to be sent/received between a pair of task modules during the whole mission time, thus allowing
for a simple objective function that can be translated into a simple combinatorial
optimization problem.
The following notation will be used throughout the paper:
n: the dimension of a subcube available for executing the task under consideration.
U: an M × M communication volume matrix, where U ij is the communication volume
from m i to m j expressed in number of packets, and M is the number of task modules.
As mentioned earlier, we will assume M = 2^n unless specified otherwise. Note that U ii = 0,
since a module does not send messages to itself.
a = (a 1 , ..., a M ): a vector denoting an assignment, the i-th component of which represents
the fact that m i is assigned to a node whose address is a i .
D(n i , n j ): the distance (i.e., the length of a shortest path) between node n i and node n j ,
and is dependent upon the routing algorithm used. For now, we will assume D(n i , n j ) = D(n j , n i ). (The
case where D(n i , n j ) can be different from D(n j , n i )
will be discussed in Section 4.) Before making a module assignment, D(n i , n j ) are
calculated for a subcube assigned to the task under consideration with a shortest-path
routing algorithm. Note that the distance between a pair of nodes may be greater
than their Hamming distance and depends on the number of faulty links and the
routing scheme used.
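For example, under the fixed shortest-path routing assumed here, the distances D(n i , n j ) of an injured subcube can be precomputed by a breadth-first search that ignores faulty links. The Python sketch below is one possible way to do this; the representation of faulty links as a set of node pairs is an assumption of the sketch, not something prescribed by the paper.

# Sketch: shortest-path distances in an n-cube after removing faulty links.
# faulty is a set of frozenset({x, y}) pairs of adjacent nodes whose link is down.
from collections import deque

def distances(n, faulty):
    N = 1 << n
    D = [[None] * N for _ in range(N)]
    for s in range(N):
        D[s][s] = 0
        q = deque([s])
        while q:
            x = q.popleft()
            for b in range(n):
                y = x ^ (1 << b)
                if frozenset((x, y)) in faulty or D[s][y] is not None:
                    continue
                D[s][y] = D[s][x] + 1   # one more hop along a fault-free link
                q.append(y)
    return D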
Optimization Algorithms and Performance Evaluation
Although the objective function we proposed is simple in nature, optimizing it is still a
difficult problem, as formally stated in the following theorem.
Theorem 1: Given an M × M task communication volume matrix U,
it is NP-hard to find an optimal mapping a of an M-module task onto an n-dimensional
fault-free hypercube.
The theorem can be proved by restricting to the fault-free hypercube embedding problem
discussed in [4]. The proof is presented in [6] and will not be repeated here.
Thus, there is no known polynomial-time algorithm to find an optimal mapping/assignment.
Note that minimizing CTT, rather than communication traffic itself, is our ultimate goal.
As we shall see, good heuristic algorithms will suffice in most situations. An optimal solution
that minimizes communication traffic is usually computationally expensive, and may
only improve slightly over fast algorithms in terms of minimizing CTT, our actual objective.
One simple greedy heuristic which has been tested to work well in fault-free cases [6]
is given below. Consider each task as a weighted graph with vertices representing modules
and edge weights representing communication volumes. For any two nodes x and y under
the shortest-path routing, D(x; Therefore, it is sufficient to use an undirected
graph with as the weight on the edge connecting m i and m j . We want to find
a Hamiltonian cycle in this task graph with as high a total edge-weight as possible, and
then embed this cycle into a Hamiltonian cycle in the hypercube. A Hamiltonian cycle in
a fault-free hypercube can be easily found with Gray-code enumeration.
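For reference, a minimal Python sketch of the binary reflected Gray code, which yields such a Hamiltonian cycle in a fault-free n-cube, is:

# Sketch: the binary reflected Gray code lists all 2**n node addresses so that
# consecutive entries (and the last and first) differ in exactly one bit,
# i.e., it traces a Hamiltonian cycle of the fault-free n-cube.
def gray_cycle(n):
    return [i ^ (i >> 1) for i in range(1 << n)]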
In an injured hypercube with faulty links, however, there may not be any Hamiltonian
cycle available for embedding. So, we define a weighted relaxed (WR) Hamiltonian cycle in
an injured hypercube (with no disconnected node) as a relaxed version of Hamiltonian cycle,
such that two nodes x and y can be linked in the cycle via a virtual edge which may be a
path from x to y through some intermediate nodes. The weight on each virtual edge of the
cycle is the number of physical edges on it. The greedy algorithm embeds the Hamiltonian
cycle in the task graph with the maximum weight (found by a greedy approach) into the
minimum-weight WR Hamiltonian cycle in the injured hypercube (also found via a greedy
approach).
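A rough Python sketch of the greedy step for the fault-free case (where the WR cycle reduces to the Gray-code cycle) is shown below. The nearest-neighbor edge selection is only one plausible greedy rule, used here for illustration, and is not claimed to be the exact heuristic evaluated in the experiments.

# Sketch: greedily build a heavy Hamiltonian cycle on the task graph
# (weights w[i][j] = U[i][j] + U[j][i]) by nearest-neighbor extension,
# then map it onto the Gray-code cycle of the fault-free n-cube.
def greedy_assignment(w, n):
    M = len(w)
    cycle = [0]
    unused = set(range(1, M))
    while unused:
        last = cycle[-1]
        nxt = max(unused, key=lambda j: w[last][j])  # heaviest edge out of 'last'
        cycle.append(nxt)
        unused.remove(nxt)
    nodes = gray_cycle(n)                 # from the earlier sketch
    return {module: nodes[pos] for pos, module in enumerate(cycle)}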
Two other (more complex) heuristic algorithms are also implemented and tested: a
bottom-up approach algorithm similar to the one proposed in [3], and a top-down approach
proposed in [2]. Both of these algorithms are modified to handle cases with broken links. A
third non-deterministic approach using the simulated annealing method is also implemented
and tested, where 2-opting is used as the perturb function.
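A schematic Python version of this approach is sketched below; the temperature schedule and step count are illustrative assumptions, and communication_traffic refers to the earlier sketch.

# Sketch: simulated annealing over assignments, using a 2-opt style
# perturbation (swap the nodes assigned to two modules).
import math, random

def anneal(assignment, U, D, T0=1.0, alpha=0.995, steps=20000):
    cur = list(assignment)
    cost = communication_traffic(cur, U, D)   # from the earlier sketch
    T = T0
    for _ in range(steps):
        i, j = random.sample(range(len(cur)), 2)
        cur[i], cur[j] = cur[j], cur[i]
        new_cost = communication_traffic(cur, U, D)
        if new_cost <= cost or random.random() < math.exp((cost - new_cost) / T):
            cost = new_cost                    # accept the swap
        else:
            cur[i], cur[j] = cur[j], cur[i]    # undo the swap
        T *= alpha
    return cur, cost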
To compare the quality of the assignments found by these algorithms with respect to
communication traffic, we simulated these algorithms using input tasks with randomly
generated communication volumes among their modules.
Each algorithm was executed for 1000 randomly-generated tasks where U ij 's are characterized
by a normally-distributed random variable with mean μ and variance σ². Changing
the value of μ is found to have little effect on the relative performance of assignments found
with different algorithms as long as the ratio σ/μ remains constant. It is also found that, as
σ/μ approaches zero, the difference in communication traffic between random assignments
and those assignments found with the above three algorithms gets smaller, while the difference
gets larger as σ/μ increases. This is consistent with the fact that when all U ij 's are
identical, all assignments will lead to an identical communication traffic, and all assignment
algorithms will perform identically.
For the input tasks used to obtain the plots in Fig. 2, U ij 's are characterized with
and the horizontal axis depicts the number of faulty links while the vertical axis
represents communication traffic. In this figure, "A1" represents the greedy algorithm,
while "A2" represents the communication traffic achieved with either top-down or bottom-up
algorithms, whichever yields smaller communication traffic. This is to enhance the
readability of the plots since the performance of the top-down and bottom-up algorithms
turns out to be very close to each other.
It can be seen from the above result that the greedy approach performs surprisingly
well. Complex (i.e., top-down and bottom-up) approaches outperform the simple greedy
approach only by a small margin. Furthermore, as the number of faulty links increases,
the gap between the two curves gets narrower. This can be explained by the fact that
both the top-down and bottom-up approaches are best suited for fault-free (thus regular)
hypercubes. For hypercubes with faulty links, the interconnection structure is no longer
symmetric or regular. In such a case, the partitioning mechanism in the top-down approach
and the combining mechanism in the bottom-up approach must use less accurate heuristic
decisions, hence degrading the performance.
The simulated annealing approach ("A3"), on the other hand, has shown more consistent
performances. Its advantages over other algorithms become more pronounced as the cube
size and the number of link failures increase. Therefore, we can conclude that this approach
is more adaptable to irregular structures.
In
Table
1, we show the relative timings of various algorithms used. The algorithms are
tested on a DEC 5000 workstation running Ultrix operating system. Though we have only
shown the performance data for problem size of 16, the relative performances
of different algorithms are found to be consistent at least up to the problem size of
To demonstrate why minimizing communication traffic can be effective, we also need to
compare the CTTs of those assignments found with different algorithms. Our simulation
model for this purpose is described below.
Timing: A time unit is selected as the time required to send a packet over a single communication
link.
Routing algorithm and mechanism:
ffl Link failures are detected before task assignment and execution. Each message is
routed through a fault-free shortest path determined prior to the execution of this
task. We assume there are no additional link failures during the execution of this
task.
ffl Under message switching, the routing mechanism at an intermediate node on a path
will take a certain amount of time to forward a message from one link to the next.
We assume this time to be relatively small and absorbed into the length of the corresponding
message.
ffl The propagation delay on a communication path is assumed to be negligible.
Task communication behavior:
ffl T , given for each task, denotes the time span between the arrival time of the first
and the last CEBMs. The arrival times of CEBMs are uniformly distributed in
Hence, for a given task assignment, a larger T represents the task being more
computation-bound, while a smaller T represents the task being more communication-
bound.
ffl L msg denotes the maximum message length measured in number of packets. The communication
volume between each pair of modules is randomly grouped into messages
of lengths within [1; L msg ].
Message scheduling and queueing: If a link is busy when it is to be used for transmitting
an incoming message, the message is stored in a FIFO queue at the source end of the link.
When more than one message requests the use of the same link at a time, one of them is
randomly chosen to use the link. This selection procedure is repeated until all requests are
honored.
The goal of our simulation is to comparatively evaluate the goodness of different assignments
under the same execution environment, but not to compare the performance of
different system implementations. So, the simulation results should not be used to determine
the relative performance of different switching methods or routing algorithms.
The assignments found are fed into an event-driven simulator to evaluate their performance
in a close to real-world environment. The results are plotted in Fig. 3 for message
switching systems. Input tasks used here are the same as those used for Fig. 2. We set
circuit-switched hypercubes are found to be similar
in most situations and thus are not presented.
The effects of changing T under the same assignment for a given task are shown in
Table
2 for message switching without link failures. The results are found to be similar to
those under circuit switching. For the cases of
T in the range [10; 300] does not have any significant impact on the relative performance of
assignments found with different algorithms. The assignments found with all of the above
algorithms have shown substantial improvements over random assignments 8 T 2 [10; 300].
This is because the network gets saturated with messages when
In case of the network becomes less congested at T - 160 and the
differences of CTTs among different assignment algorithms start to diminish. So, we can
conclude that minimizing communication traffic yields a peak improvement when the task
to be assigned is communication-bound and the communication network may become highly
congested during the execution of this task. For the T value which results
in small performance differences is approximately 750, while for it is about
2; 250. However, when T is relatively small and the network is not near saturation, the
difference in message queue length can be made smaller by using the assignments obtained
from the minimization of communication traffic. Depending on system implementation, the
performance of a node may also be influenced by the length of message queue it has to
maintain.
The effects of changing L msg are more subtle than changing T . Generally, shorter
message lengths result in better performances in circuit-switched hypercubes, while for
message-switched hypercubes, changing the message length does not affect system performance
notably if the overall communication traffic is fixed.
Our simulation results have indicated that different switching techniques do not matter
much to system performance for communication-bound tasks. Circuit switching is shown
to have only a slightly better performance than message switching for the same task as-
signments. However, as mentioned earlier, the actual performance will depend on system
implementation, and thus, the simulation results should not be used to compare the effectiveness
of the two switching methods.
When the number of faulty links grows within our preset range (i.e., less than one
third of all links), CTT also increases. For smaller hypercubes, such as introducing
even one more faulty link can make a significant difference in CTT. This effect gets more
pronounced when the number of link failures becomes larger, as one can see in Fig. 3. As
the cube size increases, there will be more fault-free links, hence making lesser impacts of
a single link failure on system performance.
Though the proposed assignment scheme requires only minimal information of run-time
task behaviors, we still need the communication matrix to assign a task. It is obvious that
unless the task has been fully tested and each message length is exactly calculated, the
entries in the communication matrix cannot be absolutely accurate. To study the effects
of an inaccurate communication matrix, we repeated the simulation for evaluating CTT
while introducing uncertainties in the communication matrix. In Fig. 4, the input tasks
are essentially the same as those in Fig. 3, but there is a maximum of 20% error in each
during an instance of actual task execution, the number of packets exchanged
From Fig. 4, one can see that inaccuracies in U ij 's affect
communication performance, especially when the cube size and number of link failures are
large. However, when the number of link failures is less than one sixth of all links, the
overall performances of various assignment algorithms are still quite close to those in the
case with exact U ij 's.
4 An Alternative Routing Algorithm
Thus far, we have assumed that the hypercube is implemented with a routing scheme
which routes messages from the source to the destination via fixed, shortest paths determined
before the execution of each task. However, there are several practical problems
with this assumption. For instance, all faulty links must be known before making a task
assignment, which may not always be possible. Also, if additional link failures occur after
the assignment, the execution of the task may become unsuccessful.
To overcome these problems, we must use a routing algorithm that is more adaptive
to system changes. For instance, the DFS routing scheme proposed in [1] is an adaptive
fault-tolerant routing algorithm which uses only a limited amount of global link status infor-
mation. Under this algorithm, the system does not require a priori link status information,
and communications can be completed even if some unexpected link failures occur during
task execution as long as all nodes involved remain connected. However, due to the adaptive
nature of the DFS routing algorithm, it is difficult to predict the length of the path used
for routing a message during task execution, especially in the presence of link failures. So,
cannot be accurately estimated, thus making it difficult to minimize the overall communication
traffic. Furthermore, under some routing scheme like the DFS routing, due to
the lack of global link status information, the length of the path chosen for communication
from node x to node y may not be the same as the one chosen for that from y to x. For exam-
ple, suppose we have a 3-cube with three broken links, 00 , 0 0, and \Lambda01. Then the length
of path chosen under the DFS routing from 000 to 111 is 3. But the path chosen to route
messages from 111 to 000 is 111 !110 !001 !101 !001 !110 !010 !011 !001 !000,
which has a length of 9. The routing schemes with this nature are said to be asymmet-
ric. In most cases, a routing scheme becomes asymmetric only in the presence of faulty
components.
Based on the above observations, one may jump to a conclusion that there is no way
to minimize the communication traffic of an assignment, and hence it will be impossible to
improve communication efficiency by appropriately placing task modules. However, as our
simulation results show below, use of the proposed objective function, even by assigning
task modules to the nodes as if there were no faulty links, can still significantly improve
communication performance over random assignments when the number of faulty links is
within a certain range.
Three assignment strategies are compared in our simulation. The first is the usual
random assignment. The second is to apply the greedy algorithm to the hypercube without
knowing which links are faulty. The third assumes perfect knowledge of link failures and
how each message will be routed during the execution. This strategy is an unrealistic,
ideal case, which gives an upper bound of performance improvement with communication
traffic, whereas the second strategy provides a lower bound. In real applications, depending
on the knowledge available during the task assignment phase, the performance should lie
somewhere between these two extremes.
Fig. 5 shows the communication traffic of the assignments under the DFS routing for
the same set of input tasks as in Fig. 2. "S1" represents the assignments found with no
knowledge of faulty links, while "S2" represents those found with complete knowledge of
faulty links and the routing paths of all messages. It can be easily seen that under the
DFS routing, the overall communication traffic is higher than the routing algorithm used
before. Nevertheless, the assignments "S1" still generate smaller communication traffic than
random assignments, though the improvement becomes insignificant as the number of faulty
links increases.
The same set of input tasks used in Fig. 3 are employed again for event-driven sim-
ulations, except that the DFS routing is used here. Since the DFS routing is designed
based on the operating principles of message switching, we only simulate the hypercubes
implemented with this switching method.
The measured CTTs of these assignments are plotted in Fig. 6. It is found that, without
knowledge of faulty links, assignments "S1" still improves over random assignments with
a margin of at least 10% when the number of faulty links is more than one eighth of the
total links. This margin increases as the number of faulty links increases, but starts to
level off when the percentage of faulty links approaches 33%. The assignments "S2" show
even larger improvements and improve over random assignments with a steadily increasing
margin as the number of link failures increases.
By comparing Fig. 6 with Fig. 3, one can see that, though the DFS routing results in an
overall higher communication traffic, it results in smaller CTTs when the number of faulty
links is relatively small. This is due to the fact that the DFS routing chooses communication
paths in a more "spread out" fashion and causes less congestion than the shortest fixed-
path scheme used before. This advantage diminishes after the number of faulty links grows
beyond one fifth of all links. When the percentage of faulty links reaches 25%, the DFS
routing begins to yield larger CTTs than the shortest path routing. This is because paths
available between nodes are becoming fewer, so messages cannot be spread out to more
paths under the DFS routing. Also, the greater communication traffic overhead of the DFS
routing starts to have dominant effects. Note, however, that implementation details will be
crucial in actual applications and these simulation results should not be used to judge the
relative merits of different routing algorithms.
Concluding Remarks
Using a simple objective function, we formulated and solved the problem of mapping
a task which is composed of multiple interacting modules into a hypercube with possible
faulty links. The goal was to optimize communication performance, measured in communication
turnaround time. Due to the difficulties in optimizing this objective directly, a
function called communication traffic is proposed. By minimizing this function, we could
find assignments with the optimal communication performance using heuristic combinatorial
techniques. Several algorithms that find assignments by minimizing communication
traffic are implemented and comparatively evaluated. The assignments found with these
algorithms are also evaluated with simulations. It has been shown that for communication-
bound tasks, they have significant improvements over random assignments with respect to
an actual communication performance measure, i.e., the communication turnaround time.
We also analyzed the case where an alternative routing algorithm like the DFS routing
is used. Our task assignment criterion is again shown to work well in this case.
Although we have focused our attention on hypercube multicomputers, the objective
function we developed can be generalized to other distributed systems with different interconnection
topologies. In fact, when we consider hypercubes with faulty links, they
are actually no longer hypercubes, but they are subgraphs of hypercubes. For systems
with other interconnection topologies, as long as they adopt message switching or circuit
switching and the length of the path chosen by the routing scheme between each pair of
nodes is known before a task assignment, our assignment criterion can be applied to these
architectures.
--R
"Depth-first search approach for fault-tolerant routing in hypercube multicomputers,"
"Task allocation onto a hypercube by recursive mincut bipartitioning,"
"A task mapping method for a hypercube by combining subcubes,"
"Hypercube embedding is NP-complete,"
"Temporal communication graphs: A new graph theoretic model for mapping and scheduling in distributed memory systems,"
"Communication-oriented assignment of task modules in hypercube multicomputers,"
--TR
Task allocation onto a hypercube by recursive mincut bipartitioning
Depth-First Search Approach for Fault-Tolerant Routing in Hypercube Multicomputers
--CTR
Tarek F. Abdelzaher , Ella M. Atkins , Kang G. Shin, QoS Negotiation in Real-Time Systems and Its Application to Automated Flight Control, IEEE Transactions on Computers, v.49 n.11, p.1170-1183, November 2000
Dar-Tzen Peng , Kang G. Shin , Tarek F. Abdelzaher, Assignment and Scheduling Communicating Periodic Tasks in Distributed Real-Time Systems, IEEE Transactions on Software Engineering, v.23 n.12, p.745-758, December 1997 | hypercube multicomputers;fault-tolerant routing;communication cost;component failures;communication turnaround time;optimization;communication traffic;indirect optimization;task assignment;NP hard problem;link failures;telecommunication traffic;performance measure;communication efficiency;task modules;hypercube networks |
626918 | A Comparison of Trace-Sampling Techniques for Multi-Megabyte Caches. | The paper compares the trace-sampling techniques of set sampling and time sampling. Using the multi-billion reference traces of A. Borg et al. (1990), we apply both techniques to multi-megabyte caches, where sampling is most valuable. We evaluate whether either technique meets a 10% sampling goal: a method meets this goal if, at least 90% of the time, it estimates the trace's true misses per instruction with /spl les/10% relative error using /spl les/10% of the trace. Results for these traces and caches show that set sampling meets the 10% sampling goal, while time sampling does not. We also find that cold-start bias in time samples is most effectively reduced by the technique of D.A. Wood et al. (1991). Nevertheless, overcoming cold-start bias requires tens of millions of consecutive references. | Introduction
Computer designers commonly use trace-driven simulation to evaluate alternative CPU caches [SMIT82].
As cache sizes reach one megabyte and more, however, traditional trace-driven simulation requires very long traces
(e.g., billions of references) to determine steady-state performance [BOKW90, STON90], and long traces are
expensive to obtain, store, and use.
We can avoid simulating long traces by using trace-sampling techniques. Let the cache performance of a
small portion of the trace be an observation and a collection of observations be a sample. Sampling theory tells
1. R. E. Kessler was supported in part by a summer internship at Digital Equipment Corporation and graduate fellowships
from the National Science Foundation and the University of Wisconsin Alumni Research Foundation. He is now employed
by Cray Research, Inc. Mark D. Hill is supported in part by the National Science Foundation (MIPS-8957278 and
CCR-8902536), A.T.& T. Bell Laboratories, Cray Research Foundation and Digital Equipment Corporation. David A.
Wood is supported in part by the National Science Foundation (CCR-9157366) and the University of Wisconsin Graduate
School.
[Figure 1 diagram: a time-space plot of memory references, with time on one axis and cache sets on the other; a horizontal slice and a vertical slice are marked.]
Figure 1. Sampling as Vertical and Horizontal Time-Space Slices.
This figure shows a time-space diagram of a simulation with a very short trace. The time (position within
the trace) and cache set of each reference is marked with an ×. An observation in set sampling is the cache
performance of one set. References that determine a single set's performance appear in a horizontal slice
of this figure. An observation in time sampling is the cache performance of an interval of consecutive
references. These references appear in a vertical slice of this figure.
how to predict cache performance of the full trace, given a random sample of unbiased observations [MIFJ90].
With additional assumptions, we can also estimate how far the true value is likely to be from the estimate.
Two important trace-sampling techniques are set sampling [HEIS90, PUZA85] and time sampling
[LAPI88, LAHA88]. An observation in set sampling is the cache performance for the references to a single set
(depicted as a horizontal slice in Figure 1), while an observation in time sampling is the cache performance of the
references in a single time-contiguous trace interval (a vertical slice in Figure 1) 2 .
This study is the first to compare set sampling and time sampling. Using billion-reference traces of large
workloads that include multiprogramming but not operating system references [BOKW90], we examine how well
these methods predict mean misses per instruction (MPI) for multi-megabyte caches. We say a sampling method
is effective if it meets the following goal:
2. Laha et al. [LAPI88] and Wood et al. [WOHK91] referred to an observation of references in a time-contiguous interval
as a "sample". We use sample to refer to a collection of observations to be consistent with statistics terminology
[MIFJ90].
Definition 1: 10% Sampling Goal
A sampling method meets the 10% sampling goal if using ≤10% of the references in a trace it estimates
the trace's true MPI with ≤10% relative error and at least 90% confidence.
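As a concrete restatement (ours, not the paper's), a single estimate satisfies the error and data-volume conditions of Definition 1 when the check below passes; the method as a whole meets the goal only if at least 90% of its samples pass. Function and variable names are illustrative.

#include <math.h>
#include <stdbool.h>

/* Does one sample's estimate satisfy the two quantitative conditions of
 * the 10% sampling goal?  (The 90%-of-samples requirement is applied
 * across many samples, not inside this function.) */
bool within_sampling_goal(double mpi_estimate, double mpi_long,
                          double fraction_of_trace_used)
{
    double relative_error = fabs(mpi_estimate - mpi_long) / mpi_long;
    return fraction_of_trace_used <= 0.10 && relative_error <= 0.10;
}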
For set-sampling we find several results. First, calculating the MPI for a sample using instruction fetches to
all sets is much more accurate than using only instruction fetches to the sampled sets. Second, instead of selecting
the sets in a sample at random, selecting sets that share several index bit values reduces simulation time, facilitates
the simulation of cache hierarchies, and still accurately predicts the trace's MPI. Third, and most important, set
sampling is effective. For our traces and caches, it typically meets the 10% sampling goal.
For time-sampling, we first compare techniques for overcoming cold-start bias [EASF78], i.e., determining
the MPI for a particular trace interval without knowing the initial cache state. We consider leaving the cold-start
bias unchanged, recording metrics only during the second half of each interval, recording metrics only for initialized
sets [LAPI88, STON90], stitching intervals together [AGHH88], and Wood et al.'s model for predicting the
initialization reference miss ratio [WOHK91]. We obtain two results. First, on average, the technique of Wood et
al. minimizes the cold-start bias better than the other techniques. Second, for the multi-megabyte caches we stu-
died, interval lengths of tens of millions of instructions and larger are needed to reduce the effects of cold-start.
Then using Wood et al.'s technique to mitigate cold-start bias, we show that time sampling fails to meet the
10% sampling goal, because: (1) many intervals are needed to capture workload variation, and (2) long intervals
are necessary to overcome cold-start bias. Thus, for these traces and caches, set sampling is more effective than
time sampling for estimating MPI. Time sampling will still be preferred, however, for caches with time-dependent
behavior (e.g., prefetching) or interactions between sets (e.g., a single write buffer).
We do not consider other (non-sampling) techniques that reduce trace data storage, such as, Mache
[SAMP89], stack deletion and snapshot method [SMIT77], trace (tape) stripping [PUZA85, WANB90], or exploiting
spatial locality [AGAH90]. These techniques can be used in addition to the sampling considered in this study. We
also do not consider Przybylski's prefix technique [PRZY88], which prepends all previously-referenced unique
addresses to each time-observation. This method seems unattractive for multi-megabyte caches where each time-
observation requires its own prefix and each prefix must be very large for programs that can exercise multi-megabyte
caches.
Section 2 describes our methods. Sections 3 and 4 examine set sampling and time sampling, respectively.
Finally, Section 5 summarizes our results.
2. Methodology
This section describes the traces, cache configurations, and performance metric we use in later sections.
2.1. The Traces
The traces used in the study were collected at DEC Western Research Laboratory (WRL)
[BOKL89, BOKW90] on a DEC WRL Titan [NIEL86], a load/store ("RISC") architecture. Each trace consists of
the execution of three to six billion instructions of large workloads, including multiprogramming but not operating
system references. The traces reference from eight to over one hundred megabytes of unique memory locations.
These traces are sufficiently long to overcome the cold-start intervals of even the large caches considered in this
study. We chose programs with large memory requirements since we predict large application sizes will be more
common as main memories of hundreds of megabytes become available.
The traces of the multiprogrammed workloads represent the actual execution interleaving of the processes
on the traced system. The Mult2 trace includes a series of compiles, a printed circuit board router, a VLSI design
rule checker, and a series of simple programs commonly found on UNIX 3 systems, all executing in parallel (about
megabytes active at any time) with an average of 134,000 instructions executed between each process switch.
The Mult2.2 trace is the Mult2 workload with a switch interval of 214,000 instructions. The Mult1 trace includes
the processes in the Mult2 trace plus an execution of the system loader (the last phase of compilation) and a
Scheme (Lisp variant) program (75 megabytes active) and has a switch interval of 138,000 instructions. The
Mult1.2 trace is the Mult1 workload with a switch interval of 195,000 instructions. The Tv trace is of a VLSI
timing verifier (96 megabytes). Sor is a uniprocessor successive-over-relaxation algorithm that uses large, sparse
matrices (62 megabytes). Tree is a Scheme program that searches a large tree data structure (64 megabytes). Lin
is a power supply analyzer that uses sparse matrices (57 megabytes).
2.2. Cache Configuration Assumptions
This study focuses on multi-megabyte unified (mixed) caches, where we expect trace sampling to be most
useful. We vary the size and set-associativity of these caches over a range of 1-megabyte to 16-megabytes and
direct-mapped to four-way. The caches do no prefetching, use write-back and write-allocate policies, and have
128-byte blocks. The non-direct-mapped caches use a random replacement policy. We do not expect the
3. Trademark AT&T Bell Laboratories.
replacement policy to affect sampling accuracy since, for example, least-recently-used replacement eliminates at
most 15% of the cache misses for these caches [KESS91]. The caches use virtual-indexing with PID-hashing, an
approximation to real-indexing 4 . We also examined several real-indexed caches and found that they produced
results similar to those in this paper, which is not surprising since real-indexed cache performance is often close to
virtual-indexed cache performance.
Since multi-megabyte caches are likely to be used in a cache hierarchy, we simulate them as alternative
secondary caches placed behind a fixed primary cache configuration. The primary caches are split (separate)
instruction and data caches that are 32-kilobytes each, direct-mapped, 32-byte blocks, do no prefetching, use virtual
indexing, and write-back and write-allocate policies. We do not evaluate primary cache tradeoffs in this
study since secondary cache performance is unaffected by the primary caches when their sizes differ by at least a
factor of eight [PRHH89].
2.3. The Performance Metric: Misses Per Instruction
We measure cache performance with misses per instruction (MPI) rather than miss ratio 5 . Since we only
use MPI to compare the performance of alternative unified secondary caches, MPI is equivalent to Przybylski's
global miss ratio [PRHH89]. Specifically, a cache's MPI is equal to its global miss ratio times the average number
of processor references (instruction fetches and data references) per instruction.
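As a made-up numerical illustration of this relationship (the figures are not from the traces in this study): a secondary cache with a global miss ratio of 0.002 on a workload issuing 1.4 processor references per instruction has MPI = 0.002 × 1.4 = 0.0028, i.e., 2.8 misses per thousand instructions.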
3. Set Sampling
We first examine set sampling, where an observation is the MPI of a single set and a sample is a collection
of single-set observations. Section 3.1 discusses how to compute a set sample's MPI and why it should not contain
random sets, while Section 3.2 examines how well set sampling predicts MPI long , the MPI of a full trace.
4. Caches that use virtual-indexing select the set of reference using the reference's virtual address, while those that use
real-indexing select with the real address. PID-hashing means that we exclusive-or the upper eight index bits from the virtual
address with the process identifier (PID) of the currently executing process.
5. MPI is better than miss ratio for comparing the performance contributions of several caches in a system (e.g., instruc-
tion, data, secondary), because MPI implicitly factors in how often a cache is accessed. Furthermore, MPI times a cache's
average miss penalty directly gives the cycles per instruction (CPI) lost because of that cache's misses [HENP90].
3.1. Constructing Set Samples
3.1.1. Calculating the MPI of a Sample
Consider a cache with s sets, numbered 0 to s -1. For each set i, let miss i and instrn i be the number of the
misses and instruction fetches to set i. Let S be a sample containing n sets. We consider two ways to calculate the
MPI of sample S, MPI - S . The sampled-instructions method divides the mean misses to sets in sample S by the
mean instruction fetches to sets in sample S: 6
\widehat{MPI}_S = \frac{\frac{1}{n}\sum_{i \in S} miss_i}{\frac{1}{n}\sum_{i \in S} instrn_i} ,
while the all-instructions method divides by the mean instruction fetches to all sets:
\widehat{MPI}_S = \frac{\frac{1}{n}\sum_{i \in S} miss_i}{\frac{1}{s}\sum_{i=0}^{s-1} instrn_i} .
We compare the two methods by computing their coefficients of variation across all set samples S (j)
obtained with the constant-bits method, described in Section 3.1.2:
\mathrm{coefficient\ of\ variation} = \frac{\sqrt{\frac{1}{M}\sum_{j=1}^{M}\left(\widehat{MPI}_{S^{(j)}} - MPI_{long}\right)^{2}}}{MPI_{long}} \qquad (1)
where M is the number of samples.
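Equation 1 can be transcribed directly; the sketch below assumes the M sample estimates have already been computed and stored in an array (names are ours).

#include <math.h>

/* Coefficient of variation (Equation 1): the root-mean-square deviation
 * of the M sample estimates from the full-trace MPI, divided by the
 * full-trace MPI. */
double coefficient_of_variation(const double *mpi_estimate, int m,
                                double mpi_long)
{
    double sum_sq = 0.0;
    for (int j = 0; j < m; j++) {
        double d = mpi_estimate[j] - mpi_long;
        sum_sq += d * d;
    }
    return sqrt(sum_sq / m) / mpi_long;
}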
Experimental results, illustrated in Table 1, show that the all-instructions method performs much better,
never having a coefficient of variation more than one-tenth the sampled-instructions method. The difference is
infinite for the Sor and Lin traces because loops confine many instruction fetches to a few sets. We also investigated
normalizing miss i with total references per set and data references per set [KESS91]. These methods
6. We do not consider calculating MPI - S with \frac{1}{n}\sum_{i \in S} \frac{miss_i}{instrn_i} , because Puzak [PUZA85] showed estimating miss ratio
with the arithmetic mean of the per-set miss ratios is inferior to dividing the misses to sampled sets by the references to
sampled sets (the miss-ratio equivalent of the sampled-instructions method). For a sample containing all sets, Puzak's
work also implies \frac{1}{s}\sum_{i=0}^{s-1} \frac{miss_i}{instrn_i} \neq MPI_{long} .
                        Coefficient of Variation (percent)
Trace     MPI long × 1000   all-instructions   sampled-instructions
Mult1.2   0.69              1.9%               28.9%
Mult2.2   0.59              1.3%               24.3%
Sor       7.54              0.3%               -
Tree      0.59              6.8%               191.9%
Lin
Table 1. Accuracy of MPI Computations.
This table illustrates the accuracy of computing the full trace MPI (column two) for several traces with the
all-instructions and sampled-instructions methods. The accuracy is evaluated with the coefficient of variation
(Equation 1) for the MPI estimates from a 4-megabyte direct-mapped secondary cache with 16
samples of 1/16 the full trace each. The set samples are constructed with the constant bits method
described in the next section. Results show that the all-instructions method is far superior to the sampled-
instructions method.
perform similarly to the sampled-instructions method and not as well as the all-instructions method.
A minor disadvantage of the all-instructions method is that when gathering the references in a sample we
must also count instruction fetches to all sets. Since we believe this drawback is out-weighed by the experimental
results, we will use the all-instructions method throughout this paper.
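For reference, both estimators follow directly from the formulas above. The sketch below is ours (the paper's simulator is not shown); it assumes per-set miss and instruction-fetch counts have already been tallied, with the n sampled set numbers passed in explicitly.

/* miss[i] and instrn[i] are the misses and instruction fetches to set i
 * of an s-set cache; sample[] holds the n sampled set numbers. */
double mpi_sampled_instructions(const double *miss, const double *instrn,
                                const int *sample, int n)
{
    double misses = 0.0, fetches = 0.0;
    for (int k = 0; k < n; k++) {
        misses  += miss[sample[k]];
        fetches += instrn[sample[k]];
    }
    /* mean misses over sampled sets divided by mean fetches over sampled sets */
    return misses / fetches;
}

double mpi_all_instructions(const double *miss, const double *instrn,
                            const int *sample, int n, int s)
{
    double misses = 0.0, fetches = 0.0;
    for (int k = 0; k < n; k++)
        misses += miss[sample[k]];
    for (int i = 0; i < s; i++)
        fetches += instrn[i];
    /* mean misses per sampled set divided by mean fetches over all sets */
    return (misses / n) / (fetches / s);
}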
3.1.2. The Constant-Bits Method
We now examine two methods for selecting sets to form a sample. We use an example to show a disadvantage
of selecting sets at random and introduce the constant-bits method to overcome the disadvantage.
Assume that we want to evaluate three caches with samples that contain about 1/16-th the references in a
full trace. Let the caches choose a reference's set with bit selection (i.e., the index bits are the least-significant
address bits above the block offset) and have the following parameters:
Cache A: 32-kilobyte direct-mapped cache with 32-byte blocks (therefore its index bits are bits 14-5, assuming
references are byte addresses with bit 0 being least-significant),
Cache B: 1-megabyte two-way set-associative cache with 128-byte blocks (index bits 18-7), and
Cache C: 16-megabyte direct-mapped cache with 128-byte blocks (index bits 23-7).
One method for selecting the sets in a sample is to choose them at random [PUZA85]. To evaluate cache A
with references to random sets, we randomly select 64 of its 1024 sets (1/16-th), filter the full trace to extract
[Figure 2 diagram: (a) selecting sets at random for each cache, where the full trace is filtered into three filtered traces, one per cache; (b) selecting sets that share constant bits, where one filtered trace drives the simulation of each cache.]
Figure 2. Two Methods for Selecting the Sets in a Sample.
This figure illustrates selecting sets for samples of three alternative caches (A, B, and C) using (a) random
sets and (b) constant bits. When sets are selected at random, each simulation must begin by filtering the
full trace. With constant-bits, on the other hand, a filtered trace can drive the simulation of any cache
whose index bits contain the constant bits.
references to those sets, and then simulate cache A. For cache B, we select 128 of its 2048 sets, filter and simu-
late. Similarly for cache C, we use 8192 of its 131072 sets. As illustrated in Figure 2a, selecting sets at random
requires that each simulation begin by extracting references from the full trace. Furthermore, since primary and
secondary caches usually have different sets, it is not clear how to simulate a hierarchy of caches when sets are
selected at random.
We introduce a new method, called constant-bits, that selects references rather than sets. The constant-bits
method forms a filtered trace that includes all references that have the same value in some address bits. This
filtered trace can then be used to simulate any cache whose index bits include the constant bits 7 [KESS91]. For
example, we can filter a trace by retaining all references that have the binary value 0000 (or one of the other 15
7. This description assumes bit selection, i.e., the set-indexing bits come directly from the address of the memory access
[SMIT82]. The scenario is more complicated with other than simple bit-selection cache indexing. In particular, since we
use PID-hashing in this study, we ensured that the hashed index bits did not overlap with the constant bits. Note that
though we use virtual-indexing, one can apply the constant-bits technique to real-indexed caches, and to hierarchical
configurations with both real and virtual indexed caches if the constant bits are below the page boundary.
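To make the indexing concrete, the sketch below shows bit-selection indexing combined with the PID hash described in the footnotes; hashing exactly the upper eight index bits follows the text, while the function itself and its parameterization are ours and assume at least eight index bits.

#include <stdint.h>

/* Bit-selection set index with PID hashing (illustrative only).  The
 * index is taken from the virtual-address bits just above the block
 * offset; the upper eight index bits are then XORed with the process
 * identifier, as described in the text. */
unsigned set_index(uint64_t vaddr, unsigned pid,
                   unsigned log2_block_size, unsigned log2_num_sets)
{
    unsigned index = (unsigned)((vaddr >> log2_block_size) &
                                ((1u << log2_num_sets) - 1u));
    unsigned shift = log2_num_sets - 8;            /* assumes >= 8 index bits */
    unsigned hashed_top = ((index >> shift) ^ pid) & 0xffu;
    return (index & ((1u << shift) - 1u)) | (hashed_top << shift);
}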
[Figure 3 diagram: the full trace is filtered with four constant bits; one filtered trace drives the simulation of the primary cache (P), and a filtered trace of its misses drives the simulation of each secondary cache (A, B, C).]
Figure 3. Using Constant-Bits Samples with a Hierarchy.
This figure illustrates how to use constant-bits samples to simulate a primary cache (P) and three alternative
secondary caches (A, B and C).
values) in address bits 11-8. If the filtered trace is used with cache A, it will select all sets with binary index
xxx0000xxx, where "x" is either 0 or 1. Since this index pattern has six x's, it identifies 64 (2 6 ) of the 1024
sets in cache A. For caches B and C, the filtered trace selects sets with indices xxxxxxx0000x and
xxxxxxxxxxxx0000x, respectively. More generally, we can then use this filtered trace to select 1/16-th of the
sets in any cache whose block size is 256 bytes or less and whose size divided by associativity exceeds 2 kilo-
bytes. These include both primary caches (32-byte blocks, 32 kilobytes, direct-mapped) and all secondary caches
(128-byte blocks, 1-16 megabytes, 1-4-way set-associative) considered in this paper.
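The filtering step itself reduces to a mask-and-compare on the constant bits. In the running example the constant bits are address bits 11-8 and the retained value is binary 0000; the code below is a sketch with names of our choosing.

#include <stdint.h>
#include <stdbool.h>

/* Keep a reference when its address carries the chosen value in the
 * constant bits (bits 11-8 here).  Any cache whose index bits include
 * bits 11-8 can then be simulated from the filtered trace. */
bool keep_reference(uint64_t vaddr)
{
    const uint64_t constant_mask  = 0xFull << 8;   /* bits 11-8 */
    const uint64_t constant_value = 0x0ull << 8;   /* binary 0000 */
    return (vaddr & constant_mask) == constant_value;
}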
Constant-bits samples have two advantages over random samples. First, as illustrated in Figure 2b, using
constant-bits samples reduces simulation time by allowing a filtered trace to drive the simulations of more than
one alternative cache. Second, constant-bits samples make it straightforward to simulate hierarchies of caches
(when all caches index with the constant bits). As illustrated in Figure 3, we may simulate the primary cache once
and then use a trace of its misses to simulate alternative secondary caches.
A potential disadvantage of constant-bits samples is they may work poorly for workloads that use their
address space systematically (e.g., frequent accesses to a large, fixed stride vector). Experimental evidence, how-
ever, suggests that constant-bits sampling is effective. Figure 4 illustrates the accuracy of constant bits sampling
[Figure 4 plot: Set-Sampled Mult1.2 MPI Over Time; the x-axis is instructions executed (billions) and the y-axis is misses per instruction.]
Figure 4. Set Sampling on the Mult1.2 Trace.
For every 100 million instructions, this figure shows the actual MPI's (solid line) with the predicted MPI's
from each of 16 different set samples (dotted lines) for the Mult1.2 trace with a 4-megabyte direct-mapped
cache. Each sample includes only references that have the same value for address bits 11-8 (i.e., bits 11-8
are the constant bits), assuming that references are byte addresses with bit 0 being least-significant. Since
four bits are used to select references, each of the 16 samples contains an average of 1/16-th of the trace.
for the Mult1.2 trace. For every 100 million instructions, it plots the true MPI for the interval and the MPI
obtained from 16 set samples (each about 1/16 of the references of the full trace). In this example, the set samples
are almost indistinguishable from the true MPI. More generally, we found constant-bits samples to be
equally or more accurate than random samples with multi-megabyte caches [KESS91]. Thus, we use the
constant-bits method to construct set samples throughout the rest of this paper.
3.2. What Fraction of the Full Trace is Needed?
This section examines how well set samples estimate the MPI of a full trace. For reasons discussed above,
we construct samples with the constant-bits method and calculate MPI estimate for a sample with the all-
instructions method. We first look at the accuracy of set sampling when MPI long is known; then show how to construct
confidence intervals for MPI long when it is not known.
In Figure 4 we saw qualitatively that for one trace, cache, and sample size, the MPI variations between set
samples and MPI long were modest compared to temporal variations. Table 2 quantifies the long run error between
samples and MPI long for several traces, direct-mapped cache sizes, and sample sizes. We measure errors with
coefficient of variation calculated using Equation 1. Table 3 gives the corresponding results for two-way set-associative
caches.
Set-Sampling Coefficients of Variation (percent)
Fraction of Sets in Sample
Trace Size MPI long - 1000 1/4 1/16 1/64
Mult1.2
Mult2.2
1M 2.63 0.7% 1.9% N/A
4M 7.54 0.1% 0.3% 0.7%
Sor
1M 2.16 4.1% 5.6% N/A
Tree
Lin
Table 2. Set Sampling Coefficients of Variation for Direct Mapped.
This table shows the actual MPI of the full trace, MPI long , for direct-mapped caches, and the coefficient of
variation of the set-sampling MPI estimates, calculated using Equation 1. We construct samples with the
constant-bits method. Samples containing 1/4 the sets in the cache have bits 9-8 constant. Samples for
1/16 and 1/64 use bits 11-8 and 12-7, respectively. Some entries marked "N/A" are not available, because
the PID hashing overlapped with the constant bits. Except where marked with a dagger (†), at least 90% of
the samples have relative errors of less than or equal to ±10%.
The key result is that, for this data and for four-way set-associative caches not shown here, set sampling
generally meets the 10% sampling goal. Consider the columns labeled "1/16" in Tables 2 and 3, which
correspond to samples using 1/16-th of the sets and therefore will contain less than 10% of the trace on average.
Only Lin and Tree with 4-megabyte direct-mapped caches, marked with daggers, fail to have at least 90% of the
samples with relative errors of less than or equal to ±10%. (And they both have only 2 of 16 samples with more
than ±10% relative error.)
We also observe two other interesting trends in the data. First, reducing the fraction of sets in a sample (and
hence the number of sets per sample) from 1/4 to 1/16 and from 1/16 to 1/64 increases the coefficient of variation.
If the per-set MPI's were independent and identically distributed, then reducing the number of sets in a sample by
Set-Sampling Coefficients of Variation (percent)
Fraction of Sets in Sample
Trace Size MPI long - 1000 1/4 1/16 1/64
Mult1.2
Mult2.2
1M 2.31 0.2% 0.6% N/A
Sor
Tree
Lin
Table 3. Set Sampling Coefficients of Variation for 2-Way.
This table shows the MPI of the full trace for two-way set-associative caches, and the coefficient of variation
of the MPI estimates, similar to Table 2. Except where marked with a dagger (†), at least 90% of the
samples have relative errors of less than or equal to ±10%.
four should double the coefficient of variation [MIFJ90, STON90]. Indeed, there is good evidence that this is the
case (see, for example, the row for Mult1.2 with a 4-megabyte cache). Second, increasing associativity from
direct-mapped to two-way reduces corresponding coefficients of variation by more than 50%. We conjecture that
set sampling works better for two-way set-associative caches because they have fewer conflict misses than direct-mapped
caches [HILS89]. A high rate of conflict misses to a few sets can make those sets poor predictors of
overall behavior.
Finally, in practical applications of set sampling, we want to estimate the error of an MPI estimate, using
only the information contained within the sample (i.e., not using knowledge of MPI long as did Tables 2 and 3).
We do this using 90% confidence intervals, calculated from the sample mean and sample standard deviation by
the standard technique [MIFJ90]. Our estimate of the sample standard deviation includes a finite population
correction, which is important when the sample size is a substantial fraction of the population (e.g., when each
               90% Confidence Intervals that Contain MPI long
                          Fraction of Sets in Sample
              1/4                1/16               1/64
Trace     fraction  percent  fraction  percent  fraction  percent
Mult1.2   4/4       100%     16/16     100%     60/64     94%
Mult2.2   4/4       100%     16/16     100%     63/64     98%
Sor       4/4       100%     16/16     100%     64/64     100%
Tree      2/4       50%      12/16     75%      47/64     73%
Lin       4/4       100%     16/16     100%     62/64     97%
All                 89%                93%                91%
Table 4. Set-Sampling Error Prediction.
For a 4-megabyte direct-mapped secondary cache and various traces and fraction of sets, this table gives
the fraction and percent of 90% confidence intervals that contained MPI long . Since the percentages are
near 90%, confidence intervals usefully estimate how far MPI - S is likely to be from MPI long .
sample includes 1/4-th of all sets) [KESS91, MIFJ90].
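One textbook way to form such an interval is the normal-approximation interval with a finite population correction, sketched below. The 1.645 critical value and the sqrt((N − n)/(N − 1)) correction are standard choices on our part; the paper's exact expression may differ.

#include <math.h>

/* 90% confidence half-width for the mean of n per-set observations drawn
 * from a population of N sets, using the normal critical value 1.645 and
 * a finite population correction.  Requires n >= 2. */
double half_width_90(const double *x, int n, int population)
{
    double mean = 0.0, ss = 0.0;
    for (int i = 0; i < n; i++) mean += x[i];
    mean /= n;
    for (int i = 0; i < n; i++) ss += (x[i] - mean) * (x[i] - mean);
    double sdev = sqrt(ss / (n - 1));                       /* sample std. dev. */
    double fpc  = sqrt((double)(population - n) / (double)(population - 1));
    return 1.645 * (sdev / sqrt((double)n)) * fpc;
}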
For large samples, sampling theory predicts 90% of the 90% confidence intervals
will contain the true mean. For various constant-bits set samples and a 4-megabyte direct-mapped cache,
Table 4 displays the fraction of 90% confidence intervals that actually contain MPI long . Since the results in Table
4 are usually similar to 90%, the confidence interval calculation is a useful method for estimating the error of a
set-sample, given information from within that sample alone.
3.3. Advantages and Disadvantages of Set Sampling
The most important advantage of set sampling is that, for our simulations, it meets the 10% sampling goal
(Definition 1). A set sample automatically includes references from many execution phases, so an individual sample
can accurately characterize the MPI of a full trace, including its temporal variability. The reduced trace data
requirements of set sampling allow for simulation of longer traces, and therefore more algorithmic phases, in a
smaller amount of time. Besides the data reduction, set sampling also reduces the memory required to simulate a
cache. A set sample containing 1/16 of the full trace needs to simulate only 1/16 of the sets.
Set sampling does have its limitations. Even with the constant-bits method, the full trace must be retained if
one wishes to study caches that do not index with the constant bits. Furthermore, set sampling may not accurately
model caches whose performance is affected by interactions between references to different sets. The
effectiveness of a prefetch into one set, for example, may depend on how many references are made to other sets
before the prefetched data is first used. Similarly, the performance of a cache with a write buffer may be affected
by how often the write buffer fills up due to a burst of writes to many sets.
4. Time Sampling
The alternative to set sampling is time sampling. Here an observation is the MPI of a sequence of time-
contiguous references and is called an interval. Section 4.1 discusses determining the MPI for a sample, while
Section 4.2 examines using a sample to estimate MPI for the full trace.
4.1. Reducing Cold-Start Bias in Time Samples
To significantly reduce trace storage and simulation time, we must estimate the true MPI for an interval
without knowledge of initial cache state, i.e., the cache state at the beginning of the interval. This problem is simply
the well-known cold-start problem applied to each interval [EASF78]. Below we examine how well the following
five techniques mitigate the effect of the cold-start problem in multi-megabyte caches.
COLD COLD assumes that the initial cache state is empty. While this assumption does not affect misses to
full sets or hits to any set, it causes COLD to overestimate MPI, because references that appear to
miss to non-full sets may or may not be misses when simulated with the (true) initial cache state.
These potential misses are often called cold-start misses [EASF78].
HALF HALF uses the first half of the instructions in an interval to (partially) initialize the cache, and estimates
MPI with the remaining instructions.
PRIME PRIME estimates MPI with references to "initialized" sets. A set in a direct-mapped cache is initialized
once it is filled [STON90], while a set in a set-associative cache is initialized after it is filled and a
non-most-recently-used block has been referenced [LAPI88].
STITCH STITCH approximates the cache state at the beginning of an interval with the cache state at the end of
the previous interval [AGHH88]. Thus one creates a trace for a sample by stitching its intervals
together.
INITMR Like COLD, INITMR simulates an interval beginning with an empty initial cache state. Instead of
assuming that all cold-start misses miss, however, INITMR uses Wood et al.'s - split to estimate the
fraction of cold-start misses that would have missed if the initial cache state was known [WOHK91].
The estimate is based on (1) the fraction of time that a cache block frame holds a block that will not
be referenced before it is replaced, and (2) the fraction of the cache loaded during the cold-start simulation
of an interval. When we could not estimate (1) with the references in an interval, we assume it
to be 0.7.
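In outline only (Wood et al. derive the split fraction from the behavior of cache block frames, which the sketch below does not reproduce), an INITMR-style estimate charges all misses to already-full sets and only a fraction of the apparent misses to not-yet-full sets:

/* Schematic cold-start-corrected MPI for one interval: sure_misses are
 * misses to sets that were already full, cold_start_misses are apparent
 * misses to not-yet-full sets, and split is the estimated fraction of
 * cold-start misses that would really have missed. */
double corrected_interval_mpi(double sure_misses, double cold_start_misses,
                              double split, double instructions)
{
    return (sure_misses + split * cold_start_misses) / instructions;
}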
For a particular trace and cache, we evaluate a cold-start technique as follows. We select the number of
instructions in an interval, called the interval length, and collect a sample S of n =30 intervals spaced equally in
the trace. We use the cold-start technique to estimate the MPI for each interval, mpi - i , and calculate an MPI estimate
for sample S with
\widehat{MPI}_S = \frac{1}{n} \sum_{i=1}^{n} \widehat{mpi}_i .
Since we have the full trace, we can simulate each interval with its initial cache state to determine the
interval's true MPI, mpi i , and calculate the true MPI for the sample, MPI S , with MPI_S = \frac{1}{n} \sum_{i=1}^{n} mpi_i . We evaluate how
well a technique reduces cold-start bias in a sample S with 9 :
BIAS_S = \frac{\widehat{MPI}_S - MPI_S}{MPI_S} .
It is important to note that MPI S is not the same as MPI long . In Section 4.2, we will examine how well a time sample
predicts the full trace MPI; here we seek to mitigate the cold-start bias of MPI - S .
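Given per-interval estimated and true MPI values, the sample-level quantities above reduce to a few lines; this sketch mirrors the formulas rather than any code from the study.

/* BIAS_S = (estimated sample MPI - true sample MPI) / true sample MPI,
 * for n equal-length intervals. */
double sample_bias(const double *mpi_hat, const double *mpi_true, int n)
{
    double est = 0.0, tru = 0.0;
    for (int i = 0; i < n; i++) {
        est += mpi_hat[i];
        tru += mpi_true[i];
    }
    est /= n;
    tru /= n;
    return (est - tru) / tru;
}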
We evaluate BIAS S for five cold-start techniques, eight traces, four interval lengths (100 thousand, 1 mil-
lion, 10 million, and 100 million instructions), three cache sizes (1, 4, and 16 megabytes) and two associativities
(direct-mapped and four-way). Since space precludes us from displaying 192 cases for each cold-start technique,
we present several subsets of the data.
For a 10-million-instruction interval length, Tables 5 and 6 display BIAS S for direct-mapped and four-way
set-associative caches, respectively. The data show several trends. First, COLD, HALF and STITCH tend to
overestimate MPI S . COLD does so because it assumes that all cold-start misses miss. Similarly, HALF tends to
8. Since with time sampling each interval has the same number of instructions, it is meaningful to compute MPI - S with
the arithmetic mean of the mpi - i 's.
9. We calculate BIAS S for PRIME with the secondary cache's local miss ratio rather than MPI, because counting the
number of instructions is not straightforward when some sets are initialized but others are not. Since BIAS S is a relative er-
ror, we expect that calculating it with local miss ratio will be comparable to calculating it with MPI.
Cache
Trace Size MPI S - 1000 COLD HALF PRIME STITCH INITMR
Mult1.2
Mult2.2
1M 2.55 +4% -0% -33% +32% -2%
Sor
1M 2.00 +13% -0% -10% +29% -1%
Tree
Lin
Table 5. Bias of Cold-Start Techniques With Direct-Mapped Caches.
This table displays BIAS S for five cold-start techniques, eight traces, interval length of 10 million instruc-
tions, three direct-mapped cache sizes (1, 4, and 16 megabytes).
overestimate MPI S when the first half of the trace does not sufficiently fill the cache. HALF can underestimate the
sample's MPI, however, when the second half of most of a sample's intervals have a lower MPI than the whole of
each interval. We believe STITCH overestimates MPI S , because (due to temporal locality) references are less
likely to miss when simulated with an interval's true initial state than with the final state from the previous interval
[WOOD90]. Second, PRIME underestimates MPI S for direct-mapped caches. PRIME calculates MPI S by effectively
assuming that cold-start misses are as likely to miss as any other reference. Wood et al. [WOHK91] have
shown, however, that this assumption is false, and that cold-start misses are much more likely to miss than
randomly-chosen references. PRIME is more accurate for four-way set-associative caches, where the heuristic of
ignoring initial references to a most-recently-referenced block mitigates the underestimation. Third, INITMR did
not consistently underestimate or overestimate MPI S . Finally, the large biases for the Lin trace with 4- and 16-
megabyte caches are probably not important, because the true MPI's are so small.
Cache
Trace Size MPI S - 1000 COLD HALF PRIME STITCH INITMR
Mult1.2
Mult2.2
1M 2.14 +4% -2% -22% +32% -2%
Sor
Tree
Lin
Table 6. Bias of Cold-Start Techniques With Four-Way Set-Associativity.
This table displays BIAS S for five cold-start techniques, eight traces, interval length of 10 million instruc-
tions, three four-way set-associative cache sizes (1, 4, and 16 megabytes).
Table 7 addresses which cold-start technique is best. For each of the five cold-start techniques, we compute
Bias S for all 192 cases. We award a point in the "10%" category for biases within ±10% and award one in the
"Win" category for the cold-start technique closest to being unbiased. Multiple points are awarded in the case of
ties. The final row of Table 7 gives totals. HALF and INITMR have twice the "10%" score of the other
approaches, while INITMR has more "Wins" than all the other approaches combined. While HALF performs
well in many cases, INITMR performs best overall.
Table 8 illustrates how well INITMR performs with three direct-mapped caches (1, 4, and 16 megabytes)
and all four interval lengths (100,000, 1,000,000, 10,000,000, and 100,000,000 instructions). As expected, it
reduces bias more effectively as the interval lengths get longer or cache size gets smaller, because cold-start
becomes less dominant. The most striking aspect of this data is that INITMR, the best method, still performs terribly
for intervals containing 100,000 and 1,000,000 instructions. This should not be surprising, since the
number of block frames in the caches (e.g., 8192 for 1-megabyte caches) far exceeds the number of true misses in
Cache Interval COLD HALF PRIME STITCH INITMR
Length #
Size (Mill) 10% Win 10% Win 10% Win 10% Win 10% Win
All
All All
Table 7. Scoring of Different Cold-Start Techniques.
This table displays scores of the cold-start techniques for 192 cases: the eight traces, four interval lengths
(100 thousand, 1 million, 10 million, and 100 million instructions), three cache sizes (1, 4, and 16 megabytes)
and two associativities (direct-mapped and four-way). We award a point in the "10%" category if
-10% ≤ Bias S ≤ 10% and award one in the "Win" category for the cold-start technique closest to being
unbiased (log | Bias S | closest to zero). Multiple points are awarded in the case of ties.
these intervals (e.g., 1550 equals 1,000,000 instructions times a 0.00155 MPI for Mult1). Furthermore, it appears
that INITMR does not adequately mitigate cold-start bias unless interval lengths are, at least, 10 million instructions
for 1-megabyte caches, 100 million instructions for 4-megabyte caches, and more than 100 million instructions
for 16-megabyte caches. These results are consistent with the rule-of-thumb that trace length should be
increased by a factor of eight each time the cache size quadruples [STON90].
As Table 8 also illustrates, however, we can determine when INITMR is likely to perform well. We
marked each entry in the table with an asterisk ("*") if, on average, the interval length was sufficient to (a) fill at
least half the cache and (b) there were at least as many misses to full sets as cold-start misses. All values of Bias S
marked with an asterisk are within ±10%. Nevertheless, they imply that for multi-megabyte caches each interval
should contain more instructions than have previously been present in many "full" traces.
Cache                            Interval Length (Millions of Instructions)
Trace     Size   MPI long × 1000   0.1     1       10      100
Mult1.2   1M     1.45              103%    21%     2%*     0%*
Mult2.2   1M     1.18              127%    24%     -1%*    0%*
Sor       1M     14.77             -41%    -3%*    0%*     0%*
          4M     7.54              -27%    44%     6%*     0%*
Tree      1M     2.16              249%    36%     -1%*    0%*
Lin       1M     1.16              -30%    -14%    16%     1%*
Table 8. Accuracy of INITMR Time-Sample MPI Estimates.
This table displays BIAS S for INITMR with eight traces, four interval lengths, three direct-mapped cache
sizes (1, 4, and 16 megabytes). We mark entries with an asterisk ("*") if, on average, interval lengths are
sufficient to (a) fill at least half the cache and (b) there are at least as many misses to full sets as cold-start
misses.
[Figure 5 plots: (a) cones for MPI - S and (b) cones for MPI S (no hat); each panel plots the ratio of the estimate to the full-trace MPI against the fraction of full trace data, for interval lengths of 1, 10, and 100 million instructions.]
Figure 5. Cones for Time Sampling with Mult1.2.
This figure displays cones for MPI - S (left) and MPI S (right) for the Mult1.2 trace and a 4-megabyte direct-mapped
cache. For an interval length and sample size (whose product gives the fraction of the trace used)
the height of a cone displays the range of the middle 90% of estimates from many samples.
4.2. What Fraction of the Full Trace is Needed?
This section examines how accurately time samples estimate MPI long , the MPI of the full trace. We estimate
the MPI of a sample S, MPI - S , with the arithmetic mean of MPI estimates for each interval in the sample,
where we use INITMR to reduce cold-start bias of each interval.
Figure 5a illustrates how we summarize the data 10 . For the Mult1.2 traces and a 4-megabyte direct-mapped
cache, it plots MPI - S /MPI long on the logarithmic y-axis and the fraction of the full trace contained in the sample on
the logarithmic x-axis. Consider the cone at the far left. We use 3000 1-million-instruction intervals to calculate
its shape. The left edge, near 0.00025, gives the fraction of the trace used in a sample of one interval. We determine
the end-points of the left edge with the empirical distribution of MPI - S for single-interval samples. The upper
end-point gives the 95-th percentile, while the lower gives the 5-th percentile. Thus, the length of the left edge is
10. We use a visual display here instead of coefficient of variation, because we believe it provides more insight. We did
not use a visual display with set sampling, because we did not have enough samples to smooth the data.
the range of the middle 90% of the MPI - S 's. We compute other vertical slices similarly. A vertical line (not
shown) in the same cone at x-axis value 0.01, for example, gives the range of the middle 90% of the MPI - S 's for
samples of 40 intervals each. The other two cones are for interval lengths of 10 million instructions (300 inter-
vals) and 100 million instructions (30 intervals). The right graph gives similar data for MPI S , where we calculate
the MPI of each interval with its true initial cache state.
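For single-interval samples, the cone end-points are just empirical percentiles of the per-interval estimates; the sketch below uses a simple floor-index percentile rule of our choosing and does not address how larger samples were grouped.

#include <stdlib.h>

static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

/* 5th and 95th percentiles of m single-interval MPI estimates, giving the
 * lower and upper end-points of the left edge of a cone.  The array is
 * sorted in place; the index rule is a simple floor of the rank. */
void cone_edge(double *mpi_hat, int m, double *p05, double *p95)
{
    qsort(mpi_hat, m, sizeof(double), cmp_double);
    *p05 = mpi_hat[(int)(0.05 * (m - 1))];
    *p95 = mpi_hat[(int)(0.95 * (m - 1))];
}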
A time sample would meet the 10% sampling goal (Definition 1) if the sample's size times the length of
each interval were less than 10% of the trace (e.g., to the left of x-axis value 0.1 in Figure 5a) and the corresponding
vertical slice of the appropriate cone falls between 0.9 and 1.1 (on the y-axis). Unfortunately, none of the three cones for
Mult1.2 qualify. The cone for 1-million-instruction intervals is narrow enough but biased too far above 1.0, while
the cones of 10 million and 100 million instructions are too wide.
We found similar results for the rest of the traces, displayed in Figures 6a and 6b. The cones for the multiprogrammed
traces are similar to those of Mult1.2, although Mult2 and Mult2.2 have more cold-start bias. The
cones for the single applications, Tree, Tv, Sor and Lin, are more idiosyncratic, reflecting application-specific
behavior. The cones of Sor, for example, are skewed by Sor's behavior of alternating between low and high MPI
(with a period of around 300 million instructions [BOKW90]). For these traces and caches (and for direct-mapped
and four-way, 1- and 16-megabyte caches [KESS91]), time sampling fails to meet the 10% sampling goal.
Nevertheless, this data provides several insights into time sampling. First, the cones for MPI S (Figure 5b)
are vertically centered on 1.0 and have a shape similar to those of MPI - S (left). This data and data for other traces
(not shown) suggest that MPI S and MPI - S have different means but similar distributions. Therefore, it appears that
looking for better ways of mitigating cold-start bias in an interval (or sample) can be decoupled from examining
how well samples tend to predict MPI long .
Second, the height of the cones tends to vary as one over the square root of the sample size (number of
intervals per sample). This suggests that mpi - i 's are behaving as independent and identically distributed random
variables [MIFJ90].
Third, even if we eliminate cold-start bias, accurate estimates of MPI long must use hundreds of millions of
instructions to capture temporal workload variations. With Mult1.2 and a 4-megabyte direct-mapped cache, Figure
5b shows that MPI S is within 10% of MPI long (for 90% of the samples examined) only with samples of 200
intervals of length 1 million instructions, 10-million-instruction intervals, or 20 100-million-instruction intervals
11 . This is roughly a factor of three decrease in sample size as interval length is multiplied by ten.
11. For much smaller caches, Laha et al. found a sample size of 35 intervals to be sufficient [LAPI88].
Finally, we investigate whether the error in MPI - S can be estimated from information within the sample
itself. We calculate 90% confidence intervals [MIFJ90] and then investigate whether they contain the true mean
approximately 90% of the time. In most cases, however, the 90% confidence intervals do not contain MPI long
90% of the time, because cold-start bias (that was not removed by INITMR) prevents the distribution of MPI - S
from being centered on MPI long . Furthermore, the confidence intervals provide no information on the magnitude
of cold-start bias. Confidence intervals did work in a few cases where samples contained 30 or more intervals and
interval lengths were long enough to make cold-start bias negligible [KESS91]. These cases, however, failed to
meet the 10% sampling goal because the samples contained much more than 10% of the trace. Confidence intervals
also worked for MPI S (whose expected value is MPI long because it has no cold-start bias), when samples contain
at least 30 intervals.
4.3. Advantages and Disadvantages of Time Sampling
The major advantage of time sampling is that it is the only sampling technique available for caches with
timing-dependent behavior (e.g., that prefetch or are lockup-free [KROF81]) or shared structures across sets (e.g.,
write buffers or victim caching [JOUP90]). Furthermore, the cold-start techniques for time sampling can be
applied to any full-trace simulation, since a "full" trace is just a long observation from a system's workload.
However, in these simulations, time sampling fails to meet the 10% sampling goal for multi-megabyte
caches, because it needed long intervals to mitigate cold-start bias and many intervals to capture temporal work-load
variation. These results suggest that unless researchers develop better cold-start techniques, set sampling is
more effective than time sampling at estimating the MPI of multi-megabyte caches.
[Figure 6a plots: INITMR estimates for Mult1, Mult2, Mult2.2, and Tree; each panel plots the ratio of the estimate to the full-trace MPI against the fraction of full trace data.]
Figure 6a. Cones for Time Sampling with Mult1, Mult2, Mult2.2, and Tree.
Similar to Figure 5a, these figures display cones for MPI - S with the Mult1, Mult2, Mult2.2, and Tree traces.
[Figure 6b plots: INITMR estimates for Tv, Sor, and Lin; each panel plots the ratio of the estimate to the full-trace MPI against the fraction of full trace data.]
Figure 6b. Cones for Time Sampling with Tv, Sor, and Lin.
Similar to Figure 5a, these figures display cones for MPI - S with the Tv, Sor, and Lin traces. Note that Lin
uses a different y-axis scale.
5. Conclusions
A straightforward application of trace-driven simulation to multi-megabyte caches requires very long traces
that strain computing resources. Resource demands can be greatly reduced using set sampling or time sampling.
Set sampling estimates cache performance using information from a collection of sets, while time sampling uses
information from a collection of trace intervals. This study is the first to apply both techniques to large caches,
where they are most useful. We use billion-reference traces of large workloads that include multiprogramming
but not operating system references [BOKW90].
For set sampling we obtained several results. First, calculating the MPI (misses per instruction) for a sample
using the number of instruction fetches to all sets is much more accurate than using only the number of
instruction fetches in the sample. Second, constructing samples from sets that share some common index bit
values works well, since such samples can be used to accurately predict the MPI of multiple alternative caches
and caches in hierarchies. Third, sets behave sufficiently close to normal that confidence intervals are meaningful
and accurate. Last and most important, set sampling meets the 10% sampling goal: using ≤10% of the references
in a trace it estimates the trace's true MPI with ≤10% relative error and at least 90% confidence.
Results for time sampling include the following. First, Wood et al.'s - split was the most effective technique
for reducing cold-start bias, although using half the references in a trace interval to (partially) initialize a cache
often performed well. Second, interval lengths must be long to mitigate cold-start bias (10 million instructions for
1-megabyte caches, 100 million instructions for 4-megabyte caches, and more than 100 million instructions for
16-megabyte caches). Third and most important, for these traces and caches, time sampling does not meet the
10% sampling goal: we needed more than 10% of a trace to get (trace) interval lengths that adequately mitigated
cold-start bias and have enough intervals in a sample to make accurate predictions.
Thus, we found that for our traces, set sampling is more effective than time sampling for estimating MPI of
the multi-megabyte caches. Time sampling will be preferred, however, when set sampling is not applicable, such
as for caches that have time-dependent behavior (e.g., prefetching) or structures used by many sets (e.g., write
buffers).
As with any experimental work, our results are sure to hold only for the specific cases examined. Neverthe-
less, we expect our results to extend to other similar cache configurations and to other user-mode traces from similar
workloads. It is an open question whether our results apply to traces dominated by operating system activity
or radically different user-mode workloads.
6.
Acknowledgments
We would like to thank The Western Research Laboratory of Digital Equipment Corporation, especially
Anita Borg and David Wall, for the traces used in this study. Joel Bartlett, Renato De Leone, Jeremy Dion, Norm
Jouppi, Bob Mayo, and Don Stark all were a tremendous help in providing traceable applications. Paul Vixie and
Colleen Hawk helped to store the traces. Paul Beebe and the Systems Lab were able to satisfy our enormous computing
needs. Mike Litzkow and Miron Livny adapted Condor to the requirements of these simulations. Harold
Stone gave comments on an earlier version of this work, while Sarita Adve, Vikram Adve and Garth Gibson scrutinized
this paper.
7.
--R
"Cache Performance of Operating System and Multiprogramming Workloads,"
"Blocking: Exploiting Spatial Locality for Trace Compaction,"
"Long Address Traces from RISC Machines: Generation and Analysis,"
"Generation and Analysis of Very Long Address Traces,"
"Cold-Start vs. Warm-Start Miss Ratios,"
"Parallel Trace-Driven Cache Simulation by Time Partitioning,"
Computer Architecture: A Quantitative Approach
"Evaluating Associativity in CPU Caches,"
"Improving Direct-Mapped Cache Performance by the Addition of a Small Fully-Associative Cache and Prefetch Buffers,"
"Analysis of Multi-Megabyte Secondary CPU Cache Memories,"
"Lockup-Free Instruction Fetch/Prefetch Cache Organization,"
"Accurate Low-Cost Methods for Performance Evaluation of Cache Memory Systems,"
"Accurate Low-Cost Methods for Performance Evaluation of Cache Memory Systems,"
Probability and Statistics for Engineers
"Titan System Manual,"
"Performance-Directed Memory Hierarchy Design,"
"Characteristics of Performance-Optimal Multi-Level Cache Hierarchies,"
"Analysis of Cache Replacement Algorithms,"
"Mache: No-Loss Trace Compaction,"
"Two Methods for the Efficient Analysis of Memory Address Trace Data,"
"Cache Memories,"
"Efficient Trace-Driven Simulation Methods for Cache Performance Analysis,"
"The Design and Evaluation of In-Cache Address Translation,"
"A Model for Estimating Trace-Sample Miss Ratios,"
--TR
Cache performance of operating system and multiprogramming workloads
Accurate Low-Cost Methods for Performance Evaluation of Cache Memory Systems
Accurate low-cost methods for performance evaluation of cache memory systems
Characteristics of performance-optimal multi-level cache hierarchies
Mache: no-loss trace compaction
Evaluating Associativity in CPU Caches
High-performance computer architecture (2nd ed.)
Efficient trace-driven simulation method for cache performance analysis
Blocking: exploiting spatial locality for trace compaction
A model for estimating trace-sample miss ratios
Analysis of multi-megabyte secondary CPU cache memories
Generation and analysis of very long address traces
Improving direct-mapped cache performance by the addition of a small fully-associative cache and prefetch buffers
Cache Memories
Cold-start vs. warm-start miss ratios
Lockup-free instruction fetch/prefetch cache organization
Design and Evaluation of In-Cache Address Translation
Analysis of cache replacement-algorithms
Performance directed memory hierarchy design
--CTR
Patrick Crowley , Jean-Loup Baer, On the use of trace sampling for architectural studies of desktop applications, ACM SIGMETRICS Performance Evaluation Review, v.27 n.1, p.208-209, June 1999
Michel Dubois , Jaeheon Jeong , Ashwini Nanda, Shared cache architectures for decision support systems, Performance Evaluation, v.49 n.1-4, p.283-298, September 2002
Greg Hamerly , Erez Perelman , Brad Calder, How to use SimPoint to pick simulation points, ACM SIGMETRICS Performance Evaluation Review, v.31 n.4, p.25-30, March 2004
P. Foglia , D. Mangano , C. A. Prete, A cache design for high performance embedded systems, Journal of Embedded Computing, v.1 n.4, p.587-597, December 2005
Andrew R. Pleszkun, Techniques for compressing program address traces, Proceedings of the 27th annual international symposium on Microarchitecture, p.32-39, November 30-December 02, 1994, San Jose, California, United States
Lieven Eeckhout , Koen De Bosschere, Yet shorter warmup by combining no-state-loss and MRRL for sampled LRU cache simulation, Journal of Systems and Software, v.79 n.5, p.645-652, May 2006
Lieven Eeckhout , Smal Niar , Koen De Bosschere, Optimal sample length for efficient cache simulation, Journal of Systems Architecture: the EUROMICRO Journal, v.51 n.9, p.513-525, September 2005
Thomas M. Conte , Mary Ann Hirsch , Wen-mei W. Hwu, Combining Trace Sampling with Single Pass Methods for Efficient Cache Simulation, IEEE Transactions on Computers, v.47 n.6, p.714-720, June 1998
Niki C. Thornock , J. Kelly Flanagan, Facilitating level three cache studies using set sampling, Proceedings of the 32nd conference on Winter simulation, December 10-13, 2000, Orlando, Florida
Luk Van Ertvelde , Filip Hellebaut , Lieven Eeckhout , Koen De Bosschere, NSL-BLRL: Efficient CacheWarmup for Sampled Processor Simulation, Proceedings of the 39th annual Symposium on Simulation, p.168-177, April 02-06, 2006
Uri Lublin , Dror G. Feitelson, The workload on parallel supercomputers: modeling the characteristics of rigid jobs, Journal of Parallel and Distributed Computing, v.63 n.11, p.1105-1122, November
Yue Luo , Lizy K. John , Lieven Eeckhout, SMA: a self-monitored adaptive cache warm-up scheme for microprocessor simulation, International Journal of Parallel Programming, v.33 n.5, p.561-581, October 2005
Rong Xu , Zhiyuan Li, A sample-based cache mapping scheme, ACM SIGPLAN Notices, v.40 n.7, July 2005
Aditya Toomula , Jaspal Subhlok, Replicating memory behavior for performance prediction, Proceedings of the 7th workshop on Workshop on languages, compilers, and run-time support for scalable systems, p.1-8, October 22-23, 2004, Houston, Texas
Humayun Khalid, Validating Trace-Driven Microarchitectural Simulations, IEEE Micro, v.20 n.6, p.76-82, November 2000
Lieven Eeckhout , Koen De Bosschere, Efficient simulation of trace samples on parallel machines, Parallel Computing, v.30 n.3, p.317-335, March 2004
Roland E. Wunderlich , Thomas F. Wenisch , Babak Falsafi , James C. Hoe, Statistical sampling of microarchitecture simulation, ACM Transactions on Modeling and Computer Simulation (TOMACS), v.16 n.3, p.197-224, July 2006
Fast data-locality profiling of native execution, ACM SIGMETRICS Performance Evaluation Review, v.33 n.1, June 2005
Changkyu Kim , Doug Burger , Stephen W. Keckler, An adaptive, non-uniform cache structure for wire-delay dominated on-chip caches, ACM SIGPLAN Notices, v.37 n.10, October 2002
J. L. Peterson , P. J. Bohrer , L. Chen , E. N. Elnozahy , A. Gheith , R. H. Jewell , M. D. Kistler , T. R. Maeurer , S. A. Malone , D. B. Murrell , N. Needel , K. Rajamani , M. A. Rinaldi , R. O. Simpson , K. Sudeep , L. Zhang, Application of full-system simulation in exploratory system design and development, IBM Journal of Research and Development, v.50 n.2/3, p.321-332, March 2006 | consecutive references;memory architecture;relative error;reference traces;time sampling;performance evaluation;multi-megabyte caches;program diagnostics;sampling goal;cold-start bias;trace-sampling techniques;digital simulation;buffer storage |
626956 | Faster Numerical Algorithms Via Exception Handling. | An attractive paradigm for building fast numerical algorithms is the following: 1) try a fast but occasionally unstable algorithm, 2) test the accuracy of the computed answer, and 3) recompute the answer slowly and accurately in the unlikely event it is necessary. This is especially attractive on parallel machines where the fastest algorithms may be less stable than the best serial algorithms. Since unstable algorithms can overflow or cause other exceptions, exception handling is needed to implement this paradigm safely. To implement it efficiently, exception handling cannot be too slow. We illustrate this paradigm with numerical linear algebra algorithms from the LAPACK library. | Introduction
A widely accepted design paradigm for computer hardware is to execute the most common
instructions as quickly as possible, and replace rarer instructions by sequences of more common
ones. In this paper we explore the use of this paradigm in the design of numerical
algorithms. We exploit the fact that there are numerical algorithms that run quickly and
usually give the right answer as well as other, slower, algorithms that are always right. By
"right answer" we mean that the algorithm is stable, or that it computes the exact answer
for a problem that is a slight perturbation of its input [12]; this is all we can reasonably ask
of most algorithms. To take advantage of the faster but occasionally unstable algorithms,
we will use the following paradigm:
(1) Use the fast algorithm to compute an answer; this will usually be done stably.
(2) Quickly and reliably assess the accuracy of the computed answer.
(3) In the unlikely event the answer is not accurate enough, recompute it slowly
but accurately.
The success of this approach depends on there being a large difference in speed between the
fast and slow algorithms, on being able to measure the accuracy of the answer quickly and
reliably, and, most important for us, on floating point exceptions not causing the unstable
algorithm to abort or run very slowly. This last requirement means the system must either
continue past exceptions and later permit the program to determine whether an exception
occurred, or else support user-level trap handling. In this paper we will assume the first
response to exceptions is available; this corresponds to the default behavior of IEEE standard
floating point arithmetic [3, 4].
Our numerical methods will be drawn from the LAPACK library of numerical linear algebra
routines for high performance computers [2]. In particular, we will consider condition
estimation (error bounding) for linear systems, computing eigenvectors of general complex
matrices, the symmetric tridiagonal eigenvalue problem, and the singular value decomposi-
tion. What the first two algorithms have in common is the need to solve triangular systems
of linear equations which are possibly very ill-conditioned. Triangular system solving is one
of the matrix operations found in the Basic Linear Algebra Subroutines, or BLAS [9, 10, 18].
The BLAS, which include related operations like dot product, matrix-vector multiplication,
and matrix-matrix multiplication, occur frequently in scientific computing. This has led to
their standardization and widespread implementation. In particular, most high performance
machines have highly optimized implementations of the BLAS, and a good way to write
portable high performance code is to express one's algorithm as a sequence of calls to the
BLAS. This has been done systematically in LAPACK for most of numerical linear algebra,
leading to significant speedups on highly pipelined and parallel machines [2].
However, the linear systems arising in condition estimation and eigenvector computation
are often ill-conditioned, which means that over/underflow is not completely unlikely. Since
the first distribution of LAPACK had to be portable to as many machines as possible,
including those where all exceptions are fatal, it could not take advantage of the speed of the
optimized BLAS, and instead used tests and scalings in inner loops to avoid computations
that might cause exceptions.
In this paper we present algorithms for condition estimation and eigenvector computation
that use the optimized BLAS, test flags to detect when exceptions occur, and recover when
exceptions occur. We report performance results on a "fast" DECstation 5000 and a "slow"
DECstation 5000 (both have a MIPS R3000 chip as CPU [17]), a Sun 4/260 (which has a
SPARC chip as CPU [15]), a DEC Alpha [11], a CRAY-C90 and a SPARCstation 10 with
a Viking microprocessor. The "slow" DEC 5000 correctly implements IEEE arithmetic,
but does arithmetic with NaNs about 80 times slower than normal arithmetic. The "fast"
DEC 5000 implements IEEE arithmetic incorrectly, when the operands involve denormals or
NaNs, but does so at the same speed as normal arithmetic. Otherwise, the two DEC 5000
workstations are equally fast. 1 The CRAY does not have exception handling, but we can still
compare speeds in the most common case where no exceptions occur to see what speedup
there could be if exception handling were available. We measure the speedup as the ratio
of the time spent by the old LAPACK routine to the time spent by our new routine. The
speedups we obtained for condition estimation in the most common case where no exceptions
occur were as follows. The speedups ranged from 1.43 to 6.50 on either DEC 5000, from
1.50 to 5.00 on the Sun, from 1.66 to 9.28 on the DEC Alpha, and from 2.55 to 4.21 on
the CRAY. Results for computing eigenvectors were about 1.08. These are quite attractive
speedups. They would be even higher on a machine where the optimized BLAS had been
parallelized but the slower scaling code had not.
In the rare case when exceptions did occur, the speed depended very strongly on whether
the exception occurred early or late during the triangular solve, and on the speed of subsequent
arithmetic with NaN (Not-a-Number) arguments. On some examples the speedup
was as high as 5.41 on the fast DEC 5000, but up to 13 times slower on the slow DEC 5000.
This illustrates the price of implementing IEEE NaN arithmetic too slowly.
We discuss the bisection algorithm for finding the eigenvalues of symmetric tridiagonal
matrices. The LAPACK SSTEBZ routine takes special care in the inner loop to avoid overflow
or division by zero, whereas our algorithm takes advantage of infinity arithmetic defined in
the IEEE standard. We report performance results on a SPARCstation IPX (which has a
Weitek 8601 chip as FPU), as well as on a distributed memory multiprocessor - the CM-5.
The speedups range from 1.14 to 1.47.
We also discuss a singular value decomposition algorithm used in the LAPACK routine
SBDSQR, where the careful scaling code can be avoided by using exception handling. The
speedups we have obtained on a CRAY Y-MP (EL/2-256) were between 1.21 and 1.39.
The rest of this paper is organized as follows. Section 2 describes our model of exception
handling in more detail. Section 3 describes the algorithms for solving triangular systems
1 Normally a buggy workstation would be annoying, but in this case it permitted us to run experiments
where only the speed of exception handling varied.
both with and without exception handling. Section 4 describes the condition estimation
algorithms both with and without exception handling, and gives timing results. Section 5
does the same for eigenvector computations. Section 6 compares the bisection algorithms
for solving the symmetric tridiagonal eigenvalue problem both with and without exception
handling. Section 7 describes the benefit from exception handling when computing singular
values of a matrix. Section 8 draws lessons about the value of fast exception handling and
fast arithmetic with NaNs and infinity symbols. Section 9 suggests future research.
Exception Handling
In this section we review how IEEE standard arithmetic handles exceptions, discuss how the
relative speeds of its exception handling mechanisms affect algorithm design, and state the
assumptions we have made about these speeds in this paper. We also briefly describe our
exception handling interface on the DECstation 5000.
The IEEE standard classifies exceptions into five categories: overflow, underflow, division
by zero, invalid operation, and inexact. Associated with each exception are a status flag and
a trap. Each of the five exceptions will be signaled when detected. The signal entails setting
a status flag, taking a trap, or possibly doing both. All the flags are "sticky", which means
that after being raised they remain set until explicitly cleared. All flags can be tested, saved,
restored, or altered explicitly by software. A trap should come under user control in the sense
that the user should be able to specify a handler for it, although this capability is seldom
implemented on current systems. The default response to these exceptions is to proceed
without a trap and deliver to the destination an appropriate default value. The standard
provides a clearly-defined default result for each possible exception. The default values and
the conditions under which they are produced are summarized in Table 1. Once produced,
IEEE default behavior is for ±∞ and NaN to propagate through the computation without
producing further exceptions.
According to the standard, the traps and sticky flags provide two different exception
   Exception raised    Default value                  Condition
   overflow            ±∞                             exponent of result too large
   underflow           0, ±2^{e_min}, or denormals    e < e_min
   division by zero    ±∞                             x/0, with finite x ≠ 0
   invalid             NaN                            ∞ + (−∞), 0 × ∞, 0/0, ∞/∞, etc.
   inexact             round(x)                       true result not representable

   Table 1: The IEEE standard exceptions and the default values.
handling mechanisms. Their utility depends on how quickly and flexibly they permit exceptions
to be handled. Since modern machines are heavily pipelined, it is typically very
expensive or impossible to precisely interrupt an exceptional operation, branch to execute
some other code, and later resume computation. Even without pipelining, operating system
overhead may make trap handling very expensive. Even though no branching is strictly
needed, merely testing sticky flags may be somewhat expensive, since pipelining may require
a synchronization event in order to update them. Thus it appears fastest to use sticky flags
instead of traps, and to test sticky flags as seldom as possible. On the other hand, infrequent
testing of the sticky flags means possibly long stretches of arithmetic with ±∞ or NaN as
arguments. If default IEEE arithmetic with them is too slow compared to arithmetic with
normalized floating point numbers, then it is clearly inadvisable to wait too long between
tests of the sticky flags to decide whether alternate computations should be performed. In
summary, the fastest algorithm depends on the relative speeds of
conventional, unexceptional floating point arithmetic,
arithmetic with NaNs and ±∞ as arguments,
testing sticky flags, and
trap handling.
In the extreme case, where everything except conventional, unexceptional floating point
arithmetic is terribly slow, we are forced to test and scale to avoid all exceptions. This is
CPU                    denormal        ∞              NaN            how measured
MIPS R3000/3010 cc
"slow" (correct) 120x slower full speed 80x slower
"fast" (buggy) "full speed" full speed "full speed" 2
MIPS R4000/4010 120x slower full speed 32x slower cc
full speed 10x slower f77
full speed 9x slower f77
full speed full speed full speed f77
PA-RISC 68x slower full speed 42x slower cc
RS/6000 full speed full speed full speed cc
full speed full speed full speed manual
387, 486, Pentium full speed full speed full speed manual
i860 868x slower 432x slower 411x slower cc
i960 full speed full speed full speed manual
DEC Alpha 690x slower 343x slower 457x slower cc
CRAY-C90 N/A abort abort manual
(non IEEE machine)
Table 2: Speed of arithmetic with denormal, ∞ and NaN as arguments, as compared with
conventional arithmetic.
the unfortunate situation we were in before the introduction of exception handling, and it
would be an unpleasant irony if exception handling were rendered too unattractive to use
by too slow an implementation. In this paper, we will design our algorithms assuming that
user-defined trap handlers are not available, that testing sticky flags is expensive enough
that it should be done infrequently, and that arithmetic with NaN and ±∞ is reasonably
fast. Our codes will in fact supply a way to measure the benefit one gets by making NaN
and ∞ arithmetic fast. Table 2 shows the speed of arithmetic with denormalized numbers,
∞ and NaN, compared to conventional arithmetic on some machines. Some of the table
entries are measured from Fortran, some from C, while others are from the architecture
manuals. The DEC Alpha can only implement IEEE defaults, including infinities, NaNs and
denormals, by precise interrupts; this causes significant loss of speed as compared with the
normal arithmetic.
Our interface to the sticky flags is via subroutine calls, without special compiler support.
2 Returns the first argument for binary operations; the status flag is not set.
We illustrate these interfaces briefly for one of our test machines, the DECstation 5000
with the MIPS R3000 chip as CPU. On the DECstation 5000, the R3010 Floating-Point
Accelerator operates as a coprocessor for the R3000 Processor chip, and extends the
R3000's instruction set to perform floating point arithmetic operations. The FPA contains
a 32-bit Control/Status register, FCR31, that is designed for exception handling and can
be read/written by instructions running in User Mode. The FCR31 contains five Nonsticky
Exception bits (one for each exception in Table 1), which are appropriately set or cleared after
every floating point operation. There are five corresponding TrapEnable bits used to enable
a user level trap when an exception occurs. There are five corresponding Sticky bits to hold
the accrued exception bits required by the IEEE standard for trap disabled operation. Unlike
the nonsticky exception bits, the sticky bits are never cleared as a side-effect of any floating
point operation; they can be cleared only by writing a new value into the Control/Status
register. The nonsticky exception bits might be used in other applications requiring finer
grained exception handling, such as parallel prefix [6].
In the algorithms developed in this paper we need only manipulate the trap enable bits
(set them to zero to disable software traps) and the sticky bits. Procedure exceptionreset()
clears the sticky flags associated with overflow, division by zero and invalid operations, and
suppresses the exceptions accordingly. Function except() returns true if any or all of the
overflow, division by zero and invalid sticky flags are raised.
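The interface above is machine-specific (it manipulates the MIPS Control/Status register
directly). Purely as a hedged illustration of what exceptionreset() and except() amount to,
an analogous pair of routines could be written today with the standard C99 fenv.h interface;
this is not the authors' code, only a portable approximation.

   #include <fenv.h>

   #pragma STDC FENV_ACCESS ON

   /* Clear the sticky flags for overflow, division by zero and invalid
    * operations; traps are assumed disabled (the IEEE default behavior). */
   void exceptionreset(void)
   {
       feclearexcept(FE_OVERFLOW | FE_DIVBYZERO | FE_INVALID);
   }

   /* Return nonzero if any of those three sticky flags has been raised
    * since the last call to exceptionreset(). */
   int except(void)
   {
       return fetestexcept(FE_OVERFLOW | FE_DIVBYZERO | FE_INVALID) != 0;
   }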
3 Triangular System Solving
We discuss two algorithms for solving triangular systems of equations. The first one is
the simpler and faster of the two, and disregards the possibility of over/underflow. The
second scales carefully to avoid over/underflow, and is the one currently used in LAPACK
for condition estimation and eigenvector computation [1].
We will solve L·x = b, where L is a lower triangular n-by-n matrix. We use the notation
L(i : j, k : l) to indicate the submatrix of L lying in rows i through j and columns k through
l of L. Similarly, L(i : j, k) is the same as L(i : j, k : k). The following algorithm accesses L
by columns.
Algorithm 1: Solve a lower triangular system L * x = b
   x(1 : n) := b(1 : n)
   for i := 1 to n
      x(i) := x(i) / L(i, i)
      x(i+1 : n) := x(i+1 : n) - x(i) * L(i+1 : n, i)
   endfor
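In C, a column-oriented transcription of Algorithm 1 might look as follows (a sketch only,
not the reference BLAS STRSV; the matrix is assumed to be stored column-major with leading
dimension ldl, and b is overwritten by the solution x):

   /* Solve L*x = b for x, L lower triangular n-by-n, column-major storage.
    * No tests or scalings, so intermediate results may overflow (see text). */
   void lower_solve(int n, const double *L, int ldl, double *b)
   {
       for (int j = 0; j < n; j++) {
           b[j] /= L[j + j * ldl];                 /* x(j) := x(j) / L(j,j)         */
           for (int i = j + 1; i < n; i++)
               b[i] -= b[j] * L[i + j * ldl];      /* x(j+1:n) -= x(j) * L(j+1:n,j) */
       }
   }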
This is such a common operation that it has been standardized as subroutine STRSV,
one of the BLAS [9, 10, 18]. Algorithm 1 can easily overflow even when the matrix L is
well-scaled, i.e. all rows and columns are of equal and moderate length. For example, a
4-by-4 lower triangular matrix L_4(c) of this kind produces intermediate solution components
proportional to c^{-2}, c^{-3} and c^{-4}, and so the solve overflows in IEEE single precision
for quite moderate values of c, even though each row and column of L
has largest entry 1 in magnitude, and no terribly small entries. Similarly, let L_n(c) be the
analogous n-by-n matrix, with the same pattern repeated in the second through (n-1)-st
elements along the main diagonal. This means that solving with L_n(c) can produce
intermediate values that grow like c^{-n}.
The second algorithm scales carefully to avoid overflow in Algorithm 1. The algorithm
works by choosing a scale factor 0 ≤ s ≤ 1 and solving L·x = s·b instead of L·x = b; s < 1 is
chosen whenever the solution x would overflow. In case x would overflow even if
s were the smallest positive floating point number, s is set to zero (for example, consider
the above example in single precision). If some L(i, i) = 0 exactly, so
that L is singular, the algorithm will set s = 0 and compute a nonzero vector x satisfying
L·x = 0.

Here is a brief outline of the scaling algorithm; see [1] for details. Coarse bounds on the
solution size are computed as follows. The algorithm begins by computing column bounds c_j,
then a lower bound G_i on the reciprocals 1/|x_j| of the solution components after step i of
Algorithm 1, and finally a lower bound g on the reciprocal of the largest intermediate or final
value computed anywhere in Algorithm 1:

      g = min_{1 ≤ i ≤ n} G_i .

Lower bounds on the reciprocals 1/|x_j| are computed instead of upper bounds on |x_j| to
avoid the possibility of overflow in the upper bounds.

Let UN denote the smallest floating point number that can safely be inverted. If g ≥ UN,
this means the solution can be computed without danger of overflow, so we can simply call
the BLAS. Otherwise, the algorithm makes a complicated series of tests and scalings as in
Algorithm 2.
Now we compare the costs of Algorithms 1 and 2. Algorithm 1 costs about n² flops
(floating point operations), half additions and half multiplies. There are also n divisions,
which are insignificant for large n. In the first step of Algorithm 2, computing the c_i costs
about half as much as Algorithm 1. In some of our applications, we expect to
solve several systems with the same coefficient matrix, and so can reuse the c_i; this amortizes
the cost over several calls. In the best case, when g ≥ UN, we then simply call STRSV. This
makes the overall operation count about 1.5n² (or n² if we amortize). In the worst (and very
rare) case, the inner loop of Algorithm 2 will scale at each step, increasing the operation
count by about n² again, for a total of 2.5n² (or 2n² if we amortize). Updating x_max costs
additional data accesses and comparisons, which may or may not be cheaper than the
same number of floating point operations.
More important than these operation counts is that Algorithm 2 has many data dependent
branches, which makes it harder to optimize on pipelined or parallel architectures than the
much simpler Algorithm 1. This will be borne out by the results in later sections.
Algorithm 2 is available as LAPACK subroutine SLATRS. This code handles upper and
lower triangular matrices, permits solving with the input matrix or its transpose, and handles
either general or unit triangular matrices. It is 300 lines long excluding comments. The
Fortran implementation of the BLAS routine STRSV, which handles the same input options,
is 159 lines long, excluding comments. For more details on SLATRS, see [1].
Algorithm 2: Solve a lower triangular system L * x = s * b, scaling to avoid overflow (outline)
   Compute c_j, G_i and g as described above
   if (g ≥ UN) then
      s := 1; call the BLAS routine STRSV
   else
      s := 1; x(1 : n) := b(1 : n)
      for i := 1 to n
         if (L(i, i) = 0) then
            s := 0; compute a nonzero null vector x with L * x = 0; quit
         else if (dividing by L(i, i) or the subsequent update could overflow) then
            rescale x and s downward so that the step is safe
         endif
         x(i) := x(i) / L(i, i)
         x(i+1 : n) := x(i+1 : n) - x(i) * L(i+1 : n, i)
      endfor
   endif
Condition Estimation
In this section we discuss how IEEE exception handling can be used to design a faster
condition estimation algorithm. We compare first theoretically and then in practice the old
algorithm used in LAPACK with our new algorithm.
4.1 Algorithms
When solving the n-by-n linear system A·x = b, we wish to compute a bound on the error
||x_computed − x_true||. We will measure the error using either the one-norm ||x||_1 = Σ_j |x_j| or
the infinity norm ||x||_∞ = max_j |x_j|. Then the usual error bound [12] is

   ||x_computed − x_true|| ≤ p(n) · ρ · ε · k_1(A) · ||x_computed|| ,     (1)

where p(n) is a slowly growing function of n (usually about n), ε is the machine precision,
k_1(A) is the condition number of A, and ρ is the pivot growth factor. The condition number
is defined as k_1(A) = ||A||_1 · ||A^{-1}||_1. Since computing
A^{-1} costs more than solving A·x = b, we prefer to estimate ||A^{-1}||_1 inexpensively from A's
LU factorization; this is called condition estimation. Since ||A||_1 is easy to compute, we
focus on estimating ||A^{-1}||_1. The pivot growth may be defined as ||U||_1 / ||A||_1 (other
definitions are possible). This is close to unity except for pathological cases.
In the LAPACK library [2], a set of routines has been developed to estimate the reciprocal
of the condition number k_1(A). We estimate the reciprocal of k_1(A), which we call
RCOND, to avoid overflow in k_1(A). The inputs to these routines include the factors L and U
from the factorization A = P·L·U. A modification [14] of Hager's method [13]
is used to estimate ||A^{-1}||_1. The algorithm is derived from a convex optimization approach,
and is based on the observation that the maximal value of the function f(x) = ||A^{-1}·x||_1
over the set ||x||_1 ≤ 1 is attained at one of the vectors e_j, for j = 1, ..., n, where e_j is the jth
column of the n-by-n identity matrix.
Algorithm 3 [13]: This algorithm computes a lower bound γ for ||A^{-1}||_1.
   Choose x with ||x||_1 = 1 (e.g., x_i = 1/n)
   Repeat
      solve A * y = x        (by solving L * w = x and U * y = w using Algorithm 2)
      ξ := sign(y)
      solve A^T * z = ξ      (by solving U^T * w = ξ and L^T * z = w using Algorithm 2)
      if (||z||_∞ ≤ z^T * x) then
         γ := ||y||_1; quit
      else x := e_j , for that j where |z_j| = ||z||_∞
The algorithm involves repeatedly solving upper or lower triangular systems until a certain
stopping criterion is met. Due to the possibilities of overflow, division by zero, and invalid
exceptions caused by the ill-conditioning or bad scaling of the linear systems, the LAPACK
routine SGECON uses Algorithm 2 instead of Algorithm 1 to solve the triangular systems, as
discussed in Section 3. The details of the use of the scale factor s returned by
Algorithm 2 are not shown; see routines SGECON and SLACON in LAPACK [2].
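For orientation only, here is a hedged C sketch of the iteration of Algorithm 3; solve() and
solve_transpose() are hypothetical routines (for example, wrappers around an LU factorization)
that overwrite their vector argument with A^{-1}·x and A^{-T}·x, and the loop is capped at a
few iterations, as is customary for this estimator. This is a simplification, not LAPACK's SLACON.

   #include <math.h>
   #include <stdlib.h>
   #include <string.h>

   void solve(int n, const void *factors, double *x);            /* x := A^{-1} x  (hypothetical) */
   void solve_transpose(int n, const void *factors, double *x);  /* x := A^{-T} x  (hypothetical) */

   /* Returns a lower bound gamma for ||A^{-1}||_1 (sketch of Algorithm 3). */
   double norm1inv_estimate(int n, const void *factors)
   {
       double *x = malloc(n * sizeof *x);
       double *y = malloc(n * sizeof *y);
       double *z = malloc(n * sizeof *z);
       double gamma = 0.0;
       for (int i = 0; i < n; i++) x[i] = 1.0 / n;              /* ||x||_1 = 1        */

       for (int iter = 0; iter < 5; iter++) {
           memcpy(y, x, n * sizeof *y);
           solve(n, factors, y);                                /* y := A^{-1} x      */
           gamma = 0.0;
           for (int i = 0; i < n; i++) gamma += fabs(y[i]);     /* gamma := ||y||_1   */
           for (int i = 0; i < n; i++)
               z[i] = (y[i] >= 0.0) ? 1.0 : -1.0;               /* xi := sign(y)      */
           solve_transpose(n, factors, z);                      /* z := A^{-T} xi     */
           int j = 0;
           double ztx = 0.0;
           for (int i = 0; i < n; i++) {
               if (fabs(z[i]) > fabs(z[j])) j = i;
               ztx += z[i] * x[i];
           }
           if (fabs(z[j]) <= ztx) break;                        /* stopping criterion */
           memset(x, 0, n * sizeof *x);                         /* x := e_j           */
           x[j] = 1.0;
       }
       free(x); free(y); free(z);
       return gamma;
   }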
Our goal is to avoid the slower Algorithm 2 by using exception handling to deal with these
ill-conditioned or badly scaled matrices. Our algorithm only calls the BLAS routine STRSV,
and has the property that overflow occurs only if the matrix is extremely ill-conditioned. In
this case, which we detect using the sticky exception flags, we can immediately terminate
with a well-deserved estimate RCOND=0. Merely replacing the triangular solver used in
Algorithm 3 by STRSV and inserting tests for overflow does not work, as can be seen by
choosing a moderately ill-conditioned matrix of norm near the underflow threshold; this
will cause overflow while solving the triangular systems, even though A is only moderately
ill-conditioned. Therefore, we have modified the logic of the algorithm as follows. Comments
indicate the guaranteed lower bound on k_1(A) if an exception leads to early termination.
Algorithm 4: This algorithm estimates the reciprocal of k_1(A);
RCOND is the estimated reciprocal of the condition number.
   Call exceptionreset()
   Choose x with ||x||_1 = 1
   Repeat
      solve L * w = x by calling STRSV
      if (except()) then RCOND := 0; quit      /* k_1(A) is provably huge; see Lemma 1 */
      solve U * y = w by calling STRSV
      if (except()) then RCOND := 0; quit
      compute the scale factor α from y, and set y := y · α
      if (except()) then RCOND := 0; quit
      ξ := sign(y)
      solve U^T * w = ξ by calling STRSV
      if (except()) then RCOND := 0; quit
      solve L^T * z = w by calling STRSV
      if (except()) then RCOND := 0; quit
      if (||z||_∞ ≤ z^T * x) then
         compute RCOND from ||y||_1, α and ||A||_1; quit
      else x := e_j , where |z_j| = ||z||_∞
      endif
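The essential building block of Algorithm 4 is a guarded triangular solve: call the fast BLAS
routine, then consult the sticky flags and, if anything was raised, declare the matrix
numerically singular. A hedged C rendering of that building block (with tri_solve() standing
in for STRSV) is:

   #include <fenv.h>

   #pragma STDC FENV_ACCESS ON

   /* Hypothetical unscaled triangular solve, e.g. a call to the BLAS STRSV;
    * overwrites x with the solution of the lower or upper triangular system. */
   void tri_solve(int n, const double *T, int upper, double *x);

   /* One guarded solve of Algorithm 4: returns 0 and sets *rcond = 0 if an
    * exception was raised (justified by Lemma 1 below), and 1 otherwise. */
   int guarded_solve(int n, const double *T, int upper, double *x, double *rcond)
   {
       tri_solve(n, T, upper, x);
       if (fetestexcept(FE_OVERFLOW | FE_DIVBYZERO | FE_INVALID)) {
           *rcond = 0.0;
           return 0;
       }
       return 1;
   }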
The behavior of Algorithm 4 is described by the following:
Lemma 1. If Algorithm 4 stops early because of an exception, then the "true rounded"
reciprocal of the condition number satisfies RCOND ≤ max(n³, ρ)/OV, where OV is the
overflow threshold and ρ is the pivot growth factor.
Proof: In the algorithm there are seven places where exceptions may occur. We will analyze
them one by one. Note that x is chosen such that ||x||_1 = 1.
   1. An exception occurs when computing L^{-1}·x.
   2. An exception occurs when computing U^{-1}·(L^{-1}·x), so RCOND ≤ 1/OV.
   3. An exception occurs when computing α, so RCOND ≤ 1/OV.
   4. An exception occurs when computing U^{-1} applied to the α-scaled vector.
   5. An exception occurs when computing U^{-T}·ξ.
   6. An exception occurs when computing U^{-T} applied to the α-scaled ξ.
   7. An exception occurs when computing L^{-T}·(U^{-T}·(α·ξ)); therefore RCOND ≤ n²/OV.
In cases 1, 4, 5 and 6 a similar argument applies: the pivot growth ρ bounds the norms of
L^{-1} and U^{-1} in terms of k_1(A), and the intermediate vectors such as ξ and α·y have
1-norm at most n, so each case loses at most a factor of max(n³, ρ).
Combining the above seven cases, we have shown that RCOND ≤ max(n³, ρ)/OV when an
exception occurs.
In practice, any RCOND < ε signals a system so ill-conditioned as to make the error
bound in (1) as large as the solution itself or larger; this means the computed solution has
no digits guaranteed correct. Since max(n³, ρ)/OV < ε unless n or ρ is enormous (both
of which also mean the error bound in (1) is enormous), there is no loss of information in
stopping early with RCOND = 0.
Algorithm 4 and Lemma 1 are applicable to any linear systems for which we do partial
or complete pivoting during Gaussian elimination, for example, LAPACK routines SGECON,
SGBCON and STRCON (see Section 4.2 for the descriptions of these routines), and their complex
counterparts.
For symmetric positive definite matrices, where no pivoting is necessary, an analogous
algorithm (e.g., SPOCON) was developed and analyzed; it is omitted from this paper for
lack of space.
   Machine      Routine    Matrix dimension n:  100    200    300    400    500
   DEC 5000     SGECON                          2.00   1.52   1.46   1.44   1.43
                SPOCON                          2.83   1.92   1.71   1.55   1.52
                STRCON                          3.33   1.78   1.60   1.54   1.52
   Sun 4/260    SGBCON                          2.00   2.20   2.11   2.77   2.71
                SGECON                          3.02   2.14   1.88   1.63   1.62
                SPOCON                          5.00   2.56   2.27   2.22   2.17
                STRCON                          1.50   2.00   2.30   2.17   2.18
   DEC Alpha    SGBCON
                SGECON                          2.66   2.01   1.85   1.78   1.66
                SPOCON                          2.25   2.46   2.52   2.42   2.35
                STRCON                          3.00   2.33   2.28   2.18   2.07
   CRAY-C90     SGECON                          4.21   3.48   3.05   2.76   2.55

   Table 3: Speedups on DEC 5000/Sun 4/260/DEC Alpha/CRAY-C90. No exceptions nor
   scaling occur. sbw stands for semi-bandwidth.
4.2 Numerical Results
To compare the efficiencies of Algorithms 3 and 4, we rewrote several condition estimation
routines in LAPACK using Algorithm 4, including SGECON for general dense matrices,
SPOCON for dense symmetric positive definite matrices, SGBCON for general band matrices,
and STRCON for triangular matrices, all in IEEE single precision. To compare the speed and
the robustness of Algorithms 3 and 4, we generated various input matrices yielding unexceptional
executions with or without invocation of the scalings inside Algorithm 2, as well as
exceptional executions. The unexceptional inputs tell us the speedup in the most common
case, and on machines like the CRAY measure the performance lost for lack of any exception
handling.
First, we ran Algorithms 3 and 4 on a suite of well-conditioned random matrices where
no exceptions occur, and no scaling is necessary in Algorithm 2. This is by far the most
common case in practice. The experiments were carried out on a DECstation 5000, a SUN
4/260, a DEC Alpha, and a single processor CRAY-C90. The performance results are presented
in Table 3. The numbers in the table are the ratios of the time spent by the old
routines using Algorithm 3 to the time spent by the new routines using Algorithm
4. These ratios measure the speedups attained via exception handling. The estimated condition
numbers output by the two algorithms are always the same. For dense matrices or
matrices with large bandwidth, as matrix dimension n increases, the time to service cache
misses constitutes a larger portion of the execution time, resulting in decreased speedups.
When we ran SGBCON with matrices of small bandwidth, such that the whole matrix fit in
the cache, we observed even better speedups.
Second, we compared Algorithms 3 and 4 on several intentionally ill-scaled linear systems
for which some of the scalings inside Algorithm 2 have to be invoked, but whose condition
numbers are still finite. For SGECON alone with matrices of sizes 100 to 500, we obtained
speedups from 1.62 to 3.33 on the DECstation 5000, and from 1.89 to 2.67 on the DEC
Alpha.
Third, to study the behavior and performance of the two algorithms when exceptions
do occur, we generated a suite of ill-conditioned matrices that cause all possible exceptional
paths in Algorithm 4 to be executed. Both Algorithms 3 and 4 consistently deliver zero as the
reciprocal condition number. For Algorithm 4, inside the triangular solve, the computation
involves such numbers as NaN and ±∞. Indeed, after an overflow produces ±∞, the most
common situation is to subtract two infinities shortly thereafter, resulting in a NaN which
then propagates through all succeeding operations. In other words, if there is one exceptional
operation, the most common situation is to have a long succession of operations with NaNs.
We compared the performance of the "fast" and "slow" DECstation 5000 on a set of such
problems. 3 Recall that the fast DECstation does NaN arithmetic (incorrectly) at the same
speed as with conventional arguments, whereas the slow DECstation computes correctly but
80 times slower. Table 4 gives the speeds for both DECstations. The slow DEC 5000 goes
3 The test matrices together with the software can be obtained via anonymous ftp [7].
                                Example 1   Example 2   Example 3
   "fast" DEC 5000  speedup     2.15        2.32        2.00
   "slow" DEC 5000  slowdown    11.67       13.49       9.00
   SPARCstation 10  speedup

   Table 4: The speeds of some examples with exceptions. Matrix dimensions are 500.
9 to 13 times slower than the fast DEC 5000. On some other examples, where only infinities
but no NaNs occurred, the speedups ranged from 3.5 to 6.0 on both machines. Table 4 also
shows the speedups observed on a SPARCstation 10, where both ∞ and NaN arithmetic are
implemented correctly and with full speed.
5 Eigenvector Computation
We now consider another opportunity to exploit IEEE exception handling. The problem
is to compute eigenvectors of general complex matrices. This example, in contrast to earlier
ones, requires recomputing the answer slowly after an exception occurs, as in our paradigm.
Let A be an n-by-n complex matrix. If non-zero vectors v and u, and a scalar λ, satisfy
A·v = λ·v and u^H·A = λ·u^H (where u^H denotes the conjugate transpose of u), then λ is
called an eigenvalue, and v and u are called the right and left eigenvectors associated with
the eigenvalue λ, respectively.
In LAPACK, the task of computing eigenvalues and the associated eigenvectors is performed
in the following stages (as in the routine CGEEV):
1. A is reduced to upper Hessenberg form H, which is zero below the first subdiagonal.
   The reduction can be written A = Q·H·Q^H, where Q is unitary.
2. H is reduced to Schur form T. The reduction can be written H = S·T·S^H, where T is
   an upper triangular matrix and S is unitary [12]. The eigenvalues are on the diagonal
   of T.
3. CTREVC computes the eigenvectors of T. Let V be the matrix whose columns are the
   right eigenvectors of T. Then S·V are the right eigenvectors of H, and Q·S·V are
   the right eigenvectors of A. Similarly, we can compute the left eigenvectors of A from
   those of T.
Let us first examine the important stage of calculating the eigenvectors of an upper
triangular matrix T. The eigenvalues of T are t_11, ..., t_nn. To find a right eigenvector v
associated with the eigenvalue t_ii, we need to solve the homogeneous equation
(T − t_ii·I)·v = 0, which can be partitioned by writing v = (v(1 : i−1), 1, 0, ..., 0)^T.
By backward substitution, we have v satisfying the equation

   (T(1 : i−1, 1 : i−1) − t_ii·I) · v(1 : i−1) = −T(1 : i−1, i) .     (3)

Therefore, the problem is reduced to solving an upper triangular system (3) of dimension
i−1. To find all the n eigenvectors we need to solve triangular system (3) for i = 1, ..., n. Since
any scalar multiple of v is also an eigenvector of T, we always
expect to obtain an answer by scaling the solution vector no matter how ill-conditioned
or badly scaled the triangular system (3) is.
solve routine CLATRS instead of calling the triangular solver CTRSV in the BLAS. CLATRS
is a complex counterpart of SLATRS as discussed in Section 3, using Algorithm 2. In most
common cases, however, the scaling unnecessarily introduces overhead. We reimplemented
the part of CTREVC containing the triangular solve. When solving each equation (3), we first
call CTRSV and test the exception flags. If exceptions occur, then we go back to call CLATRS.
To study the efficiency of the modified CTREVC, we ran the old code and our new one
on random upper triangular matrices of various sizes. We observed the speedups of from
1.49 to 1.65 on the DECstation 5000, and from 1.38 to 1.46 on the Sun 4/260. In the case
of overflow, each triangular solve is invoked twice, first using CTRSV yet throwing away the
solutions, and second using CLATRS. Since CTRSV is about twice as fast as CLATRS (see Section
3), the performance loss is no more than 50% when a (rare) exception occurs.
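In outline, the modified triangular-solve step of CTREVC follows the same try-then-recover
pattern; the sketch below is hypothetical C pseudocode for one eigenvector (the actual routine
works in complex single precision). Both solver stand-ins are assumed to build the right-hand
side −T(1 : i−1, i) internally and to write the eigenvector components into v, so the careful
solver can simply be rerun if the fast one raises a flag.

   #include <fenv.h>

   #pragma STDC FENV_ACCESS ON

   /* Hypothetical stand-ins for CTRSV (fast, unscaled) and CLATRS (careful,
    * scaled): solve the shifted (i-1)-by-(i-1) system (3) for eigenvector i. */
   void fast_shifted_solve(int i, const float *T, int ldt, float *v);
   void scaled_shifted_solve(int i, const float *T, int ldt, float *v, float *scale);

   void one_eigenvector(int i, const float *T, int ldt, float *v)
   {
       float scale = 1.0f;
       feclearexcept(FE_OVERFLOW | FE_DIVBYZERO | FE_INVALID);
       fast_shifted_solve(i, T, ldt, v);                     /* common case            */
       if (fetestexcept(FE_OVERFLOW | FE_DIVBYZERO | FE_INVALID))
           scaled_shifted_solve(i, T, ldt, v, &scale);       /* rare case: redo safely */
   }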
To see how the performance attained from CTREVC alone affects the performance of the
whole process of computing eigenvectors of general complex matrices, we timed CTREVC in
the context of CGEEV. It turns out that CTREVC amounts to about 20% of the total execution
time of CGEEV. Therefore, we expect that the speed of the whole process can be increased by
about 8%.
6 Symmetric Tridiagonal Eigenvalue Problem
In this section we consider the problem of finding the eigenvalues of a real symmetric tridiagonal
matrix.
Let T be an n-by-n symmetric tridiagonal matrix with diagonal entries d(1), ..., d(n) and
off-diagonal entries b(1), ..., b(n−1).
The bisection method is an accurate, inexpensive, and parallelizable procedure for calculating
the eigenvalues of T. The inner loop of this method is based on an integer-valued
function count(σ) of a real argument σ defined as

   Function count(σ)
      C := 0; t := 1
      for i := 1 to n
         t := d(i) − σ − b(i−1)²/t        /* with b(0) = 0 */
         if (t ≤ 0) then C := C + 1
      endfor
      return C;
Thus, the count() function counts the number of non-positive t's in the above iteration.
It is known that this number equals the number of eigenvalues less than or equal to σ [12].
Suppose we wish to find the eigenvalues in (a, b] using bisection. First, we evaluate count(a)
and count(b). The difference of the two count values is the number of eigenvalues in (a, b].
Now let σ = (a + b)/2, the midpoint of the interval, and evaluate count(σ). From this
we can deduce how many eigenvalues lie in each of the intervals (a, σ] and (σ, b]. Then we
recursively bisect each of the two intervals. An interval containing a single eigenvalue is
bisected repeatedly until the eigenvalue has been determined with sufficient precision.
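A hedged C sketch of this refinement loop for a single eigenvalue follows; count() is assumed
to have the interface of the routines discussed next (diagonal d, off-diagonals e), and the
k-th smallest eigenvalue is assumed to lie in the starting interval.

   /* Hypothetical inner-loop routine: number of eigenvalues of T <= sigma. */
   int count(int n, const double *d, const double *e, double sigma);

   /* Refine the k-th smallest eigenvalue, known to lie in (lo, hi], by bisection. */
   double bisect_one(int n, const double *d, const double *e,
                     double lo, double hi, int k, double tol)
   {
       while (hi - lo > tol) {
           double mid = 0.5 * (lo + hi);
           if (count(n, d, e, mid) >= k)
               hi = mid;        /* at least k eigenvalues <= mid: target in (lo, mid] */
           else
               lo = mid;        /* fewer than k: target lies in (mid, hi]             */
       }
       return 0.5 * (lo + hi);
   }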
The division involved in the recurrence for t may cause division by zero or overflow. Again,
to prevent the occurrence of these exceptions, a more careful scheme was first developed by
W. Kahan [16] and later used in the LAPACK SSTEBZ routine [2]. There, the algorithm first
computes a threshold pivmin, the smallest number by which any b(i)² can be divided without
overflow. Inside the inner loop the divisor t is compared with pivmin and changed to
−pivmin if it is too close to zero. Algorithm 6 gives the details of the method.

   Algorithm 6: Computes the number of eigenvalues less than or equal to σ.
      C := 0; t := 1
      for i := 1 to n
         if (|t| ≤ pivmin) then t := −pivmin
         t := d(i) − σ − b(i−1)²/t        /* with b(0) = 0 */
         if (t ≤ 0) then C := C + 1
      endfor
      return C;
On machines with IEEE floating-point arithmetic, we may rewrite the count function as
in Algorithm 7, even though b(i−1)²/t may overflow. Whenever this occurs, the default values
±∞ are used to continue the computation.

   Algorithm 7: Computes the number of eigenvalues less than or equal to σ.
      C := 0; t := 1
      for i := 1 to n
         t := d(i) − σ − b(i−1)²/t        /* with b(0) = 0; may evaluate to ±∞ */
         C := C + signbit(t)
      endfor
      return C;
Signbit(x) extracts the sign bit of a floating-point number x represented in the IEEE
format. The returned value is either 0 or 1 depending on whether x is positive or negative.
The signbit(x) can be computed quickly by logically shifting the sign bit of x to the rightmost
bit position of a register, leaving zeros in all the other bits.
The correctness of Algorithm 7 relies on the fact that arithmetic with ±∞ and signed
zeros ±0 obeys certain rules defined by the IEEE standard. The merit of Algorithm 7 is
that it replaces the two explicit conditional branches with a single straight-line statement,
and this makes better use of floating-point pipelines. The only hardware requirement for
Algorithm 7 to attain good speed is the speed of infinity arithmetic.
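As a hedged C sketch (not the LAPACK code), Algorithm 7 can be written with the standard
signbit() macro; it relies on the IEEE default values ±∞ and ±0 to carry the recurrence past
any exceptional division.

   #include <math.h>

   /* Number of eigenvalues of T less than or equal to sigma, IEEE style:
    * d[0..n-1] is the diagonal, e[0..n-2] the off-diagonal of T.  No pivmin
    * test: if t overflows to +-infinity or becomes +-0, the next division
    * simply uses the IEEE default result. */
   int count_ieee(int n, const double *d, const double *e, double sigma)
   {
       int c = 0;
       double t = 1.0;                              /* so the i = 0 term has no e^2/t part */
       for (int i = 0; i < n; i++) {
           double sq = (i == 0) ? 0.0 : e[i-1] * e[i-1];
           t = d[i] - sigma - sq / t;
           c += (signbit(t) != 0);                  /* no pivmin test, no explicit branch on t */
       }
       return c;
   }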
On a SPARCstation IPX, where infinity arithmetic is as fast as conventional arithmetic,
we measured the speed of Algorithm 6 and 7 for various matrices of sizes ranging from 100
to 1000. Algorithm 7 achieved speedups ranging from 1.20 to 1.30 over Algorithm 6. We
also compared the two bisection algorithms, using Algorithm 6 and 7 as the inner loops
respectively, to find all eigenvalues. We were able to get speedups ranging from 1.14 to 1.24.
This is due to the dominant role of the count() function in the bisection algorithm.
We also did comparisons on a distributed memory multiprocessor - the Thinking Machines
CM-5 [19]. Our CM-5 configuration contains 64 33-Mhz SPARC 2 processors, interconnected
by a fat-tree network. Each processing node has 8 Mbytes of local memory.
Coordination and synchronization among processing nodes are achieved via explicitly pass-
   Matrix size   Ts(LAPACK bisect)/Tp(LAPACK bisect)   Ts(IEEE bisect)/Tp(IEEE bisect)   Tp(LAPACK bisect)/Tp(IEEE bisect)

   Table 5: Speedups of the parallel bisection algorithms on the CM-5.
ing messages. The floating-point arithmetic on the CM-5 conforms to IEEE standard, and
infinity arithmetic is as fast as conventional arithmetic. Inderjit Dhillon et al.[8] have designed
a parallel bisection algorithm on the CM-5, where the whole spectrum is divided into
subintervals, and each processing node is responsible for finding the eigenvalues within
one subinterval. A dynamic load balancing scheme is incorporated when eigenvalues are not
evenly distributed.
In Table 5 we report three types of speedup numbers from our experiments. Ts(algo)
stands for the running time of the algo on a single node of the CM-5; Tp(algo) stands for the
running time of the algo on the 64-node CM-5. Thus, Ts(algo)/Tp(algo) represents the parallel
speedup of the algo. The two algorithms we compared are: LAPACK bisect, which used Algorithm
6 to get the count value, and IEEE bisect, which used Algorithm 7 to get the count value.
The last column demonstrates the speedup of the parallel IEEE bisect against the parallel
LAPACK bisect. We see that the speedups attained by using IEEE arithmetic range from 1.18
to 1.47.
7 Singular Value Decomposition
In this section we discuss using exception handling to speed up the computation of the singular
value decomposition of a matrix [12]. This is an important linear algebra computation,
with many applications. It consists of two phases. Phase 1, reduction to bidiagonal form (i.e.
nonzero on the diagonal and first superdiagonal only), costs O(n 3 ) operations, where n is
the matrix dimension. Phase 2, computing singular values of a bidiagonal matrix, costs just
Phase 2 can take much longer than Phase 1 on machines
like the CRAY-C90 because Phase 1 is readily vectorized (or parallelized), whereas Phase 2
consists of nonlinear recurrences which run at scalar speeds. For example, when
Phase 1 does about 2:1 floating point operations at a speed of 594 Megaflops, for a time
of 0:036 seconds, whereas Phase 2 does 1:6 floating point operations at a speed of just
6.9 Megaflops, for a time of 0:23 seconds. Phase 2 takes longer than Phase 1 up to n - 1200.
So in this section we will discuss using exception handling to accelerate Phase 2. Phase 2 is
implemented by a slight modification of LAPACK subroutine SBDSQR [2], which we describe
below.
It suffices to consider one of the main loops in SBDSQR; the others are similar. In addition
to 12 multiplies and 4 additions, there are two uses of an operation we will call
rot(f, g). It takes f and g as inputs, and returns r = (f² + g²)^{1/2}, cs = f/r
and sn = g/r as outputs. This simple formula is subject to failure or inaccuracy when either f
or g is greater than the square root of the overflow threshold, or when both are only a little
larger than the square root of the underflow threshold. Therefore, SBDSQR currently does a
series of tests and scalings to avoid this failure. (The difference between SBDSQR and our
routine is that our routine in-lines rot and uses a slightly different and more accurate scaling
algorithm.) Almost all the time, these tests indicate no scaling is needed, but it is impossible
to determine this without running through the whole loop. We compare the performance
of two versions of SBDSQR, one which tests and scales as above, and another, which we will
call SBDSQR UNSAFE, which just uses simple single line formulas for r, cs and sn. We tested
these two routines on a CRAY Y-MP (EL/2-256). The speedups depend somewhat on the
matrix. The test bidiagonal matrices A had entries of a simple parameterized form,
with dimensions ranging from 50 to 1000. For one choice of the parameter, most speedups
were between 1.28 and 1.39, with half over 1.35; for the other, most speedups were between
1.21 and 1.31.
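For reference, the unguarded kernel is essentially the following C fragment (a sketch of what
SBDSQR UNSAFE relies on, not the LAPACK source); f*f + g*g overflows once |f| or |g| exceeds
the square root of the overflow threshold, and loses accuracy near the underflow threshold,
which is exactly what the scaled code protects against.

   #include <math.h>

   /* Naive Givens-rotation kernel: r = sqrt(f^2 + g^2), cs = f/r, sn = g/r.
    * If f = g = 0 the divisions produce NaNs; with exception handling this is
    * detected afterwards via the sticky flags rather than guarded in the loop. */
   void rot(double f, double g, double *r, double *cs, double *sn)
   {
       *r  = sqrt(f * f + g * g);
       *cs = f / *r;
       *sn = g / *r;
   }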
8 Lessons for System Architects
The most important lesson is that well-designed exception handling permits the most common
cases, where no exceptions occur, to be implemented much more quickly. This alone
makes exception handling worth implementing well.
A trickier question is how fast exception handling must be implemented. There are three
speeds at issue: the speed of NaN and infinity arithmetic, the speed of testing sticky flags,
and the speed of trap handling. In principle, there is no reason NaN and infinity arithmetic
should not be as fast as conventional arithmetic. The examples in section 4.2 showed that
a slowdown in NaN arithmetic by a factor of 80 from conventional arithmetic slows down
condition estimation by a factor of up to 30.
Since exceptions are reasonably rare, these slowdowns generally affect only the worst case
behavior of the algorithm. Depending on the application, this may or may not be important.
If the worst case is important, it is important that system designers provide some method of
fast exception handling, either NaN and infinity arithmetic, testing the sticky flags, or trap
handling. Making all three very slow will force users to code to avoid all exceptions in the
first place, the original unpleasant situation exception handling was designed to avoid.
It is particularly important to have fast exception handling in a parallel computer for the
following reason. The running time of a parallel algorithm is the running time of the slowest
processor, and the probability of an exception occurring on at least one processor can be p
times as great as on one processor, where p is the number of processors.
9 Future Work
The design paradigm for numerical algorithms proposed in this paper is quite general and can
be used to develop other numerical algorithms. These include rewriting the BLAS routine
SNRM2 to compute the Euclidean norm of a vector, and the LAPACK routine SHSEIN (which
now calls SLATRS) to compute the eigenvectors of a real upper Hessenberg matrix.
In complex division, gradual underflow instead of flush to zero can guarantee a more
accurate result, see [5]. This requires fast arithmetic with denormalized numbers.
Floating point parallel prefix is a useful operation for various linear algebra problems.
Its robust implementation with the protection against over/underflow requires fine grained
detection and handling of exceptions [6].
Our final comment concerns the tradeoff between the speed of NaN and infinity arithmetic
and the granularity of testing for exceptions. Our current approach uses a very large
granularity, since we test for exceptions only after a complete call to STRSV. For this approach
to be fast, NaN and infinity arithmetic must be fast. On the other hand, a very fine grained
approach would test for exceptions inside the inner loop, and so avoid doing useless NaN
and infinity arithmetic. However, such frequent testing is clearly too expensive. A compromise
would be to test for exceptions after one or several complete iterations of the inner
loop in STRSV. This would require re-implementing STRSV. This medium grained approach
is less sensitive to the speed of NaN and infinity arithmetic. The effect of granularity on
performance is worth exploration.
The software described in this report is available from the authors [7].
Acknowledgements
The authors wish to thank W. Kahan for his detailed criticism and comments. We also wish
to thank Inderjit Dhillon for providing us the performance results of the bisection algorithms
running on the CM-5.
--R
Robust triangular solves for use in condition estimation.
Underflow and the reliability of numerical software.
Specifications for robust parallel prefix operations.
Faster numerical algorithms via exception handling.
A parallel algorithm for the symmetric tridiagonal eigenproblem and its implementation on the CM-5
A set of Level 3 Basic Linear Algebra Subprograms.
An Extended Set of FORTRAN Basic Linear Algebra Subroutines.
Sites (editor). Alpha Architecture Reference Manual.
Matrix Computations.
Condition estimators.
Algorithm 674: FORTRAN codes for estimating the one-norm of a real or complex matrix
SPARC International Inc.
Accurate eigenvalues of a symmetric tridiagonal matrix.
MIPS Risc Architecture.
Basic Linear Algebra Subprograms for Fortran usage.
The Connection Machine CM-5 Technical Summary
--TR
An extended set of FORTRAN basic linear algebra subprograms
MIPS RISC architecture
A set of level 3 basic linear algebra subprograms
The SPARC architecture manual
Alpha architecture reference manual
LAPACK''s user''s guide
FORTRAN codes for estimating the one-norm of a real or complex matrix, with applications to condition estimation
Matrix computations (3rd ed.)
Basic Linear Algebra Subprograms for Fortran Usage
Accurate eigenvalues of a symmetric tri-diagonal matrix
Working Note No. 36: Robust Triangular Solves for Use in Condition Estimation
--CTR
Technical report for floating-point exception handling, ACM SIGPLAN Fortran Forum, v.15 n.3, p.1-28, Dec. 1996
David Bindel , James Demmel , William Kahan , Osni Marques, On computing givens rotations reliably and efficiently, ACM Transactions on Mathematical Software (TOMS), v.28 n.2, p.206-238, June 2002
Inderjit S. Dhillon , Beresford N. Parlett , Christof Vmel, The design and implementation of the MRRR algorithm, ACM Transactions on Mathematical Software (TOMS), v.32 n.4, p.533-560, December 2006
Xiaoye S. Li , James W. Demmel , David H. Bailey , Greg Henry , Yozo Hida , Jimmy Iskandar , William Kahan , Suh Y. Kang , Anil Kapur , Michael C. Martin , Brandon J. Thompson , Teresa Tung , Daniel J. Yoo, Design, implementation and testing of extended and mixed precision BLAS, ACM Transactions on Mathematical Software (TOMS), v.28 n.2, p.152-205, June 2002
John R. Hauser, Handling floating-point exceptions in numeric programs, ACM Transactions on Programming Languages and Systems (TOPLAS), v.18 n.2, p.139-174, March 1996 | exception handling;parallel algorithms;numerical linear algebra;digital arithmetic;parallel machines;fast numerical algorithms;LAPACK library;IEEE floating point arithmetic;convergence of numerical methods;eigenvalues and eigenfunctions;linear algebra;unstable algorithms |
626961 | Distributed Reset. | A reset subsystem is designed that can be embedded in an arbitrary distributed system in order to allow the system processes to reset the system when necessary. Our design is layered, and comprises three main components: a leader election, a spanning tree construction, and a diffusing computation. Each of these components is self-stabilizing in the following sense: if the coordination between the up-processes in the system is ever lost (due to failures or repairs of processes and channels), then each component eventually reaches a state where coordination is regained. This capability makes our reset subsystem very robust: it can tolerate fail-stop failures and repairs of processes and channels, even when a reset is in progress. | Introduction
We describe in this paper how to "augment" an arbitrary distributed system so that each of
its processes can reset the system to a predefined global state, when deemed necessary. The
augmentation does not introduce new processes or new communication channels to the system.
It merely introduces additional modules to the existing processes. The added modules, communicating
with one another over existing channels, comprise what we call the reset subsystem.
Ideally, resetting a distributed system to a given global state implies resuming the execution
of the system starting from the given state. With this characterization, however, each reset
of a distributed system can be achieved only by a "global freeze" of the system. This seems
rather limiting and, in many applications, stricter than needed. Therefore, we adopt the
following, more lax characterization: resetting a distributed system to a given global state
implies resuming the execution of the system from a global state that is reachable, by some
system computation, from the given global state.
There are many occasions in which it is desirable for some processes in a distributed system to
initiate resets; for example,
ffl Reconfiguration: When the system is reconfigured, for instance, by adding processes
or channels to it, some process in the system can be signaled to initiate a reset of the
system to an appropriate "initial state".
ffl Mode Change : The system can be designed to execute in different modes or phases.
If this is the case, then changing the current mode of execution can be achieved by
resetting the system to an appropriate global state of the next mode.
ffl Coordination Loss: When a process observes unexpected behavior from other pro-
cesses, it recognizes that the coordination between the processes in the system has
been lost. In such a situation, coordination can be regained by a reset.
ffl Periodic Maintenance: The system can be designed such that a designated process
periodically initiates a reset as a precaution, in case the current global state of the
system has deviated from the global system invariant.
As processes and channels can fail while a reset is in progress, we are led to designing a reset
subsystem that is fault-tolerant. In particular, our reset subsystem can tolerate the loss of
coordination between different processes in the system (which may be caused by transient failures
or memory loss) and, also, can tolerate the fail-stop failures and subsequent repairs of processes
and channels.
The ability to regain coordination when lost is achieved by making the reset subsystem self-stabilizing
in the following sense. If the reset subsystem is at a global state in which coordination
between processes is lost, then the reset subsystem is guaranteed to reach, within a finite number
of steps, a global state in which coordination is restored. Once coordination is restored, it is
maintained unless a later failure causes it to be lost again, and the cycle repeats [6, 7]. The
ability to tolerate fail-stop failures and subsequent repairs of processes and channels is achieved
by allowing each process and channel in the system to be either "up" or "down" and by ensuring
that the ability of the system to self-stabilize is not affected by which processes or channels are
"up" or "down".
Our reset subsystem is designed in a simple, modular, and layered manner. The design consists
of three major components: a leader election, a spanning tree construction, and a diffusing
computation. Each of these components is self-stabilizing, can tolerate process and channel
failures and repairs, and admits bounded-space implementations. These features distinguish our
design of these components from earlier designs [1, 9, 10] and redress the following comment
made by Lamport and Lynch [15, page 1193] : "A self-stabilizing algorithm [that translates a
distributed system designed for a fixed but arbitrary network into one that works for a changing
network] using a finite number of identifiers would be quite useful, but we know of no such
algorithm."
The rest of the paper is organized as follows. In the next section, we describe the layered
structure of our reset subsystem. This structure consists of three layers: a (spanning) tree layer,
a wave layer, and an application layer. These three layers are discussed in Sections 3, 4, and 5
respectively. In Section 6, we discuss implementation issues; in particular, we exhibit bounded,
low atomicity implementations of each layer. Finally, we make concluding remarks in Section 7.
2 Layers of the Reset Subsystem
We make the following assumptions concerning the distributed system to be augmented by our
reset subsystem. The system consists of K processes named P:1, ..., P:K. At each instant, each
process is either up or down, and there is a binary, irreflexive, and symmetric relation defined
over the up processes. We call this relation the adjacency relation. Only adjacent processes can
communicate with one another.
The set of up processes and the adjacency relation defined over them can change with time. For
simplicity, however, we assume that the adjacency relation never partitions the up processes in
the system. (Clearly, if partitioning does occur, then any reset request initiated in a partition
will result in resetting the state of only that partition.)
Each process P:i in the system consists of two modules adj:i and appl:i; see Figure 0a. The task
of module adj:i is to maintain a set N:i of the indices of all up processes adjacent to P:i. (Details
of the implementation of adj:i are outside the scope of this paper. One possible implementation,
however, is for each adj:i to communicate periodically with the adj:j module of every potentially
adjacent process P:j and to employ a timeout to determine whether the index j of process P:j
should be in N:i.) The task of the other module, appl:i, is application specific. To perform its
task, appl:i can communicate with module appl:j, j 6= i, only if j is in N:i. One state of appl:i is
distinguished. Together, the distinguished states of each appl:i module comprise the predefined
global "reset" state of the distributed system.
Augmenting such a distributed system with a reset subsystem consists of adding two modules,
tree:i and wave:i, to each process P:i in the system; see Figure 0b. The tree:i modules of
adjacent processes communicate in order to maintain a rooted spanning tree that involves all
the up processes in the system. (Henceforth, the two terms "process" and "up process" are
used interchangeably.) The constructed tree is maintained to be consistent with the current
adjacency relation of the system; thus, any changes in the adjacency relation are eventually
followed by corresponding changes in the spanning tree. Each tree:i module keeps the index
of its "father" process, f:i, in the maintained tree; this information is used by the local wave:i
module in executing a distributed reset.
A distributed reset is executed by the wave:i modules in three phases or "waves". In the first
phase, some appl:i requests a system reset from its local wave:i which forwards the request to
the root of the spanning tree. If other reset requests are made at other processes, then these
requests are also forwarded to the root process. It is convenient to think of all these requests
as forming one "request wave". In the second phase, module wave:i in the root process receives
the request wave, resets the state of its local appl:i to the state of appl:i in the predefined global
state, and initiates a "reset wave". The reset wave travels towards the leaves of the spanning
tree and causes the wave:j module of each encountered process to reset the state of its local
appl:j to the state of appl:j in the predefined global state. When the reset wave reaches a leaf
process it is reflected as a "completion wave" that travels back to the root process; this wave
comprises the third phase. Finally, when the completion wave reaches the root, the reset is
complete, and a new request wave can be started whenever some appl:i deems necessary.
From the above description, it follows that the states of different appl:i modules are reset at
different times within the same distributed reset. This can cause a problem if some appl:i whose
state has been reset communicates with an adjacent appl:j whose state has not yet been reset. To
avoid this problem, we provide a session number sn:i in each appl:i. In a global state, where no
distributed reset is in progress, all session numbers are equal. Each reset of the state of appl:i
is accompanied by incrementing sn:i. We then require that no two adjacent appl:i modules
communicate unless they have equal session numbers. This requirement suffices to ensure our
characterization of a distributed reset; that is, a distributed reset to a given global state yields
a global state that is reachable, by some system computation, from the given global state.
The tree:i modules in different processes constitute the tree layer discussed in Section 3. The
wave:i modules constitute the wave layer discussed in Section 4. The appl:i modules constitute
the application layer discussed in Section 5.
2.1 Programming Notation
The program of each process has the form
begin ⟨module⟩ [] ... [] ⟨module⟩ end
Each module is of the form
module ⟨module name⟩
   var ⟨variable declarations⟩ ;
   parameter ⟨parameter declarations⟩ ;
begin
   ⟨action⟩ [] ... [] ⟨action⟩
end
Thus, a module of a process is defined by a set of variables, a set of parameters, and a set
of actions. Each of these is defined in some detail next.
Each variable in the variable set of a module can be updated (i.e., written) only by modules in
that process; each variable can be read only by modules in that process and modules in adjacent
processes.
Each parameter in the parameter set of a module ranges over a finite domain. The function of
a parameter is to define a set of actions as one parameterized action. For example, let j be a
parameter whose value is 0, 1 or 2; then the parameterized action act:j in the action set of a
module abbreviates the following set of three actions: act:0 [] act:1 [] act:2.
Each action in the action set of a module has the form
⟨guard⟩ → ⟨assignment statement⟩
A guard is a boolean expression over the variables and parameters in the module, and the
variables of one adjacent process. An assignment statement updates one or more variables in
the module.
The operational semantics for a system of such processes is as follows. A state of the system is
defined by a value for every variable in the processes of the system. An action whose guard is
true at some state of the system is said to be enabled at that state. A computation of the system
is a maximal, fair sequence of system steps: in each step, some action that is enabled at the
current state is executed, thereby yielding the next state in the computation. The maximality
of a computation implies that no computation is a proper prefix of another computation. The
fairness of a computation means that each continuously enabled action is eventually executed
in the computation [12].
3 The Tree Layer
The task of the tree layer is to continually maintain a rooted spanning tree even when there are
changes in the set of up processes or in the adjacency relation. In the solution described below,
we accommodate such changes by ensuring that the tree layer performs its task irrespective of
which state it starts from.
In our solution, the rooted spanning tree is represented by a "father" relation between the
processes. Each tree:i module maintains a variable f:i whose value denotes the index of the
current father of process P:i. Since the layer can start in any state, the initial graph of the father
relation (induced by the initial values of the f:i variables) may be arbitrary. In particular, the
initial graph may be a forest of rooted trees or it may contain cycles.
For the case where the initial graph is a forest of rooted trees, all trees are collapsed into a
single tree by giving precedence to the tree whose root has the highest index. This is achieved
as follows. Each tree:i module maintains a variable root:i whose value denotes the index of the
current root process of P:i. If root:i is lower than root:j for some adjacent process P:j then
tree:i sets root:i to root:j and makes P:j the father of P:i.
For the case where the initial graph has cycles, each cycle is detected and removed by using
a bound on the length of the path from each process to its root process in the spanning tree.
This is achieved as follows. Each tree:i module maintains a variable d:i whose value denotes the
length of a shortest path from P:i to P:(root:i). To detect a cycle, tree:i sets d:i to be d:(f:i)+1
whenever f:i ∈ N:i and d:i < K. The net effect of executing this action is that if a cycle exists
then the d:i value of each process P:i in the cycle gets "bumped up" repeatedly. Eventually,
some d:i exceeds K − 1, where K is the maximum possible number of up processes. Since the
length of each path in the adjacency graph is bounded by K − 1, the cycle is detected. To remove
a cycle that it has detected, tree:i makes P:i its own father.
Because of our assumption that the initial state is arbitrary, we need to consider all other cases
where the initial values of f:i, root:i and d:i are inconsistent. One possibility is that these initial
values are "locally" inconsistent, that is, one or more of the following hold: root:i ! i,
root:i 6= i or d:i 6= 0, or f:i is not i nor in N:i. In this case, tree:i makes itself locally consistent
by setting root:i to i, f:i to i and d:i to 0.
Another possibility is that root:i may be inconsistent with respect to the state of the father
process of P:i, that is, root:i ≠ root:(f:i) may hold. In this last case, tree:i corrects the value of
root:i to that of root:(f:i).
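The tree:i actions just described can be restated in the following C-style sketch. It is a reconstruction from the prose, not the module text of Figure 1; in particular, the update of d:i on father adoption and the resetting of root:i and d:i on cycle removal are assumptions of ours.

#include <stdbool.h>

/* N_i[] lists the neighbours of P:i, deg_i its degree, and K is the maximum
   possible number of up processes.  root[], f[] and d[] hold the per-process
   variables root:j, f:j and d:j.                                             */
void tree_step(int i, int K, const int *N_i, int deg_i,
               int *root, int *f, int *d)
{
    bool father_ok = (f[i] == i);
    for (int k = 0; k < deg_i; k++)
        if (f[i] == N_i[k]) father_ok = true;

    /* Local consistency: root:i, f:i, d:i := i, i, 0. */
    if (root[i] < i || (f[i] == i && (root[i] != i || d[i] != 0)) || !father_ok) {
        root[i] = i; f[i] = i; d[i] = 0; return;
    }
    /* Adopt a neighbour whose root has a higher index. */
    for (int k = 0; k < deg_i; k++) {
        int j = N_i[k];
        if (root[i] < root[j]) {
            root[i] = root[j]; f[i] = j;
            d[i] = d[j] + 1;              /* d:i update on adoption: our assumption */
            return;
        }
    }
    /* Cycle detection: bump d:i towards K ...                               */
    if (f[i] != i && d[i] < K && d[i] != d[f[i]] + 1) { d[i] = d[f[i]] + 1; return; }
    /* ... and remove a detected cycle by becoming one's own father.         */
    if (d[i] >= K) {
        f[i] = i;
        root[i] = i; d[i] = 0;            /* resetting root:i and d:i: our assumption */
        return;
    }
    /* Copy the father's root value if the two disagree.                     */
    if (f[i] != i && root[i] != root[f[i]]) root[i] = root[f[i]];
}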
Module tree:i is given in Figure 1.
Figure 1: Module tree:i
We show in Appendix A that starting at any state (i.e., one that could have been reached by
any number of changes in the set of up processes and the adjacency relation over them), the
tree layer is guaranteed to eventually reach a state satisfying the state predicate G.
At each state in G, for each process P:i, root:i equals the highest index among all up processes,
f:i is such that some shortest path between process P:i and the root process P:(root:i) passes
through the father process P:(f:i), and d:i equals the length of this path. Therefore, a rooted
spanning tree exists. Also, note that each state in G is a fixed-point; i.e., once the tree:i modules
reach a state in G, no action in any of the tree:i modules is enabled.
Our proof employs the "convergence stair" method [13]: we exhibit a finite sequence of state
predicates H:0, H:1, ..., H:K such that
(iii) For each l such that 0 ≤ l ≤ K:
H:l is closed under system execution; that is, once H:l holds in an arbitrary system computation, it continues to hold subsequently.
(iv) For each l such that 0 ≤ l < K:
Upon starting at an arbitrary state in H:l, the system is guaranteed to reach a state in H:(l+1).
We also show that convergence to a state in G occurs within O(K rounds, where
deg is the maximum degree of nodes in the adjacency graph, dia is the diameter of the adjacency
graph and, informally speaking, a round is a minimal sequence of system steps wherein each
process attempts to execute at least one action.
We conclude this section with the remark that the problems of leader election and spanning tree
construction have received considerable attention in the literature (see, for example, [15, 16, 17]).
Most of these algorithms are based on the assumption that all processes start execution in some
designated initial state. This restriction is too severe for our purposes, and we have lifted it
by designing the tree layer to be self-stabilizing; i.e., insensitive to the initial state. We note
that a self-stabilizing spanning tree algorithm has been recently described in [9]. However, the
algorithm in [9] is based on the simplifying assumption that, at all times, there exists a special
process which knows that it is the root. We have not made this assumption: if a root process
fails, then the remaining up processes elect a new root.
4 The Wave Layer
As outlined in Section 2, the task of the wave layer is to perform a diffusing computation [10]
in which each appl:i module resets its state. The diffusing computation uses the spanning tree
maintained by the tree layer, and consists of three phases. In the first phase, some appl:i
module requests its local wave:i to initiate a global reset; the request is propagated by the wave
modules along the spanning tree path from process P:i to the tree root P:j. In the second phase,
module wave:j in the tree root resets the state of its local appl:j and initiates a reset wave that
propagates along the tree towards the leaves; whenever the reset wave reaches a process P:k the
local wave:k module resets the state of its local appl:k . In the third phase, after the reset wave
reaches the tree leaves it is reflected as a completion wave that is propagated along the tree to
the root; the diffusing computation is complete when the completion wave reaches the root.
To record its current phase, each wave:i module maintains a variable st:i that has three possible
values: normal, initiate, and reset. When normal, module wave:i has propagated the
completion wave of the last diffusing computation and is waiting for the request wave of the next
diffusing computation. When initiate, module wave:i has propagated the request wave of
the ongoing diffusing computation and is waiting for its reset wave. When reset, module
wave:i has propagated the reset wave of the ongoing diffusing computation and is waiting for
its completion wave.
Variable st:i is updated as follows. To initiate a new diffusing computation, the local appl:i
module updates st:i from normal to initiate. To propagate a request wave, wave:i likewise
updates st:i from normal to initiate. To propagate a reset wave, wave:i updates st:i from a
value other than reset to reset. Lastly, to propagate a completion wave, wave:i updates st:i
from reset to normal.
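The legal st:i transitions described in this paragraph can be collected into a small C sketch (illustrative only; the accompanying session-number update is discussed below):

typedef enum { NORMAL, INITIATE, RESET } WaveStatus;   /* possible values of st:i */

/* Request wave: normal -> initiate (performed by appl:i or by wave:i).       */
WaveStatus request_wave(WaveStatus st)    { return st == NORMAL ? INITIATE : st; }
/* Reset wave: any value other than reset -> reset.                           */
WaveStatus reset_wave(WaveStatus st)      { return st != RESET  ? RESET    : st; }
/* Completion wave: reset -> normal.                                          */
WaveStatus completion_wave(WaveStatus st) { return st == RESET  ? NORMAL   : st; }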
It is possible for some appl:i to update st:i from normal to initiate before the completion wave
of the last diffusing computation reaches the root process; thus, multiple diffusing computations
can be in progress simultaneously. To distinguish between successive diffusing computations,
each wave:i module maintains an integer variable sn:i denoting the current session number of
wave:i.
Recall that the operation of the wave layer is subject to changes in the set of up processes and
in the adjacency relation. As before, we accommodate such changes by ensuring that the layer
performs its task irrespective of which state it starts from. In our solution, starting from an
arbitrary state, the wave layer is guaranteed to reach a steady state where all the sn:i values
are equal and each st:i has a value other than reset. In particular, if no diffusing computation
is in progress in a steady state, then all the sn:i values are equal and each st:i has the value
normal. Furthermore, if a diffusing computation is initiated in a steady state where all sn:i
have the value m, then it is guaranteed to terminate in a steady state where all sn:i have the value m+1.
This is achieved by requiring that, during the reset wave, each wave:i module increments sn:i
when it resets the state of the local appl:i module.
Module wave:i is given in Figure 2. The module has five actions. Action (1) propagates the
request wave from a process to its father in the spanning tree. When the request wave reaches
the root process, action (2) starts a reset wave at the root process. Action (3) propagates the
reset wave from the father of a process to the process. Action (4) propagates the completion
wave from the children of a process to the process.
The above four actions of all wave:i modules collectively perform a correct diffusing computation
provided that the wave layer is in a steady state. The steady states of the wave layer are those
where each wave:i satisfies a state predicate Gd:i.
Action (5) ensures the self-stabilization of the wave layer to steady states.
Figure 2: Module wave:i
We show in Appendix B that starting at any state, the wave layer is guaranteed to eventually
reach a steady state satisfying (∀i : sn:i = n ∧ st:i ≠ reset) for some integer n. Our proof of this
consists of showing that
(i) Starting at an arbitrary state, the system is guaranteed to reach a state in GD.
(ii) The state predicate GD is closed under system execution.
(iii) Starting at an arbitrary state in GD where the root process P:k has sn:k = n, the system
is guaranteed to reach a state in (∀i : sn:i = n ∧ st:i ≠ reset).
We also show that each diffusing computation that is initiated at a state in GD will terminate;
i.e., starting from a state in GD in which a diffusing computation has been initiated, the system
is guaranteed to reach a state in GD in which that diffusing computation has completed.
Lastly, we show that convergence to a GD state occurs within O(ht) rounds and that diffusing
computations terminate within O(min(ht × dg, n)) rounds, where ht is the height of the spanning
tree constructed by the tree layer, dg is the maximum degree of nodes in the spanning tree, and
n is the number of up processes in the system.
5 The Application Layer
The application layer in a given distributed system is composed of the appl:i modules as shown
in Figure 0. In this section, we discuss two modifications to the application layer by which our
reset subsystem can be correctly added to the given distributed system.
The first modification is to augment each appl:i module with actions that allow it to request
a distributed reset; as discussed in Section 4, these actions set the variable st:i to initiate and
are enabled when st:i = normal holds and a distributed reset is necessary. The situations in
which distributed resets are necessary are application specific. One such situation, however, is
when the global state of the application layer is erroneous. Erroneous states may be detected
by periodically executing a self-stabilizing global state detection algorithm [8, 14]. Towards this
end, we note that it is possible to implement a self-stabilizing global state detection with minor
modifications to our reset subsystem.
The second modification is to restrict the actions of each appl:i module so that the application
layer can continue its execution while a distributed reset is in progress. (Recall that one objective
of our design is to avoid freezing the execution of the given distributed system while performing
resets.) This modification is based on the observation that, during a distributed reset, appl:i
modules can continue executing their actions as long as there is no communication between
modules one of which has been reset and another which has not been reset. Equivalently, if
appl:i modules communicate they should have the same session number (sn) values. Therefore,
we require that the expression "sn:i = sn:j" be conjoined to the guard of each appl:i action that
accesses a variable updated by appl:j, i ≠ j. The net effect of this modification is that upon
completion of a distributed reset the collective state of all appl:i modules is reachable by some
application layer execution from the given collective state that the appl:i modules are reset to.
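As a concrete illustration (the variables x:j and y:i below are hypothetical, not taken from the text), an appl:i action that reads a variable written by an adjacent appl:j would be restricted as follows:

/* Hypothetical appl:i action that reads x:j, written by an adjacent appl:j.
   Before the modification its guard is just "x:j > 0"; afterwards the
   session-number test is conjoined to it.                                   */
int x_j, y_i;          /* application variables (names are ours)             */
int sn_i, sn_j;        /* session numbers maintained by the wave modules     */

void appl_action(void)
{
    if (x_j > 0 && sn_i == sn_j)   /* guard with "sn:i = sn:j" conjoined     */
        y_i = y_i + x_j;
}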
6 Implementation Issues
In this section, we discuss two issues related to implementations of modules tree:i and wave:i.
First, we show that the state-space of each process can be bounded and, second, we show how
to refine the "high" atomicity actions employed thus far into "low" atomicity ones.
6.1 Bounded-Space Construction
Each tree:i module, i ∈ {1, ..., K}, updates three variables, each requiring log K bits. In contrast,
module wave:i uses an unbounded session number variable. A bounded construction is also
possible: wave:i can be transformed by making sn:i of type {0, ..., N−1}, where N is an arbitrary
natural constant greater than 1, and replacing the increment operation in the first action with
an increment operation in modulo N arithmetic. Thus, each wave:i module can be implemented
using a constant number of bits. The proof of correctness of the transformed module is similar
to the proof presented in Appendix B, and is left to the reader.
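A minimal sketch of the bounded session number, assuming the same modulo-N increment:

#define N 4                      /* any natural constant greater than 1           */
unsigned sn_i;                   /* session number of wave:i, now of type {0..N-1} */

void increment_session(void)
{
    sn_i = (sn_i + 1) % N;       /* increment in modulo N arithmetic              */
}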
6.2 Transformation to Read/Write Atomicity
Thus far, our design of the tree:i and wave:i modules has not taken into account any atomicity
constraints. Some actions in these modules are of high atomicity; these actions read variables
updated by other processes and instantaneously write other variables. We now refine our design
so as to implement these modules using low atomicity actions only.
Consider the following transformation. For each variable x:i updated by process P:i, introduce
a local variable ~x:j:i in each process P:j, j ≠ i, that reads x:i. Replace every occurrence of x:i in
the actions of P:j with ~x:j:i, and add the read action ~x:j:i := x:i to the actions of P:j. Based
on this transformation, read/write atomicity modules for tree:i and wave:i are presented next,
along with proofs of correctness.
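The transformation can be pictured on a generic high-atomicity action; the variables x:i, y:j and the guard below are illustrative, not taken from the modules above:

/* A high-atomicity action of P:j that reads x:i (owned by P:i) and updates
   y:j in one step:     x:i > y:j  -->  y:j := x:i
   After the transformation, P:j keeps a local copy x_copy (the ~x:j:i of the
   text) and the action splits into a read action plus a purely local action. */
int x_i;                 /* variable updated by process P:i                   */
int y_j;                 /* variable updated by process P:j                   */
static int x_copy;       /* P:j's local copy ~x:j:i                           */

void read_action(void)  { x_copy = x_i; }                  /* ~x:j:i := x:i   */
void local_action(void) { if (x_copy > y_j) y_j = x_copy; }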
The code for read/write atomicity implementation of module tree:i is shown in Figure 3.
We show in Appendix C that starting at any state, the tree layer is guaranteed to eventually
reach a state satisfying the state predicate G.
The structure of our proof is identical to the proof presented in Appendix A; we exhibit a finite
sequence of state predicates H:0, H:1, ..., H:K such that
(iii) For each l such that 0 ≤ l ≤ K:
H:l is closed under system execution; that is, once H:l holds in an arbitrary system computation, it continues to hold subsequently.
(iv) For each l such that 0 ≤ l < K:
Upon starting at an arbitrary state in H:l, the system is guaranteed to reach a state in H:(l+1).
Figure 3: Implementation of tree:i using Read/Write Atomicity
The code for read/write atomicity implementation of module wave:i is shown in Figure 4.
We show in Appendix D that starting at any state, the wave layer is guaranteed to eventually
reach a state satisfying (∀i : sn:i = n ∧ st:i ≠ reset) for some integer n. The structure of our
proof is identical to the proof presented in Appendix B; we exhibit a state predicate GD such
that
(i) Starting at an arbitrary state, the system is guaranteed to reach a state in GD.
(ii) GD is closed under system execution.
Figure 4: Implementation of wave:i using Read/Write Atomicity
(iii) Starting at an arbitrary state in GD where the root process P:k has sn:k = n, the system
is guaranteed to reach a state in (∀i : sn:i = n ∧ st:i ≠ reset).
We also show that each diffusing computation that is initiated at a state in GD will terminate;
i.e., upon starting from a state in GD in which a diffusing computation has been initiated, the
system is guaranteed to reach a state in GD in which that diffusing computation has completed.
We note that a similar proof exists for a bounded construction of the low atomicity wave:i
module in which sn:i is replaced with a variable of type {0, ..., N−1}, where N is an arbitrary
natural constant greater than 3, and the increment operation in the first action is replaced with
an increment operation in modulo N arithmetic.
Conclusions
We have presented algorithms that enable processes in arbitrary distributed systems to perform
distributed resets. These algorithms are novel in that they are self-stabilizing and can tolerate
the fail-stop failures and repairs of arbitrary processes and channels even when a distributed
reset is in progress.
Two comments are in order regarding our choice of fair, nondeterministic interleaving semantics.
First, the requirement of fairness with respect to continuously enabled actions is not necessary,
but is used only in simplifying the proofs of correctness. Second, our design remains correct
even if we weaken the interleaving requirement as follows: in each step, an arbitrary subset of
the processes each execute some enabled action, as long as no two executed actions access the
same shared variable [2, 3, 5].
A comment is also in order regarding our methodology for achieving fault-tolerance in distributed
systems. One way to achieve system fault-tolerance is to ensure that when faults occur
the system continues to satisfy its input-output relation. Systems designed thus "mask" the
effects of faults, and are hence said to be masking fault-tolerant. An alternative way to achieve
system fault-tolerance is to ensure that when faults occur the input-output relation of the system
is violated only temporarily. In other words, the system is guaranteed to eventually resume
satisfying its input-output relation. In this paper, it is the latter "nonmasking" approach to
fault-tolerance that we have adopted.
We give three reasons for sometimes preferring nonmasking fault-tolerance to masking fault-tolerance
when designing distributed systems. First, in some distributed systems, masking
fault-tolerance may be impossible to achieve. For example, there is no masking fault-tolerant
distributed system whose up processes communicate asynchronously and reach consensus on a
binary value even when one or more of the processes fail [11]. Second, even if it is possible to
implement masking fault-tolerance, the cost of doing so may be prohibitive. For example, the
amount of redundancy or synchronization required may be infeasible to implement. And third,
requiring masking fault-tolerance may be more strict than is desirable. For example, a call-back
telephone service that eventually establishes a connection may be quite useful even if it does not
mask its initial failure to establish a connection.
Of course, to be of practical use, nonmasking fault-tolerant distributed systems should be designed
so that the time taken to resume satisfying the desired input-output relation, when faults
occur, is within acceptable bounds.
We envisage several applications of distributed resets where their nonmasking fault-tolerance
is useful. We are currently implementing distributed operating system programs based on distributed
resets including, for example, system programs for multiprocess resynchronization. We
are also currently studying reconfiguration protocols for high speed networks.
We note that distributed resets provide a systematic method for making arbitrary distributed
systems self-stabilizing (cf. [14]): application layer modules can be augmented to perform a self-stabilizing
global state detection periodically, and to request a distributed reset upon detecting
erroneous global states thereby making the distributed system self-stabilizing. Distributed resets
can also be used to transform an arbitrary self-stabilizing program into an equivalent self-stabilizing
program implemented in read/write atomicity.
There are several issues that need to be further investigated. One such issue is the transformation
of our read/write atomicity programs (cf. Figures 3 and 4) into message passing programs, and
the analysis of the resulting programs. Note that for message passing programs the predefined
global reset state includes, in addition to the states of each appl:i module, the state of each
channel in the system. Therefore, in addition to resetting the local state of the module appl:i,
each wave:i module has to send some - possibly empty - sequence of application messages,
each tagged with the new session number, on every outgoing channel of P:i.
Another issue for further study is the design of an efficient mechanism for maintaining a timely
and consistent state of neighboring process indices. A third issue is the security problems
involved in allowing any application process to reset the distributed system, and the protection
mechanism necessary to enforce that application processes interact with the reset subsystem
in the desired manner. Finally, observing that self-stabilizing systems are only one type of
nonmasking fault-tolerant systems, it is desirable to investigate alternative nonmasking fault-tolerant
solutions to the distributed reset problem that are less robust than our self-stabilizing
solutions but are even more efficient.
Acknowledgements
We thank George Varghese for helpful discussions on this paper and the anonymous referees for
their suggestions.
--R
"Applying static network protocols to dynamic networks"
"A foundation of fault-tolerant computing,"
"Convergence of iteration systems"
"Distributed reset (extended abstract)"
"On relaxing interleaving assumptions"
"Token systems that self-stabilize"
"Uniform self-stabilizing rings"
"Distributed snapshots: Determining global states of distributed systems"
"Self-stabilization of dynamic systems assuming only read/write atomicity"
"Termination detection for diffusing computa- tions"
"Impossibility of distributed consensus with one faulty process"
"Stabilizing communication protocols"
"Self-stabilizing extensions for message-passing systems"
"Distributed computing: models and methods"
"An algorithm for distributed computation of a spanning tree in an extended LAN"
"A correctness proof of a topology information maintenance protocol for a distributed computer network"
diffusing computation;robustness;fault tolerant computing;process failures;self-stabilizing components;system recovery;distributed reset subsystem;distributed processing;channel failures;process repairs;up-process coordination;leader election;embedded system;layered design;channel repairs;fault tolerance;spanning tree construction;reliability;fail-stop failure tolerance
626974
Memory Latency Effects in Decoupled Architectures.
Decoupled computer architectures partition the memory access and execute functions in a computer program and achieve high performance by exploiting the fine-grain parallelism between the two. These architectures make use of an access processor to perform the data fetch ahead of demand by the execute process and hence are often less sensitive to memory access delays than conventional architectures. Past performance studies of decoupled computers used memory systems that are interleaved or pipelined, and in those studies, latency effects were partially hidden due to interleaving. A detailed simulation study of the latency effects in decoupled computers is undertaken in this paper. Decoupled architecture performance is compared to single processors with caches. The memory latency sensitivity of cache based uniprocessors and decoupled systems is studied. Simulations are performed to determine the significance of data caches in a decoupled architecture. It is observed that decoupled architectures can reduce the peak memory bandwidth requirement, but not the total bandwidth, whereas data caches can reduce the total bandwidth by capturing locality. It may be concluded that despite their capability to partially mask the effects of memory latency, decoupled architectures still need a data cache.
Introduction
The execution of a computer program involves two interrelated processes - accessing
data elements from memory and the true computations. A large amount of parallelism
exists between these two tasks. The concurrent execution of these tasks can result in high
performance and this is the principle of decoupled access execute architectures. Many
early high performance computers such as the IBM 360/370, CDC 6600, and CRAY-1
incorporated some techniques to exploit the parallelism between access and execute tasks,
but several architectures in the past few years like the MAP-200 [4], DAE [19] [24], PIPE
[8] [23], SMA [15], SDP [17], FOM [2], ZS-1 [20] [25] [26] and WM [28] partition access
operations and computation functions in the program more distinctly.
Almost all of the aforementioned architectures consist of two processors, one to perform
address calculations and load and store operations, and the other to operate on the
data and produce results. The two processors are often referred to as the access processor
and execute processor respectively. The essence of the job is the part performed by
the execute processor, but the execute processor cannot perform its role without the information
furnished by the access processor. In decoupled architectures FIFO buffers or
queues are provided between the access and execute processors to maximize the overlap
and independence of the two processors. The independence of the two processes allows the
access processor to fetch data elements ahead of demand by the execute processor. This
phenomenon has been called slip by previous researchers [9] [24] [27]. Slip actually refers
to the extent of decoupling between the access and execute processes.
Memory latency effects in decoupled architectures have been studied in [24], [8], [26],
[28] etc. Smith et al [24] compared the performance of a pipelined decoupled architecture
to that of the scalar CRAY-1. This particular research effort also included studies on the
effect of memory latency by varying the access time of main memory. Since a comparison
is made with the CRAY-1, an interleaved memory configuration, (as in the CRAY-1), is
assumed. Memory bank conflicts are also ignored. Here memory access time was varied
from 5 cycles to 32 cycles, but since the memory system was 16 way interleaved, some of
the memory latency effects may have been hidden. The first twelve Lawrence Livermore
Loops were used as the simulation workload.
Goodman et al evaluated the performance of PIPE, a VLSI decoupled architecture
using the Lawrence Livermore Loops [8] [9]. This work includes the effects of memory
speed on performance by conducting studies on both a fast and a slow memory module.
The fast memory had an access time of one clock cycle and the slow one had an access
time of four clock cycles and was four way interleaved. The systems included a memory
controller. With the overheads incurred in the controller, the effective delay seen in the
case of the fast memory is 3 cycles and it is 6 cycles for the slow memory [9]. They also
ignore memory module conflicts in their study.
Smith, Abraham and Davidson [26] reported results on the effects of memory latency
and fine grain parallelism on the Astronautics ZS-1, when connected to a pipelined memory
system. They observed that once the slip limits were high, the computer system is almost
insensitive to memory latency. This study is also based on the Lawrence Livermore Loops.
Wulf [28] presented preliminary results on the performance of the WM architecture.
Though specific memory latency studies are not performed, it is mentioned that data
FIFOs would partially mask the effects of memory latency. They also comment that the
probable Achilles heel of the architecture would be to build a memory system capable of
supplying the bandwidth that the processor can absorb.
1.1 Objective
The performance study presented in this paper has three goals: first to compare the
performance of decoupled computers to uniprocessor systems with caches, second to study
memory latency sensitivity of decoupled computers with a non-interleaved non-pipelined
memory, and third to determine the significance of a data cache in a decoupled architecture.
The performance of decoupled computers is compared against single processors without
caches in the previous studies. Conceptually, in decoupled systems, we are using a
processor to do the access task and eliminate the delay for data operands. In cache based
systems, we utilize a cache to capture locality and eliminate the long delays required to
access main memory. It would be interesting to find out how the two schemes compare to
one another. In this paper, we investigate whether an access processor hides the memory
latency of a computer system better than a data cache. Hence we perform a comparison
of the performance in decoupled mode against uniprocessors with caches.
Another objective of this paper is to study the sensitivity of a decoupled architecture
to memory latency, when the memory is noninterleaved and nonpipelined. Previous
studies report that decoupled computers have less sensitivity to memory path length than
conventional systems [24] [8]. It is also reported that the speed up over a single processor
configuration becomes greater as the memory becomes slower. But as mentioned before,
these studies used interleaved or pipelined memory systems. We want to study whether
system behavior will exhibit a similar pattern even when the memory is noninterleaved.
We are not suggesting that decoupled architectures do not need interleaved memories. Our
aim is only to isolate the memory latency insensitivity contributed by 'decoupling'.
A third objective of this paper is to study the significance of data caches in decoupled
architectures when the memory system is not interleaved. Decoupled architectures
generally do not have data caches. In the architectures described above, only the ZS-1
has a data cache. The reduced sensitivity to memory access time observed in the previous
studies tend to suggest that the improvement possible with a data cache would be minor.
Interleaving and memory pipelining obscure some of the memory latency and we suspect
that, this is one reason for the insensitivity to longer memory cycle times in the studies
in the past. We investigate whether a data cache would result in any serious performance
advantage in decoupled architectures with noninterleaved memory units.
1.2 Overview
In section 2, we briefly describe the decoupled architecture that is used to conduct
our performance studies. The description illustrates that this architecture is very similar
to other decoupled architectures and hence results obtained from this system should apply
at least qualitatively to other decoupled architectures as well. In section 3, we analyze
the mechanism by which decoupled computers alleviate the memory latency problem. In
section 4, we explain our simulation tools and describe the benchmarks used. In section 5,
we detail the simulation results obtained. We compare cache based uniprocessors to decoupled
systems, study the sensitivity of decoupled systems to memory latency, examine
the limitations of caches and decoupled architectures in eliminating memory latency, and
study the significance of a data cache in a decoupled architecture. We conclude the paper
in section 6.
2. The DEAP Architecture
In this section, we describe the Decoupled Execute Access Processor (DEAP), which
was used to conduct our simulation studies. The DEAP architecture uses two processors,
an Execute Processor (EP) and an Access Processor (AP) as shown in Fig.1. As in standard
(decoupled access execute) architectures [19], the EP executes program instructions
and performs all the required data computations. All accesses of the data memory are
done by the AP. The two processors communicate through architectural queues. To avoid
memory contentions between the EP and the AP, each processor is equipped with a separate
memory unit. Access related instructions are fetched by the AP and computation
instructions are fetched by the EP into their respective instruction caches. Data operands
needed by the EP are fetched from the data memory unit by the AP and passed to the
EP via architectural queues. Results of EP computations are deposited into the queues
and transferred to the data memory by the AP. The EP has no access path to the data
memory. AP instructions are stored in an instruction memory private to the AP to avoid
bus contentions with the data fetch. Since pin limitations of the AP in a VLSI implementation
might pose a problem, the AP instructions could also be stored in the global
data memory unit. This will not degrade the performance in problems with loops since
the AP has an instruction cache. Our performance results in this paper are based on the
DEAP architecture with AP instructions stored in the same memory unit as the data. The
architecture exists only in simulation form at this time.
The AP and the EP see entirely different instruction streams. The code is split at
compile time in such a way that all computation instructions are put in the EP section
of the code and the address calculation and access instructions are put in the AP section.
At execute time two different instruction streams enter the two processors from the two
respective instruction memories. The AP makes address calculations and performs data
memory accesses and furnishes the EP with the data it requires for the computations.
The EP is thus free to perform its data computations while the AP is waiting for access
requests to be satisfied by the memory.
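As a concrete picture of this split, the following single-threaded C sketch (illustrative only: it is not DEAP code and does not model its instruction set or timing) separates a saxpy-like loop into an access part that performs every load and store and an execute part that only computes, with bounded FIFO queues in between. The 20-entry queues match the queue length used in the simulations reported below.

#include <stddef.h>

#define QSIZE 20                               /* queue length used in the simulations */

typedef struct { double buf[QSIZE]; int head, tail, count; } Queue;

static void   put(Queue *q, double v) { q->buf[q->tail] = v; q->tail = (q->tail + 1) % QSIZE; q->count++; }
static double get(Queue *q)           { double v = q->buf[q->head]; q->head = (q->head + 1) % QSIZE; q->count--; return v; }

/* y[i] := a*x[i] + y[i], split into an access part and an execute part. */
void decoupled_saxpy(double a, const double *x, double *y, size_t n)
{
    Queue readq = {{0}, 0, 0, 0}, writeq = {{0}, 0, 0, 0};
    size_t load_i = 0, store_i = 0;

    while (store_i < n) {
        /* AP: issue loads as far ahead as the read queue allows (the slip). */
        while (load_i < n && readq.count <= QSIZE - 2) {
            put(&readq, x[load_i]);
            put(&readq, y[load_i]);
            load_i++;
        }
        /* EP: consume one pair of operands, produce one result.            */
        double xi = get(&readq);
        double yi = get(&readq);
        put(&writeq, a * xi + yi);
        /* AP: drain completed results from the write queue back to memory. */
        while (writeq.count > 0)
            y[store_i++] = get(&writeq);
    }
}

In the actual architecture the two streams run concurrently and a full queue simply stalls the AP; the sequential driver above only mimics that interleaving, with the bounded run-ahead of the load loop playing the role of slip.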
The two instruction streams run at their own speed, with the queues absorbing excess
data. Unlike PIPE, there are only two queues in DEAP, the Read Queue and the Write
Queue (See Fig. 1). The EP reads data from the Read Queue and stores its results into
the Write Queue. It may be noticed that the queues are named with reference to the
EP and not the AP. Wherever coordination is required, the AP and EP use tokens to
ensure correct operation. While accessing multiple-element data structures, the AP uses
End-of-Data (EOD) tokens to separate batches of data such as an array, a column of a
matrix, or the entire matrix. The AP can use the Read Queue to pass this EOD token to
the EP. The EP uses these tokens to control its iterations. There is no potential problem
due to sending such control information intermingled with data in the queues. Explicit
instructions are used to deposit tokens and also to access them. Correctness of the program
can be ensured as long as the EOD token is deposited only after the completion of any issued
load instruction. In addition to keeping the system simple, a two queue implementation is
as efficient as a system with separate data and control information queues. Since we use
tokens to denote End-of-Data, it would necessitate using one more bit, called the EOD bit
for each element of the queue.
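A queue entry carrying this extra bit could be declared as follows (a sketch; the field names are illustrative):

#include <stdint.h>

typedef struct {
    int32_t data;          /* 32-bit data word                        */
    uint8_t eod;           /* EOD bit: 1 marks an End-of-Data token   */
} QueueEntry;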
In problems with static data bounds, the AP has prior knowledge about when to
insert the EOD tokens. In problems with dynamic bounds such as the C library string
copy (strcpy) and string compare (strcmp) where the end of the string is not known until it
is actually encountered, the scenario is different. To exploit any advantage of a decoupled
architecture in such an application, the AP can fetch the string elements one by one
without checking for the delimiter. The EP can perform the comparison to find the end
of the string. Due to slip, the AP would have fetched beyond the end of the string by
the time the EP finds the end of string. In such a case, the EP sends an EOD token to
the AP whereupon the AP stops its fetch operation and also flushes any unnecessary data
it has fetched. The EP can meanwhile continue operations on its own, but it should not
read any data until the AP has flushed its queue. This can be accomplished if the AP
sends a token to the EP to mark that it has flushed the queue, and the EP waits for the
token before reading data operands from the queue again. This creates a certain amount
of busy waiting but it seems to be inevitable, if any parallelism is to be exploited in such
a problem.
3. Memory Latency and the Access Processor
The Access Processor (AP) eliminates latency of main memory by performing the
access process ahead of demand by the execute processor. When a program is looping, the
access instruction stream often precedes the execution stream by at least one iteration.
The Read Queue buffers the prefetched data. The execute processor does not have to wait
to obtain data for its computations. The access processor would have already loaded the
data into the Read Queue. Hence if sufficient slip is present, the EP obtains its operands
with no delay. Similarly, in the case of memory writes, the EP can put the data into the
Write Queue and proceed. The AP would store it back into the main memory later. It is
assumed that the queue, like registers, can be accessed in a single cycle. Hence, if the AP
can run ahead and load the queue with the data before the EP reaches the section of the
code with a reference to the data, and if the Write Queue is long enough for the EP to
dump its result and proceed, the EP never experiences a delay in accessing main memory.
The AP thus hides memory latency from the EP.
The length of the queues is a critical factor in a decoupled architecture since often the
distance the access process can run ahead is limited only by the queues. A slow memory
access path can be compensated for by using longer queues [24]. With sufficiently long
queues, a high average transfer rate can be achieved even with a memory of relatively
low peak transfer rate capacity. The queues enable the system to utilize given memory
bandwidth more efficiently. The memory latency insensitivity that can be achieved by a
decoupled architecture depends on the slip that can be attained.
In the case of very fast memories, the address calculation instructions consume a
significant fraction of the total execution time and there is a limit on the slip that can be
attained. With slower memories, the environment permits more slip. But memory poses a
bottleneck and slip is limited again as the memory speed decreases beyond a certain point,
which can be considered as an optimum memory speed. This optimum is not a constant,
but will depend on the characteristics of the program under execution, such as the fraction
of load and store instructions, the relative AP and EP workload etc.
4. Simulation Methodology
Performance simulators are developed for a uniprocessor, actually the MIPS R2000
[11] and a DEAP system with access and execute processors having the MIPS instruction
set. For DEAP, modifications necessary in the R2000 for queue operations and the EP-AP
interface are assumed. The simulators are written in C and run on a DEC 3100 station
under the UNIX operating system. The results from the MIPS R2000 system form the
baseline for comparison.
The AP and EP instructions are pipelined through fetch-decode-ALU-writeback stages
in a fashion similar to that in MIPS R2000 [11], but with hardware interlocks. The R2000
tries to achieve single cycle execution for its instructions by delayed load and delayed
branch techniques. The uniprocessor that forms the baseline system for comparison as
well as the EP in the decoupled implementation incorporate these techniques in identical
manners so that the effect of decoupling could be easily identified. For the decoupled mode,
the length of the queues was kept at 20 in our simulations. It has been reported in [24],
[29] and [8] that short queues are sufficient to achieve performance close to the maximum
available with unbounded queues. We also performed some experiments on queue lengths.
Our observations confirm past results, except that loop unrolling slightly increases the
queue length requirements.
We performed simulations with some of the Lawrence Livermore Loops (LLLs), two
signal processing algorithms convolution and correlation, the saxpy routine from the LINPACK
benchmark and the C library string copy strcpy. The LLLs were chosen since they
were used in research in the past [24] [8] [9] and since they are important to a wide range
of scientific algorithms and applications. The signal processing algorithms used contain
addressing patterns other than sequential [10]. They exhibit good locality properties also.
Since our studies involve cache based systems, these algorithms are very relevant for our
studies. The saxpy routine from the linpack benchmark is run with both loop increments
equal to one and also with unequal increments. The strcpy routine operates on character
data which is 8 bits wide, while the other benchmarks use data that is 32 bits wide.
The benchmarks are compiled with the DEC 3100 workstation compilers with the
highest level of optimization. The assembly output from the compiler is machine coded
in the required trace format for the uniprocessor cases. For the decoupled version, the
assembly code is manually split into the access and execute streams and coded into the
required trace format. We could not include results from large benchmarks such as the
SPEC due to the difficulty in generating the two streams of traces without a compiler for
the decoupled system. The DEC compiler performs loop unrolling for most of the loops
we used. If the uniprocessor trace is unrolled, the same degree of unrolling is retained in
the traces for the decoupled system also.
5. Discussion of Simulation Results
In this section we present our simulation results. We first compare the performance
of decoupled systems with single processors with caches. In order to relate our work to
previous research, we also simulate uniprocessors without caches. During simulation runs,
we vary the main memory access time to find out the sensitivity of system performance
to memory path length. The simulation results are analyzed to identify limitations of
decoupled architectures and cache based systems. Finally, the relevance of a data cache in
a decoupled architecture is studied by simulating a system with a cache in the AP. At this
stage a comprehensive comparison of uniprocessors with and without caches and decoupled
systems with and without caches is presented. Similar studies with handcoded traces were
presented in [13].
5.1 Comparison with Uniprocessors with Caches
The performance of the decoupled system in comparison with uniprocessors with and
without caches is shown in Fig. 2. Memory cycle time is denoted by t_mm and is expressed
in terms of processor cycles. Since block sizes and cache sizes can have a critical effect
on the performance of cache based systems, we performed simulations with a few different
cache parameters. But in order not to clutter the figures with too much data, we plot
only one typical organization of the cache with a size of 1024 bytes. This cache size might
seem to be unrealistically small, but it should be remembered that the benchmarks used
are small too. The cache is assumed to have a single cycle access time. In Fig. 2, it
can be seen that in the 5 cycle case, the decoupled architecture executes the code faster
than the uniprocessor with and without cache, while at 15 cycles the uniprocessor with
cache performs better than the decoupled system for several of the traces. The behavior
is similar for all traces except for strcpy. For strcpy, even at 5 cycles, the uniprocessor
with cache is superior to the decoupled system. The strcpy trace is unique in that the data
element size is smaller than the bus-width and that multiple data elements can be fetched
in one access.
5.2 Sensitivity of Performance to Memory Access Time
The variation in execution time with increase in memory cycle time is illustrated in
Fig. 3 and Table I. Three types of behavior can be observed in Fig. 3 for memory latency
sensitivity. The first graph corresponds to the strcpy trace, which is able to make use of the
cache due to its spatial locality, and the uniprocessor with cache exhibits less sensitivity to
memory access time, than the decoupled system. The second graph illustrates the typical
behavior of benchmarks which do not make use of any locality and both uniprocessors
with caches and decoupled architectures exhibit the same range of variation in execution
time. The third graph illustrates convolution and correlation traces that contain true temporal
locality and cached uniprocessors exhibit significant insensitivity to memory access
time, whereas decoupled architectures are sensitive to the access time. Table I illustrates
that for convolution, correlation and strcpy, the increase in execution time for decoupled
architecture is significantly higher than that in cached uniprocessors. For the other five
traces, both cached uniprocessors and decoupled architectures exhibit the same range of
variation.
Benchmark      Uniprocessor with cache   Decoupled architecture
lll1           12022                     10422
lll3           20000                     20000
lll11          19524                     20065
convolution    1938                      31560
correlation    1070                      30580
saxpy-un       24497                     25435
strcpy
Table I. Increase in execution time (cycles) with tripling of memory cycle time
Fig. 4 illustrates the change in speed up of the decoupled system as the memory
access time is varied. Results reported in [24] and [8] indicate that the speedup with a
decoupled organization improves as the memory speed decreases. For some of the loops,
we do observe the effect they reported, but we also notice that beyond a certain memory
speed, the speedup declines. (More loops showed the effect reported in [24] and [8], when
we used handcoded traces in [13].) In [8], though it is mentioned that the performance
advantage is more significant with a slower memory, actually 5 out of the 12 Lawrence
Livermore loops they used show a smaller speedup for the slower memory module. Since
they considered interleaved memories, this decrease in speed-up after certain latency was
not evident in the other loops in the ranges of memory speed they used.
5.3 Limitations of Decoupled Architectures
Decoupling can smooth out burst bandwidth requirements. But if the total bandwidth
requirement is higher than the time the execute processor would take to complete its section
of the code, memory becomes a bottleneck unless some technique to alleviate the bottleneck
is incorporated. We quantify the memory bandwidth problem in decoupled architectures
by comparing the total access time requirement to the pure computation time.
The time the EP takes to complete its section of the code assuming that it always
finds the requested data in the load queue without waiting and that it is able to deposit
the result into the store queue without waiting is the EP execution time with a perfect
memory and perfect AP. Let us denote this time as the EP stand-alone execution time.
The total memory access time (which is analogous to bandwidth requirement) should be
less than EP stand-alone execution time if sufficient memory bandwidth is available, but
Fig. 5 shows that the memory access takes 1.8, 3.6 or 5.4 times the time the EP takes
to complete its computations, at memory speeds 5, 10 and 15 cycles respectively. If the
memory system was capable of furnishing the required bandwidth to complete the entire
access in a time period less than the EP stand-alone execution time, the decoupled system
would have performed better.
Decoupled architectures can eliminate memory latency only as long as the total time
required for data fetch can be accommodated within the time the execute processor would
consume to complete its section of the code without having to wait for any operands.
Beyond that point, the effect of memory latency will be evident in the total execution
time. This explains the sensitivity that decoupled systems exhibited to memory access
speed in our simulations. It might be noted that increasing the queue length cannot hide
the latency, once memory has become a bottleneck like this. Decoupled architectures can
reduce the peak bandwidth requirement, but not the total bandwidth. Caches can reduce
the total bandwidth requirement by capturing locality, but have other limitations which
are addressed in section 5.5.
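Stated as a rough inequality (notation introduced here only for illustration): if a program issues N_acc data references, each costing t_mm cycles on the non-interleaved, non-pipelined memory, and the EP stand-alone execution time is T_EP cycles, then decoupling can hide the latency only while N_acc × t_mm ≤ T_EP. The ratios 1.8, 3.6 and 5.4 quoted above are, in these terms, roughly N_acc × t_mm / T_EP at t_mm = 5, 10 and 15 cycles.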
Load unbalance between the access and execute processors might also limit the speed
up that can be achieved by the decoupled configuration. Typical general purpose instruction
streams contain more access related instructions than true computations. The access
processor often has to execute more instructions than the execute processor, as illustrated
by the instruction counts in Table II. Severe load unbalance exists in saxpy.unequal. All
the traces used for the reported results are generated with the highest level of compiler
optimization (-O4) and no address calculation instructions appear within the loop in them
except in saxpy.unequal. Code with less optimization exhibits more AP-EP unbalance due
to the presence of address calculation instructions. The ratio of AP load to EP load is
less than 2:1 in saxpy.equal where the optimizer was successful in removing the address
calculation. In saxpy.unequal, which has detailed address calculation in each iteration, the
corresponding ratio is greater than 5:1. The average ratio of AP instruction count to EP
instruction count is 2.15:1. We have performed extensive studies on memory bandwidth,
AP-EP unbalance and other bottlenecks in decoupled architectures. Due to space con-
straints, we cannot present all our results here, but interested readers may refer to [14]
where we present all our results.
Benchmark        AP instr count   EP instr count   AP count/EP count
convolution      5000             3975             1.26
correlation      11083            5926             1.87
saxpy.unequal    12253            2251             5.44
saxpy.equal
strcpy           10012            6001             1.67
Table II. AP and EP instruction counts and their ratio
Another limitation of decoupled processors relates to the overhead incurred in the
process of exploiting the access execute parallelism in the computing process. Slight code
expansion results when AP-EP code is generated. Branch instructions must appear in
both processors. If the execute processor does not support operations with both operands
from the same queue (Read Queue), one of the operands has to be moved from the queue
to a register which also contributes to code expansion. This problem can be alleviated by
having two Read Queues, with the two data elements being loaded to alternate queues. In
such a case, two register addresses have to be used for the queues.
In certain problems, such as the transposition of a matrix, array copy, etc. the whole
problem is of an access nature and there is no role that the execute processor can play.
Effort to parallelize these problems can result in serious overhead that may increase the
execution time to more than the uniprocessor mode. If the compiler recognizes such
problems, the DAE system should not be slower than a uniprocessor.
5.4 Significance of a Data Cache in a Decoupled Architecture
In section 5.1 we observed that uniprocessors with data caches performed considerably
better than decoupled architectures, for slow memories. (See t=15 cycle case in
Fig. 2.) This observation naturally leads to the question of whether a decoupled architecture
could also benefit from caches. We performed simulations to investigate this and
the results appear in Fig. 6. The total execution time of each benchmark is plotted for
decoupled systems with caches, uniprocessors with and without caches and ordinary decoupled
architectures. The results are characteristic of the memory referencing behavior
of each benchmark. For the convolution and correlation algorithms, the decoupled system
with caches performs better than the other systems. This can be attributed to the strong
temporal locality present in the data references in these benchmarks. The strcpy program
benefits from spatial locality, and for this benchmark also, decoupled system with cache
exhibits superior speedup than other systems. The LLLs and the saxpy do not benefit
from caches. For these benchmarks, the cache has a similar effect in both uniprocessors
and decoupled architectures.
t = 15 cycles
Benchmark      Uniproc with cache   Decoupled   Decoupled with cache
saxpy          0.96                 1.15        1.10
Mean           0.96                 1.15        1.14
Correlation    3.35                 2.22        6.29
Convolution    3.95                 2.19        5.15
strcpy         2.97                 1.06        3.63
Mean           3.42                 1.82        5.02

t = 5 cycles
Benchmark      Uniproc with cache   Decoupled   Decoupled with cache
saxpy          0.96                 1.43        1.35
Mean           0.93                 1.41        1.27
Correlation    1.59                 2.39        3.08
Convolution    1.89                 2.47        2.47
strcpy         1.45                 1.19        1.91
Mean           1.64                 2.02        2.49
Table III. Speedup comparison
The speedup figures for memory access times equal to 5 cycles and 15 cycles are presented
in Table III. Speedup is calculated with reference to uniprocessors without caches.
Mean values of the speedup figures are shown separately for programs with different locality
characteristics. The Lawrence Livermore Loops and saxpy do not benefit from caches and form
one group. Convolution, correlation, and strcpy, which exhibit locality characteristics, form
another group.
Limitations Associated with Caches
Caches hide the latency of the main memory by capturing the temporal and spatial
locality of the data references. Locality of reference enables caches to reduce the bandwidth
requirements of programs. It was illustrated in the previous section that data caches can
improve the performance of decoupled architectures also. But there are several limitations
associated with caches.
Lack of temporal locality limits the capability of some problems to exploit the cache.
In Fig. 2, one can note that caches cause an increase in execution time for some of the
benchmarks. Consider Lawrence Livermore Loop 3 (the inner-product kernel).
Here the program steps through the arrays and hence there is spatial locality. In our
study, the elements of the array have a 32-bit representation and the data bus width is
32 bits as well. The main memory is organized with a word size of 32 bits and each access
furnishes one word or 32 bits. If a block size of 4 bytes is used, only the array element
in demand is loaded each time there is a miss. Since no array element is used again (or
in other words there is no temporal locality), a cache of 4 byte blocks does not improve
the performance. Having a larger block size would exploit the spatial locality, but since
fetching a larger block requires a proportionately larger number of cycles, the cache does
not decrease the total execution time. The cache can yield an advantage if some of these
fetch cycles could run in parallel with computation cycles when the CPU is busy, but does
not use the bus. Otherwise, the cache just slows down the system by adding the cache
access time to each reference. This phenomenon can be observed in all the LLLs in Fig. 2.
The execution time for the cache based system is higher than that for the system without
the cache. This could have been avoided by looking into the cache and the main memory at
the same time, in which case, there would have been no deterioration, but still there would
be no improvement. We thus notice that problems with only spatial locality sometimes
do not benefit from caches. If the data size is smaller than a word, or in other words, if
more than one data element could be fetched at the expense of one fetch, performance
improvement could be obtained. Among the benchmarks we used, strcpy is the only trace
which exhibits this characteristic. This benchmark does achieve strong insensitivity to
memory latency.
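A back-of-the-envelope model (our own illustration, not the simulator used in this paper; all latencies and the fixed 1-cycle cache access are assumed values) shows why a data cache with 4-byte blocks cannot help a purely streaming kernel: every 32-bit element is touched exactly once, so every reference misses and the cache access time is simply added on top of each memory access.

def streaming_time(n_refs, mem_latency, cache_latency=1, block_bytes=4,
                   elem_bytes=4, use_cache=True):
    """Rough cycle count for a loop that touches each element exactly once
    (spatial locality only, no reuse).  Parameter values are illustrative;
    mem_latency=15 loosely corresponds to the t=15 cycle case of Fig. 2."""
    elems_per_block = max(1, block_bytes // elem_bytes)
    if not use_cache:
        return n_refs * mem_latency
    misses = n_refs / elems_per_block
    words_per_block = block_bytes // 4          # one bus transfer per 32-bit word
    return n_refs * cache_latency + misses * (mem_latency * words_per_block)

n = 10_000
for bs in (4, 16, 64):
    print(f"block={bs:3d}B  cached={streaming_time(n, 15, block_bytes=bs):9.0f}"
          f"  uncached={streaming_time(n, 15, use_cache=False):8.0f}")

With a 4-byte block the cached time exceeds the uncached time by exactly the added cache access cycles, and enlarging the block does not help because the longer block fetch cancels the lower miss ratio, which is the behaviour described above; only overlapping the block fetch with computation would turn the cache into a win.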
Success of a cache organization depends on minimizing the miss ratio, the delay due
to a miss, and the penalty for updating main memory. The number of fetches required
to load a block of a given size, depends on the bus width. Larger block sizes capture
spatial locality and may decrease the miss ratio, but increase both the delay due to a miss
and the updating penalty. The increase in hit ratio obtained with larger block sizes
may mislead designers and architects. It should be cautioned that in several cases, the
bus traffic and the memory bandwidth requirements increase dramatically with increase
in block size and the overall effect is a reduction in performance despite the increase in
hit ratio. A typical example is shown in Fig. 7 for saxpy with unequal increments. The
execution time increases threefold when the block size is increased from 4 bytes to 16 bytes.
Similar behavior was observed in several other traces with only spatial locality. Another
fact that can be observed in this figure is that the limitations associated with caches affect
both uniprocessors and decoupled systems identically. The sensitivity of performance to
block size is also illustrated for the correlation algorithm in Fig. 7, and it can be observed
that this benchmark which has strong temporal locality does not exhibit such sensitivity
to cache block size. Also, the execution time in the system with the cache is always less
than that of the system without a cache. A general observation is that temporal locality
often has a stronger effect on cache performance than spatial locality.
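The bus-traffic blow-up for saxpy with unequal increments can be sketched with an equally simple model (again our own, with made-up sizes): when the stride in bytes is at least the block size, every reference misses and drags in a whole block of which only four bytes are used, so traffic grows linearly with the block size even while the hit ratio looks acceptable.

def bus_traffic_bytes(n_refs, stride_elems, block_bytes, elem_bytes=4):
    """Bytes moved over the bus for n_refs strided references with a cold
    cache.  Illustrative model: if the stride is at least one block, each
    reference brings in a full block; otherwise several references share
    a block."""
    stride_bytes = stride_elems * elem_bytes
    if stride_bytes >= block_bytes:              # no spatial reuse across refs
        misses = n_refs
    else:
        misses = n_refs * stride_bytes / block_bytes
    return misses * block_bytes

for bs in (4, 8, 16):
    print(f"block={bs:2d}B  traffic={bus_traffic_bytes(10_000, 4, bs):,.0f} bytes")

Quadrupling the block size quadruples the traffic in this regime, which matches the direction of the threefold execution-time increase observed in Fig. 7 for the strided saxpy.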
It may be concluded that precise tuning of cache parameters is essential for the success
of cache organizations. As also observed in [12], [1], and [21], unless carefully designed
and implemented, caches may result in minimal performance improvement, or may even
constitute a burden.
6. Conclusion
We have presented simulation results on the memory latency effects in decoupled
access execute architectures. Since caches are a time-tested mechanism to solve the memory
access problem, we also compared the decoupled architecture performance to uniprocessors
with caches. We see that caches and decoupled systems achieve their best performance
in different domains, since their mechanisms to alleviate memory bottleneck depend on
different characteristics of a computer program. There are cases in which both might
result in a performance advantage and there are problems where one scheme might not
contribute significantly to the performance. Caches have the potential of efficiently hiding
the main memory latency in programs that exhibit strong locality properties but they
may also slow down the system if not carefully designed. If one fixed organization of the
cache is going to be used for all applications, there is heavy risk of the cache affecting the
system adversely under some conditions. Another observation is that temporal locality
often produces stronger effects than spatial locality.
In the case of decoupled systems, we note that there is more scope for improvement
from 'decoupling' and 'slip', once the main memory is fast enough to provide the bandwidth
that the processor demands. With very slow memory, the effect of memory latency will be
clearly evident in the total execution time. Memory poses a bottleneck here and decoupled
computers cannot lower the execution time below the total memory access time. But even
in that region, programs might benefit from caches if strong temporal locality is present.
In spite of the memory posing a bottleneck in some cases, the speedup of the decoupled
system relative to a noncached uniprocessor is significant.
We also performed simulations to determine whether decoupled architectures can obtain
a performance advantage from data caches. The contribution of caches is minor when
the main memory is fast. But in cases with strong temporal locality, decoupled architectures
with caches achieve a level of memory path insensitivity superior to all other
configurations. It can be concluded that caches are as relevant to decoupled computers as
they are to uniprocessors.
Though we used non-interleaved memory for the studies in this paper, the major conclusions
will also hold for systems with interleaved memories once memory latency is very high.
Interleaving can hide latency effects up to a certain point, but beyond that, memory
bandwidth is likely to become the bottleneck. Caches would also be significant in decoupled
architectures with interleaved memory once memory bandwidth becomes a bottleneck. Using
non-interleaved memory simply allowed us to observe these effects at lower latencies.
--R
"Performance Trade-offs for Microprocessor Cache Mem- ories"
"Organization and architecture tradeoffs in FOM"
"A Queue- based Instruction Cache Memory"
"Functionally parallel architectures for array processors"
"Performance Evaluation of On-chip Register and Cache Organizations"
"Improving Performance of Small on-chip Instruction Caches"
"Implementation of the PIPE processor"
"PIPE: A VLSI Decoupled Architecture"
"Performance Evaluation of the PIPE Computer Architecture"
"Performance Analysis of an Address Generation Coprocessor"
"Classification and Performance Evaluation of Instruction Buffering Techniques"
"Memory Latency Effects in Decoupled Architectures with a Single Data Memory Module"
"Bottlenecks in Decoupled Architecture Performance"
"Structured Memory Access Architecture"
"Features of the Structured Memory Access (SMA) Architecture"
"Architecture of a Programmable Digital Signal Processor"
"Cache Memories"
"Decoupled Access/Execute Computer Architecture"
"The ZS-1 Central Processor"
"Line (Block) Size Choice for CPU Cache Memories"
" Dynamic Instruction Scheduling and the Astronautics ZS-1"
"PIPE: A High Performance VLSI Architecture"
"A Simulation Study of Decoupled Architecture Computers"
"A performance comparison of the IBM RS/6000 and the Astronautics ZS-1"
"The Effects of Memory Latency and Fine-Grain Parallelism on Astronautics ZS-1 Performance"
"Performance of the Structured Memory Access Archi- tecture"
"Evaluation of the WM Architecture"
"A Simulation Study of Architectural Data Queues and Prepare-to-Branch Instruction"
--TR
A simulation study of decoupled architecture computers
Line (block) size choice for CPU cache memories
The ZS-1 central processor
Performance evaluation of on-chip register and cache organizations
MIPS RISC architecture
Dynamic Instruction Scheduling and the Astronautics ZS-1
Improving performance of small on-chip instruction caches
Implementation of the PIPE Processor
A Performance Comparison of the IBM RS/6000 and the Astronautics ZS-1
Classification and performance evaluation of instruction buffering techniques
Memory latency effects in decoupled architectures with a single data memory module
Evaluation of the WM architecture
Cache Memories
Decoupled access/execute computer architectures
Performance Trade-Offs for Microprocessor Cache Memories
--CTR
A. Milidonis , N. Alachiotis , V. Porpodas , H. Michail , A. P. Kakarountas , C. E. Goutis, Interactive presentation: A decoupled architecture of processors with scratch-pad memory hierarchy, Proceedings of the conference on Design, automation and test in Europe, April 16-20, 2007, Nice, France
Roger Espasa , Mateo Valero, A Simulation Study of Decoupled Vector Architectures, The Journal of Supercomputing, v.14 n.2, p.124-152, Sept. 1999
Joan-Manuel Parcerisa , Antonio Gonzalez, Improving Latency Tolerance of Multithreading through Decoupling, IEEE Transactions on Computers, v.50 n.10, p.1084-1094, October 2001
Won W. Ro , Stephen P. Crago , Alvin M. Despain , Jean-Luc Gaudiot, Design and evaluation of a hierarchical decoupled architecture, The Journal of Supercomputing, v.38 n.3, p.237-259, December 2006 | interleaving;decoupled architectures;simulation study;memory access delays;performance evaluation;computer architecture;decoupled systems;fine-grain parallelism;cache based uniprocessors;memory latency effects;performance studies;digital simulation;buffer storage |
626977 | A Time Redundancy Approach to TMR Failures Using Fault-State Likelihoods. | Failure to establish a majority among the processing modules in a triple modular redundant (TMR) system, called a TMR failure, is detected by using two voters and a disagreement detector. Assuming that no more than one module becomes permanently faulty during the execution of a task, Re-execution of the task on the Same HardWare (RSHW) upon detection of a TMR failure becomes a cost-effective recovery method, because 1) the TMR system can mask the effects of one faulty module while RSHW can recover from nonpermanent faults, and 2) system reconfiguration-Replace the faulty HardWare, reload, and Restart (RHWR)-is expensive both in time and hardware. We propose an adaptive recovery method for TMR failures by "optimally" choosing either RSHW or RHWR based on the estimation of the costs involved. We apply the Bayes theorem to update the likelihoods of all possible states in the TMR system with each voting result. Upon detection of a TMR failure, the expected cost of RSHW is derived with these likelihoods and then compared with that of RHWR. RSHW will continue either until it recovers from the TMR failure or until the expected cost of RSHW becomes larger than that of RHWR. As the number of unsuccessful RSHW's increases, the probability of permanent fault(s) having caused the TMR failure will increase, which will, in turn, increase the cost of RSHW. Our simulation results show that the proposed method outperforms the conventional reconfiguration method using only RHWR under various conditions. | Introduction
Fault tolerance is generally accomplished by using redundancy in hardware, software,
time, or combination thereof. There are three basic types of redundancy in hardware and
software: static, dynamic, and hybrid. Static redundancy masks faults by taking a majority
of the results from replicated tasks [13]. Dynamic redundancy takes a two-step procedure
for detection of, and recovery from, faults [2]. The effectiveness of this method relies on
selecting a suitable number of spares, a fault-detection scheme, and a switching operation.
Hybrid redundancy is a combination of static and dynamic redundancy [4]. A core based
on static hardware redundancy and several spares are provided to tolerate faults. Such
redundant systems could provide very high reliability depending on the number of spares
used under the assumption of perfect coverage and switching operation. However, new faults
may occur during the detection of existing faults, and the switching operation becomes very
complex as the number of spares increases. In order to reduce the complexity of switching
operation and enhance reliability at low cost, self-purging [12] and shift-out [5] schemes were
developed, where faulty modules were removed but not replaced by standby spares. In these
schemes, the additional operation required to select nonfaulty spare(s) is not needed, thus
making the switching operation simpler. But it is difficult to implement either a threshold
voter or a shift-out checking unit which requires comparators, detectors, and collectors.
Triple Modular Redundancy (TMR) has been one of the most popular fault-tolerance
schemes using spatial redundancy. In the Fault-Tolerant MultiProcessor (FTMP) [6], computations
are done on triplicated processors/memories connected by redundant common
serial buses, and its quad-redundant clocks use bit-by-bit voting in hardware on all transactions
over these buses. C.vmp [18] is also a TMR system which traded performance
for reliability by switching between TMR mode with voting and independent modes under
program control. In [22], an optimal TMR structure to recover from a transient fault
was shown to extend significantly the lifetime of a small system in spite of its requirement
of reliable voter circuits. The authors of [3] proposed a modular TMR multiprocessor to
increase reliability and availability by using a retry mechanism to recover transient faults,
and switching between TMR and dual-processor modes to isolate a permanent fault. A
simple multiple-retry policy (retry a pre-specified number of times) - also used to discriminate
a permanent fault - was employed there. This policy can tolerate multiple faults
only by treating them as a sequence of single faults with repair between fault occurrences,
thus requiring frequent voting for effective fault detection. A TMR failure caused by
near-coincident faults in different modules must also be detected and recovered from. The effect of
dependent faults inducing a TMR failure was eliminated by periodic resynchronization at
an optimal time interval [7]. However, the fault model of [7] and [22] did not include the
possibility of permanent faults for which resynchronization is no longer effective.
In addition to the use of spatial redundancy with fault masking or reconfiguration,
time redundancy can be applied effectively to recover from transient faults. Such recovery
techniques are classified into instruction retry [10], program rollback [16], program reload
and restart with module replacement. Several researchers attempted to develop an optimal
recovery policy using time redundancy, mainly for simplex systems. Koren [9] analyzed
instruction retries and program rollbacks with such design parameters as the number of
retries and intercheckpoint intervals. Berg and Koren [2] proposed an optimal module
switching policy by maximizing application-oriented availability with a pre-specified retry
period. Lin and Shin [10] derived the maximum allowable retry period by simultaneously
classifying faults and minimizing the mean task-completion time.
The main intent of this paper is to develop an approach of combining time and spatial
redundancy by applying time redundancy to TMR systems. (Note that spatial redundancy
is already encapsulated in the TMR system.) When a TMR failure - failure to
establish a majority due to multiple-module faults - is detected at the time of voting,
or when a faulty module, even if its effects are masked, is identified, the TMR system is
conventionally reconfigured to replace all three or just the faulty module with fault-free
modules. If the TMR failure had been caused by transient faults, system reconfiguration
or Replacement of HardWare and Restart (RHWR), upon detection of a TMR failure, may
not be desirable due to its high cost in both time and hardware. To counter this problem,
we propose to, upon detection of a TMR failure, Re-execute the corresponding task on
the Same HardWare (RSHW) without module replacement. Instruction retry intrinsically
assumes almost-perfect fault detection, for which TMR systems require frequent voting,
thereby inducing high time overhead. However, the probability of system crash due to
multiple-channel faults is shown in [17] to be insignificant for general TMR systems, even
when the outputs of computing modules are infrequently voted on as long as the system
is free of latent faults. Unlike simplex systems, program rollback is not adequate for TMR
systems due to the associated difficulty of checkpointing and synchronization. So, we consider
re-execution of tasks on a TMR system with infrequent voting. For example, since
more than 90% of faults are known to be non-permanent - as few as 2% of field failures
are caused by permanent faults [14] - simple re-execution may be an effective means to
recover from most TMR failures. This may reduce (i) the hardware cost resulting from the
hasty elimination of modules with transient faults and (ii) the recovery time that would
otherwise increase, i.e., as a result of system reconfiguration. Note that system reconfiguration
is time-consuming because it requires the location and replacement of faulty modules,
program and data reloading, and resuming execution.
We shall propose two RSHW methods for determining when to reconfigure the system
instead of re-executing a task without module replacement. The first (non-adaptive) method
is to determine the maximum number of RSHWs allowable (MNR) before reconfiguring the
system for a given task according to its nominal execution time without estimating the
system (fault) state - somewhat similar to the multiple-retry policy applied to a general
rollback recovery scheme in [20]. By contrast, the second (adaptive) method (i) estimates
the system state with the likelihoods of all possible states and (ii) chooses the better of
RSHW or RHWR based on their expected costs when the system is in one of the estimated
states. RHWR is invoked if either the number of unsuccessful RSHWs exceeds the MNR
in the first method or the expected cost of RSHW gets larger than that of RHWR in the
second method. For the second method, we shall develop an algorithm for choosing between
RSHW and RHWR upon detection of a TMR failure. We shall also show how to calculate
the likelihoods of all possible states, and how to update them using the RSHW results and
the Bayes theorem.
The paper is organized as follows. In the following section, we present a generic methodology
of handling TMR failures, and introduce the assumptions used. Section 3 derives the
optimal voting interval (X v ) for a given nominal task-execution time X . The MNR of the
first method and the optimal recovery strategy of the second method are computed for
given X . We derive the probability density function (pdf) of time to the first occurrence
of a TMR failure, the probabilities of all possible types of faults at that time, transition
probabilities up to the voting time, the costs of RSHW and RHWR, and the problem of up-dating
likelihoods of the system state and the recovery policy after an unsuccessful RSHW.
Section 4 presents numerical results and compares two recovery methods of RSHW and
RHWR. The paper concludes with Section 5.
Detection and Recovery of a TMR Failure
Detection and location of, and the subsequent recovery from, faults are crucial to the
correct operation of a TMR system, because the TMR system fails if either a voter fails at
the time of voting or faults manifest themselves in multiple modules during the execution of
a task. The fault occurrence rate is usually small enough to ignore coincident faults which
are not caused by a common cause, but non-coincident fault arrivals at different modules
are not negligible and may lead to a TMR failure.
Disagreement detectors which compare the values from the different voters of a TMR
system can detect single faults, but may themselves become faulty. FTMP [6], JPL-STAR
[1], and C.vmp [18] are example systems that use disagreement detectors. In FTMP, any
detected disagreement is stored in error latches which compress fault-state information
into error words for later identification of the faulty module(s). System reconfiguration
to resolve the ambiguity in locating the source of a detected error is repeated depending
on the source of the error and the number of units connected to a faulty bus. Two fault
detection strategies - hard failure analysis (HFA) and transient failure analysis (TFA) -
are provided according to the number and persistence of probable faulty units. These
strategies may remove the unit(s) with hard failures or update the fault index (demerit)
of a suspected unit. Frequent voting is required to make this scheme effective, because
any faulty module must be detected and recovered before the occurrence of a next fault on
another module within the same TMR system.
Voting in a TMR system masks the output of one faulty module, but does not locate
the faulty module. One can, however, use a simple scheme to detect faulty modules and/or
voter. Assuming that the probability of two faulty modules producing an identical erroneous
output is negligibly small, the output of a module-level voter becomes immaterial when
multiple modules are faulty [8]. A TMR failure can then be detected by using two identical
voters and a self-checking comparator as shown in Fig. 1. These voters can be implemented
with conventional combinational logic design [23]. The comparator can be easily made self-checking
for its usually simple function: for example, a simple structure made of two-rail
comparators in [11] for each bit can be utilized for its high reliability and functionality. This
TMR structure can also detect a voter fault. When a TMR failure or a voter fault occurs,
the comparator can detect the mismatch between the two voters that results from either
the failure to form a majority among three processing modules, or a voter fault. (Note that
using three voters, instead of two, would not make much difference in our discussion, so we
Figure 1: The structure of a TMR system with two voters and a comparator.
will focus on a two-voter TMR structure.)
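The behaviour that the voter/comparator arrangement of Fig. 1 is meant to expose can be sketched functionally as follows (a behavioural model of our own, not a gate-level description; the real voters operate bit-by-bit on words, whereas here whole outputs are compared).

def vote(outputs):
    """Functional model of the TMR voting/detection stage.

    outputs: the three module results at one voting point.
    Returns (majority_value, tmr_failure).  A TMR failure is signalled
    when no two modules agree, which is the condition the two-voter /
    self-checking-comparator arrangement is intended to detect."""
    a, b, c = outputs
    if a == b or a == c:
        return a, False
    if b == c:
        return b, False
    return None, True            # no majority exists

# One faulty module is masked; two faulty modules trigger detection.
print(vote([42, 42, 7]))         # (42, False)  - single fault masked
print(vote([42, 13, 7]))         # (None, True) - TMR failure detected

When the second value is True, the recovery logic described below chooses between RSHW and RHWR.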
If the comparator indicates a mismatch between two voters at the time of voting, an
appropriate recovery action must follow. Though RHWR has been widely used, RSHW may
prove more cost-effective than RHWR in recovering from most TMR failures. To explore
this in-depth, we will characterize RSHW with the way the MNR is determined. The
simplest is to use a constant number of RSHWs irrespective of the nominal task-execution
time and the system state which is defined by the number of faulty modules and the fault
type(s). Taking into account the fact that the time overhead of an unsuccessful RSHW
increases with the nominal task-execution time X , one can determine the MNR simply
based on X , without estimating the system state. A more complex, but more effective,
method is to decide between RSHW and RHWR based on the estimated system state.
Since the system state changes dynamically, this decision is made by optimizing a certain
criterion which is dynamically modified with the additional information obtained from each
unsuccessful RSHW. In this adaptive method, the probabilities of all possible states will
be used instead of one accurately-estimated state. Upon detection of a TMR failure, the
expected cost of RSHW is updated and compared with that of RHWR. The failed task will
then be re-executed, without replacing any module, either until RSHW recovers from the
corresponding TMR failure or until the expected cost of RSHW becomes larger than that
of task execution. 2 As the number of unsuccessful RSHWs increases, the possibility of
permanent faults having caused the TMR failure increases, which, in turn, increases the
cost of RSHW significantly.
This procedure is described in the algorithm of Fig. 4.
Throughout this paper, we assume that the arrival of permanent faults and the arrival
and disappearance of non-permanent faults are Poisson processes, each with its own
constant rate.
Optimal Recovery from a TMR Failure Using RSHW
3.1 The Optimal Voting Interval
Let X_i (2 ≤ i ≤ n) be the nominal task-execution time measured in CPU cycles between
the (i-1)-th and i-th voting, and let X_1 be that between the beginning of the task and the
first voting, in the absence of any TMR/voter failure. As shown in Fig. 2, for 1 ≤ i ≤ n let
V_i be the actual task-execution time from the beginning of the task to the first completion of
the i-th voting, possibly in the presence of some module failures, and let W_i = E(V_i), so that
W_n is the expected execution time of the task. Upon detection of a TMR failure,
let p and q be the probabilities of recovering a task with RSHW and RHWR, respectively,
with p + q = 1. Assuming that the time overhead of reconfiguration is a constant T_c, W_n
is expressed as a recursive equation in terms of W_i, 1 ≤ i ≤ n. Let F_i(t) (2 ≤ i ≤ n) be
the probability of a TMR failure in t units of time from the system state at the time of the
(i-1)-th voting, and let F_1(t) be that from the beginning of the task. The probability of
a recovery attempt (i.e., RSHW or RHWR) being successful depends upon F i (t). When a
TMR failure is detected at the time of first voting (i.e., it occurred during the execution
of the task portion corresponding to X 1 ), the system will try RSHW (or RHWR) with
probability p (or q) to recover from the failure. This process is renewed probabilistically
for the variable w 1 which is the actual task-execution time corresponding to the nominal
task-execution time X 1 . Thus,
where T c is the setup time for system reconfiguration.
Let T v be the time overhead of voting which is in practice negligible. The above equation
is also renewed for all w_i's (2 ≤ i ≤ n) after each successful recovery, where w_i is
defined as the actual task-execution time between the (i-1)-th and i-th votings.
Figure 2: Graphical explanation of V_i and w_i for 1 ≤ i ≤ n.
From the above equations, a recursive expression for W_i is derived for 2 ≤ i ≤ n, and
applying it recursively yields W_n.
The optimal voting frequency is derived by minimizing W_n with respect to n and the X_i,
subject to the constraint that X_1 + X_2 + ... + X_n = X.
If all inter-voting intervals are assumed to be identical, then the constant voting interval is
X_v = X/n, and an optimal value of n must be determined by minimizing
Eq. (3.1). Examples of n for a given X with typical parameter values are shown in
Table 1. The voting points can be inserted by a programmer or a compiler.
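Since the recursion itself did not survive in this copy of the paper, the following is only a numerical sketch of the same kind of optimization under a simplified model of our own: each of the n equal intervals of length X/n is assumed to fail independently with probability 1 - exp(-lam * X/n) and to be repeated until it succeeds, paying a voting overhead T_v per attempt and a recovery overhead T_r per failure. The rate lam and the overhead values are illustrative, not the paper's.

import math

def expected_time(n, X, lam, T_v=0.5, T_r=5.0):
    """Illustrative expected completion time with n equally spaced votings.
    All parameter values are made up for illustration."""
    seg = X / n
    F = 1.0 - math.exp(-lam * seg)            # failure prob. of one interval
    attempts = 1.0 / (1.0 - F)                # geometric number of tries
    return n * ((seg + T_v) * attempts + T_r * F * attempts)

X, lam = 50.0, 0.01                           # e.g. a 50 h task, 0.01/h failure rate
best = min(range(1, 51), key=lambda n: expected_time(n, X, lam))
print("best n =", best, " expected time =", round(expected_time(best, X, lam), 2))

The trade-off is the same as in the text: too few votings waste a lot of work per failure, too many votings pay the voting overhead too often, and the optimum lies in between, mirroring the role of Table 1.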
3.2 Pre-determination of Non-adaptive RSHWs
In the first method, we determine a priori the maximum number of RSHWs (MNR), km ,
based on X without estimating the system state. The associated task will be re-executed up
to km times. As X increases, the effect of an unsuccessful RSHW becomes more pronounced;
that is, the possibility of successful recovery with RSHW (instead of RHWR) will decrease
with X due to the increased rate of TMR failures, and the time overhead of an unsuccessful
RSHW also increases with X while the time overhead of RHWR remains constant. So, km
decreases as X increases.
Let C_1(k, X) be the actual time/cost of task execution in the presence of up to k RSHWs
for a task with the nominal execution time X. It is expressed in terms of p_s^n (p_u^n),
the probability of the n-th RSHW becoming successful (unsuccessful), and the probability
of a TMR failure during X after system reconfiguration, where p_s^n + p_u^n = 1.
In fact, p_s^n and p_u^n cannot be determined
without knowledge of the system state after the (n-1)-th unsuccessful RSHW, which is too
complicated to derive a priori. We will approximate these probabilities using the following
useful properties of a TMR system. Since the probability of permanent faults having caused
the TMR failure increases with the number of unsuccessful RSHWs, p_s^n is monotonically
decreasing in n. Though p_s^1 and the ratio R(n) = p_s^(n+1) / p_s^n depend upon X and the
fault parameters, it is assumed for simplicity that p_s^1 is given a priori as a constant P and
R(n) is a constant R for all n, so that p_s^n = P R^(n-1) and C_1(k, X) can be written in
terms of P and R.
The cost of RHWR, denoted by C_2(X), is derived by using recursive equations.
Now, k_m can be determined as the integer that minimizes C_1(k, X), subject to C_1(k, X)
not exceeding the RHWR cost C_2(X).
Example values of km for typical values of P and R are shown in Table 2.
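Because the exact expressions for C_1(k, X) and C_2(X) were lost in extraction, the sketch below uses a simplified stand-in cost model of our own that only keeps the structure of the comparison: every attempted re-execution costs X, the n-th attempt succeeds with probability p_s^n = P * R**(n-1), and giving up after k attempts means paying the reconfiguration overhead T_c plus one more run. The values of P, R and T_c are illustrative.

def expected_rshw_cost(k, X, P, R, T_c):
    """Simplified expected recovery cost when up to k re-executions (RSHW)
    are tried before falling back to reconfiguration (RHWR).  This is an
    illustrative stand-in for C_1(k, X), not the paper's formula."""
    cost, p_reach = 0.0, 1.0      # p_reach: prob. that the n-th RSHW is attempted
    for n in range(1, k + 1):
        p_s = P * R ** (n - 1)
        cost += p_reach * X       # every attempted re-execution costs X
        p_reach *= (1.0 - p_s)
    cost += p_reach * (T_c + X)   # all k attempts failed: reconfigure and rerun
    return cost

def best_km(X, P=0.8, R=0.6, T_c=20.0, k_max=20):
    costs = [expected_rshw_cost(k, X, P, R, T_c) for k in range(k_max + 1)]
    return min(range(len(costs)), key=costs.__getitem__)

for X in (10, 30, 100):
    print(f"X={X:3d} h  ->  k_m = {best_km(X)}")

Even this crude model reproduces the trend stated above: as X grows, a wasted re-execution becomes more expensive relative to the fixed reconfiguration cost, so k_m decreases.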
3.3 Adaptive RSHW
In this method, the system chooses, upon detection of a TMR failure, between RSHW
and RHWR based on their expected costs. RSHW will continue either until it becomes
successful or until the expected cost of the next RSHW becomes larger than that of RHWR.
The system state is characterized by the likelihoods of all possible states because one can
observe only the time of each TMR failure detection, which is insufficient to accurately
estimate the system state. The outcome of one RSHW, regardless of whether it is successful
or not, is used to update the likelihoods of states in one of which (called a prior state)
the RSHW started. The possible states upon detection of a TMR failure can be inferred
from the posterior states which are the updated prior states using the RSHW result and
the Bayes theorem.
Unlike a simplex model, there are too many possible states and events to analyze a
TMR system accurately. We will thus use the simplified Markov-chain model in Fig. 3 to
derive the state probabilities and transition probabilities in a TMR system. The model
consists of six states which are distinguished by the number of permanent faults and that
of non-permanent faults, where two- and three- fault states are merged into one state due
to their identical effects in our analysis. In Fig. 3 the transitions over the bidirectional
horizontal lines result from the behavior of non-permanent faults and the transitions over
the unidirectional vertical lines are caused by the occurrence of permanent faults. Note
that even occurrences of near-coincident faults can be represented by sequential occurrences
with slightly different interarrival times. The model, thus, includes only transitions between
neighboring states - any transition from a state due to multiple faults occurs in two steps
through one of its neighboring states.
Some faults may disappear without affecting the execution of a task. This happens
when the latency of a fault is greater than its active duration, i.e., it will not manifest itself.
Note that the occurrence of an error in a module during the task execution may produce
an erroneous output for the task, even if the fault which had induced the error disappeared
before producing the final output of the task. In other words, a transient fault may have
permanent effects on task execution. 3
The optimal recovery algorithm based on the adaptive method in Fig. 4 can be illustrated
as follows. Upon detection of a TMR failure, the first step is to derive the probabilities
3 In fact, this problem can be eliminated by resynchronizing the processors after a transient fault is
detected [21]. This, however, requires frequent voting and additional mechanisms for detecting errors in
each processor and resynchronizing the processors.
of all possible states at time X f evolved from each prior state. Let T i
f be the time when
the TMR system moved to the failure state from prior state i during [0; X f ], where X f is
the time of detecting a TMR failure (i.e., a voting time). Occurrence of a TMR failure
is then represented by an event (T i
We want to calculate the
probabilities of all possible states i
at voting time X f evolved from prior state i,
which are actually conditional probabilities given the observed event (T i
be calculated from the probabilities of all types of TMR failures i
f ) at time T i
f and the
transition probabilities Pmn
during the remaining task-execution time,
f .
The probabilities of all possible states are thus
f
f
where subscripts indicate the prior state, the state at time T i
f , and the state
at the time, X f , of detecting a TMR failure, respectively. As mentioned earlier, a voting
failure may result from a voter fault or multiple-module faults. Multiple-module faults
can be classified based on the number of modules with permanent faults: Type-I, Type-
II, and Type-III failures represent zero, one, and more than one permanent-fault module,
respectively, where all possible states of each type are listed in Fig. 3. Let S(x; y) be
the state with x permanent-fault modules, y non-permanent-fault modules, and 3
nonfaulty modules.
Although there are ten different states, we only need to consider six of them by merging
This merger of states simplifies the model of a TMR system without losing model accuracy,
because:
- By modifying the transition rates, one can make the simplified Markov-chain model
in Fig. 3 represent a TMR system very accurately, and
- The merger is based on a realistic assumption that simultaneous occurrence of faults
in different processor modules is highly unlikely.
Moreover, the merger does not change the analysis of a TMR failure because merged
states have similar effects on the TMR failure as compared to the original states. For
example, the merged states induce the same type of TMR failure, where the 'type' is
determined by the number of permanent-fault modules. There are four possible states,
which led to Type-I failures (i.e., it was S(0; 1),
Figure 3: A simplified Markov-chain model for a TMR system, with the possible states at
time X_f grouped by failure type (Type-I, Type-II, and Type-III).
S(0; 2), or S(0; 3) at time T i
f , because a non-permanent fault might disappear after inducing
error(s). Type-II and Type-III failures have three possible states, fS(1; 0); S(1; 1); S(1; 2)g
and fS(2; 0); S(2; 1); S(3; 0)g, respectively, at time T i
f and X f .
For notational simplicity, let state S i j S(x; y) where y. Then, the set of all
possible states after the merging is out of these, fS 1
are the set of possible fault states transited from S 0
f , and T 2
respectively. S 4 and S 5 may change to S 5 (or S 8 ) at T 4
f (or T 5
f ), and S 8 remains unchanged
due to the persistence of a permanent fault.
Let a path denote the transition trajectory between a pair of states. Since there are
usually more than one path between a given pair of nodes, each of these paths is assigned
an ID number. From the simplified model in Fig. 3, T i
f is the minimum-time path from S i
to any type of TMR failure. Let t i
j be the time taken from S i to a TMR failure via path
j. Then, T i
where the pdf of t i
j is calculated by convolving the pdf 's of all
sub-paths that make up path j. The pdf of a sub-path between two states
is obtained by using the distribution of sojourn time
of
with several exits in the
Markov chain model (Fig. 3):
\Gamma(
represents the set of all outgoing arcs of S j k . Then, the pdf of t i
j is
where path j is composed of sub-paths fij must be one of possible
fault states: Sm 2 fS 1 g. (When the inter-arrival time of events such as fault
occurrence, fault disappearance, and fault latency, is not exponentially distributed, we need
a semi-Markov chain model in place of a Markov chain model.) Let J i
represent the set of
all paths to a fault state Sm from S i . The likelihood of a fault state Sm at time T i
equal to
f ), which is obtained by:
is the set of all paths to all possible fault states evolved from S i , i.e.,
8g. The probabilities of S 1 and S 2 leading to Type-I failure are
computed based on the behavior of non-permanent faults, i.e., depending on whether or
not a non-permanent fault, after having induced some error(s), is still active when a second
non-permanent fault occurs. Likewise, the probabilities of S 4 and S 5 leading to Type-II
failure are computed by the behavior of a non-permanent fault, if it had occurred earlier
than permanent fault(s). When an intermittent fault is considered, the fault state must
be divided by fault active and fault benign states as in [15], which makes the problem
too complicated to be tractable. The numerical examples of F T i
f
(X) and the mean of T i
f
several X are given in Figs. 5 and 6, in which analytic results are compared
against the results obtained from Monte-Carlo simulations.
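The Monte-Carlo side of that comparison can be reproduced in spirit with a very small simulator. The sketch below is our own reading of the model, not the paper's simulator: permanent and non-permanent fault arrivals are independent Poisson processes per module, a module is assumed to deliver a wrong result if any fault hit it during [0, X], and a TMR failure is two or more wrong modules, classified by how many of them carry a permanent fault. The rates are placeholders.

import math, random

def simulate_tmr(X, lam_p=1e-4, lam_n=2e-3, runs=20000, seed=1):
    """Monte-Carlo sketch of TMR-failure behaviour by voting time X (hours).
    Illustrative assumptions only: any fault during [0, X] corrupts that
    module's output; Type-I/II/III = 0/1/>=2 permanent-fault modules."""
    rng = random.Random(seed)
    p_perm = 1.0 - math.exp(-lam_p * X)
    p_nonp = 1.0 - math.exp(-lam_n * X)
    counts = {"ok": 0, "Type-I": 0, "Type-II": 0, "Type-III": 0}
    for _ in range(runs):
        perm = [rng.random() < p_perm for _ in range(3)]
        bad = [p or rng.random() < p_nonp for p in perm]
        if sum(bad) < 2:
            counts["ok"] += 1
        else:
            counts[("Type-I", "Type-II", "Type-III")[min(sum(perm), 2)]] += 1
    return counts

print(simulate_tmr(50))

Comparing such sampled frequencies with the analytically computed F_{T_f^i}(X) is exactly the kind of cross-check reported in Figs. 5 and 6.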
In addition to f T i
f
and i
m , the transition probabilities Pmn from Sm to S n during
f must be derived in order to obtain the likelihood of every possible state at the time
of voting (failure detection), X f . Although the matrix algebra using the transition matrix
or Chapman-Kolmogrov theorem can be applied to give accurate expressions, we will use a
simplified method for computational efficiency at an acceptably small loss of accuracy. For
the transition probabilities from T i
f , we need not consider subsequent errors but can focus
on only those states useful in choosing between RSHW and RHWR.
Observe that the occurrence rate, p , of permanent faults is much smaller than both the
appearance and disappearance rates of non-permanent faults. Using this observation, one
can analyze the behavior of permanent faults separately from that of non-permanent faults.
The transition probabilities due to the occurrence of permanent faults are represented by
because of the persistence of permanent faults.
Although these probabilities depend upon i
are approximated
by using only the prior probabilities of source states, i
f ). This approximation causes
only a very small deviation from the exact values because the occurrence rate of permanent
faults is usually very small as compared to the other rates. For example, consider P 1n
for n 4, i.e., transitions from S 1 due to the occurrence of permanent fault(s). The
corresponding transition probabilities are derived from the model in Fig. 3 in terms of the
pdf 's of sub-paths between two states. Let
f , then
The probability i
f ) for S 1 is thus reduced to (1\GammaF 15 (T
transitions
from other source states due to the occurrence of permanent faults can be derived. Conse-
quently, the prior probabilities are transformed into
f ); respectively. Using these transformed prior probabilities, we will
derive the transition probabilities based only on the behavior of non-permanent faults.
Considering only the behavior of non-permanent faults divides the above model into
a two-state model fS 4 and a three-state model fS 0 as shown in Fig. 3. The
transition matrix of the three-state model fS 0 is derived by (i) using the Laplace
transform which reduces the linear differential equations of three states to algebraic equations
in s, (ii) solving the algebraic equations, and (iii) transforming the solution back into
the time domain.
The linear differential equation of fS 0 with only the effects of non-transient
faults is
The Laplace transform of T is:
The solution requires the inverse of A:
A
Let the roots of s
be ff and fi, then a ij , the ij-th
element of A, can be obtained by partial fraction expansion:
c (ij)1
s
c (ij)2
c (ij)3
Since c (ij)2 and c (ij)3 are conjugates, c ff). The effect of
permanent faults changes the initial probabilities of
where A
Thus, the i-th column of the 3 \Theta 3 transition matrix P(T ) reduces to:6 6 6 4
where
The above equations indicate that the coefficients of exponentials in A 0 , A 1 , and A 2 include
the effects of the occurrence of permanent fault(s) on the prior probabilities. Likewise, the
transition matrix of a two-state model for fS 4 can be derived as:4 (
e \Gamma(2 n+)T )A 4 (
e \Gamma(2 n+)T )A 5
e \Gamma(2 n+)T )A 4 ( 2n
where A the effects of permanent-
fault occurrences on the transitions to S 8 . These transition matrices and probabilities
(resulting from the occurrence of permanent faults) can describe all possible transitions in
the simplified model of Fig. 3.
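Instead of the Laplace-transform algebra above, the same transition probabilities over the remaining interval T = X_f - T_f^i can be obtained numerically by exponentiating the generator of the continuous-time chain. The sketch below does this for the non-permanent-fault submodel {S_0, S_1, S_2}; the rate values and the arrival multiplicities (3*lam_n out of S_0, 2*lam_n out of S_1, one disappearance at rate mu_n) are our reading of the model and should be treated as assumptions. A truncated Taylor series is used so that no external library is needed.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(Q, t, terms=60):
    """exp(Q*t) by a plain truncated Taylor series (adequate for these sizes)."""
    n = len(Q)
    P = [[float(i == j) for j in range(n)] for i in range(n)]   # identity
    term = [row[:] for row in P]
    for k in range(1, terms):
        term = mat_mul(term, [[Q[i][j] * t / k for j in range(n)] for i in range(n)])
        P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return P

# Assumed generator for the non-permanent submodel {S0, S1, S2}.
lam_n, mu_n = 2e-3, 0.5
Q = [[-3 * lam_n,            3 * lam_n,              0.0],
     [      mu_n, -(mu_n + 2 * lam_n),        2 * lam_n],
     [       0.0,                 mu_n,            -mu_n]]

T = 25.0                      # remaining time X_f - T_f^i in hours (illustrative)
P = expm(Q, T)
print([round(p, 4) for p in P[1]])   # transition probabilities out of S1

Each row of the resulting matrix plays the role of one column of the P(T) derived analytically above, and the effect of permanent-fault occurrences can then be layered on top exactly as described in the text.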
When the TMR system is in S 2 , S 5 or S 8 at time X f , RSHW will be unsuccessful again
due to multiple active faults (in more than one module). If it is not in those states at X_f
due to disappearance of active fault(s) after inducing some error(s), the system
moves to a recoverable state by RSHW. Let F T i
f
(X) be the probability of a TMR failure
evolved from S i during the execution time X , where F T i
f
is the probability distribution
function of T i
f . Since exact knowledge of the system state is not available, we estimate the
state probabilities, which are then used to calculate the expected cost of a single RSHW as
follows:
f
(X)@ X
i2f2;5;8g
i2f0;1;4g
f
where i (0) is the probability that the state before starting one RSHW (upon detecting a
TMR failure) is S i , i.e., the probabilities of the present states become those of the prior
states for the next RSHW. The expected cost of RHWR is obtained similarly to Eq. (3.4):
f
(X)
When RSHW is unsuccessful or a voting failure occurs again, the (prior) state probabilities
are updated with the additional information obtained from the RSHW using the
Bayes theorem. The observed information tells us that a TMR failure has occurred again
during the current execution. (Note that the TMR failure detection time during the current
execution is X f .) As a result, the prior probabilities of all possible fault states for the
are renewed from those of the k-th RSHW ( k
during X f from S i )
Prob(a TMR failure during
where Prob(a TMR failure during X f
during X f from S i
f
From Eq. (3.10) one can see that the probability of the TMR system
being in a permanent-fault state increases with each unsuccessful RSHW, which, in turn,
increases the chance of adopting RHWR over RSHW upon detection of next TMR failure.
Using the above updated state probabilities, we can get the conditional probabilities of all
states upon detection of a TMR/voting failure.
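The Bayes step of Eq. (3.10) amounts to reweighting each candidate prior state by the probability that it would have produced another TMR failure within X_f, and then renormalizing. A compact sketch is given below; the state names and the failure-probability values are placeholders, and in practice the F_{T_f^i}(X_f) terms computed from the model would be plugged in.

def bayes_update(prior, p_fail_given_state):
    """One Bayes step after an unsuccessful RSHW (cf. Eq. (3.10)).

    prior:               dict state -> probability before the re-execution
    p_fail_given_state:  dict state -> probability of another TMR failure
                         within X_f when starting from that state
    Returns the posterior, used as the prior of the next decision."""
    joint = {s: prior[s] * p_fail_given_state[s] for s in prior}
    total = sum(joint.values())
    return {s: v / total for s, v in joint.items()}

prior  = {"S0": 0.55, "S1": 0.25, "S4": 0.15, "S8": 0.05}    # illustrative
p_fail = {"S0": 0.02, "S1": 0.10, "S4": 0.60, "S8": 0.99}    # illustrative
post = prior
for attempt in range(1, 4):
    post = bayes_update(post, p_fail)
    print(f"after failed RSHW #{attempt}:", {s: round(post[s], 3) for s in post})

After only a few unsuccessful re-executions, almost all of the probability mass migrates to the permanent-fault states, which is what eventually tips the cost comparison toward RHWR.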
When RSHW is successful, one can likewise update the probabilities of possible states,
which will then be used to guess the prior state of the next voting interval.
When the hardware cost is high and the time constraint is not stringent, one may do the
following. Since the fault occurrence rate is much smaller than the disappearance rate of
non-permanent faults, we may wait for a certain period of time (called a back-off
time) in order for the current non-permanent fault(s) to disappear before task re-execution.
An optimal back-off time is determined by minimizing the expected time overhead. When
a task is re-executed without any back-off, the cost of one RSHW is equal to Eq. (3.8).
When re-execution starts after backing off for r units of time, the cost changes (due to the
change of prior states):
f
(X)@ X
i2f2;5;8g
i2f0;1;4g
f
The optimal back-off time is obtained by minimizing C 1 (r) with respect to r.
Figure 4: Algorithm to recover from a TMR failure by estimating the system state and
comparing the costs of RSHW and RHWR.
Table 1: n vs. X for typical parameter values.
Table 2: k_m vs. X for typical values of P and R.
200  3000  0.0001  0.002  50
Table 3: Parameter values used in simulations, all measured in hours.
4 Numerical Results and Discussion
A system with three replicated processing modules, two voters, and a comparator is
simulated to compare the proposed method (called Method 1) with an alternative which
is based on RHWR (called Method 2). Upon detection of a TMR failure, Method 1 will
decide between RSHW and RHWR according to their respective costs. Method 2, however,
will reconfigure the TMR entirely with a new healthy TMR or partially with healthy spare
modules following an appropriate diagnosis. If a non-permanent fault does not disappear
during the diagnosis, it will be treated as a permanent fault and replaced by a new, non-faulty
spare. We assume that (A1) an unlimited number of tasks with the same nominal
task-execution time are available to keep the running module busy, which simplifies the
description of system workload, and (A2) there are an unlimited number of spares available.
The performances of these two methods are characterized by the overhead ratio:
where E is the real execution time (including the RSHW and/or RHWR overheads) of a
task whose nominal execution time is X .
We ran simulations under the fault generation process with the parameters given in
Table 3, where the symbol * indicates a parameter varied while the others are fixed, in
3, where the symbol * indicates a parameter varied while the others are fixed, in
order to observe the effects of the parameter on OVR in both methods. Since fault oc-
currence/disappearance rates are difficult to estimate on-line, some experimental data or
numerical data based on a model reflecting the maturity of design/fabrication process, the
environmental effects, operating conditions, and the number and ages of components, can
be used [19].
In Figs. 5 and 6, the probabilities of a TMR failure and the failure times from S 0 and
S 4 are computed from the Markov-chain model and simulations, and are then compared.
The simulation and modeling results are very close to each other. The modeling analyses
proved to be very effective in determining when and how to choose between RSHW and
RHWR under various conditions, as shown in Figs. 7-11.
The results obtained while varying X from 10 to 100 hours with T are plotted
in Figs. 7-9. The OVRs of Methods 1 and 2 with the optimal number of votings are
compared in Fig. 7. The difference between the OVRs of Method 1 and Method 2 increases
significantly with X . When X is small, the OVRs of the two methods are too small to
distinguish, which is due mainly to the small probability of a TMR failure. Fig. 8 compares
the multi-voting policy (with the optimal number of votings) and one voting policy.
Generally, the overhead of a TMR system with infrequent voting increases significantly as
increases, because the probability of a TMR failure increases with X ; e.g., if there is
no voting during the task execution, a TMR failure means the waste of the entire nominal
execution time, X . As X increases, the OVR of a one-voting policy increases more rapidly
than that of multi-voting policy. The number of RHWRs - which is represented by the
percentage of RHWR from the total number of simulations in Fig. 9 - will determine the
hardware cost of spares used. The increase in this percentage is much larger in Method
than Method 1, since the number of TMR failures increases with X , and Method 1 can
recover from most TMR failures with RSHW.
The second comparison is made while varying T c - the resetting time for system reconfiguration
- from 2:5 to 12:5 hours for X=50 hours, and the results are plotted in Fig. 10.
A larger resetting time generally results in a larger OVR. Increasing T c greatly affects the
performance of Method 2. But, it has little influence on the OVR of Method 1, since the
system recovers from most TMR failures with RSHW which has nothing to do with T c .
The third comparison in Fig. 11 is made while varying n
p from 5 to 25, where n is fixed
at 0:005 /hr, and hours and T hours. The OVRs of both methods decrease
with n
p , but the magnitude of decrease in Method 1 is larger than that in Method 2. This
is because the probability of a TMR failure decreases as p decreases with n fixed, and
because the probability of successful RSHW increases with n
.
We simulated the proposed and other schemes for units of time with the fault
parameters of Table 3 for each comparison (of the mean overhead ratios of different schemes).
The fault parameters are assumed not to change during the simulation. Since the estimation
of system states depends upon the fault parameters, they must be estimated first. This
problem can be solved by assuming the parameters to be time-varying and estimating them
on-line with certain adaptive methods which, in turn, require more samples.
5 Conclusion
In this paper, we have proposed a strategy for recovering TMR failures using two different
methods that determine when and how to apply RHWR. Both methods are shown
to outperform the conventional method based solely on reconfiguration. This finding is
consistent with the fact that most faults are non-permanent, so simple re-execution can recover
from non-permanent faults and the TMR structure can mask the effects of one faulty
module.
The distinct characteristic of the proposed strategy is that it uses the estimated state
of a TMR system even with incomplete observation of system states. Detection of a TMR
failure and/or an unsuccessful RSHW does not always call for reconfiguration (RHWR) but
requires us to derive and compare the expected costs of reconfiguration and one additional
RSHW. Most TMR failures are represented by using a simplified Markov-chain model, and
the TMR failure time and the probability of another unsuccessful RSHW are also analyzed
with the model. One can therefore conclude that combining time and spatial redundancy
appropriately can be effective in handling component failures.
Acknowledgement
The authors would like to thank Allan White, Chuck Meissner, and Felix Pitts of the
NASA Langley Research Center, and Jim Smith of the Office of Naval Research for their
technical and financial assistance.
--R
"The STAR(self-testing and repairing) computer: An investigation of theory and practice of fault-tolerant computer design,"
"On switching policies for modular redundancy fault-tolerant computing systems,"
"Modular TMR multiprocessor sys- tem,"
"Reliability and analysis of hybrid redundancy,"
"Shift-out modular redundancy,"
"FTMP-a highly reliable fault-tolerant multiprocessor for aircraft,"
"Design of dependent-failure-tolerant microcomputer system using triple-modular redundancy,"
"Embedding triple-modular redundancy into a hypercube architecture,"
"Analysis of a class of recovery procedures,"
"An optimal retry policy based on fault classification,"
"A RAM architecture for concurrent access and on-chip testing,"
"A highly efficient redundancy scheme: Self-purging redundancy,"
"The use of triple-modular redundancy to improve computer reliability,"
"The measurement and analysis of transient errors in digital computer systems,"
"Error detection process - model, design, and its impact on computer performance,"
"Optimal checkpointing of real-time tasks,"
"Study on fault-tolerant processor for advanced launch system,"
"A case study of C.mmp, Cm*, and C.vmp: Part I - experiences with fault tolerance in multiprocessor systems,"
The Theory and Practice of Reliable System Design
"A watchdog processor based general rollback technique with multiple retries,"
"Transient failures in triple modular redundancy systems with sequential modules,"
"Microcomputer reliability improvement using triple-modular redun- dancy,"
"A new design method of voter in fault-tolerant redundancy multiple-module multi-microcomputer system,"
--TR
Analysis of a class of recovery procedures
A watchdog processor based general rollback technique with multiple retries
On switching policies for modular redundancy fault-tolerant computing systems
Optimal checkpointing of real-time tasks
Embedding triple-modular redundancy into a hypercube architecture
A RAM Architecture for Concurrent Access and on Chip Testing
An Optimal Retry Policy Based on Fault Classification
--CTR
Hagbae Kim , Kang G. Shin, Sequencing Tasks to Minimize the Effects of Near-Coincident Faults in TMR Controller Computers, IEEE Transactions on Computers, v.45 n.11, p.1331-1337, November 1996
Hagbae Kim , Kang G. Shin, Design and Analysis of an Optimal Instruction-Retry Policy for TMR Controller Computers, IEEE Transactions on Computers, v.45 n.11, p.1217-1225, November 1996
Jae Kwon Kim , Byung Kook Kim, Probabilistic Schedulability Analysis of Harmonic Multi-Task Systems with Dual-Modular Temporal Redundancy, Real-Time Systems, v.26 n.2, p.199-222, March 2004
Jos Manuel Cazeaux , Daniele Rossi , Cecilia Metra, Self-Checking Voter for High Speed TMR Systems, Journal of Electronic Testing: Theory and Applications, v.21 n.4, p.377-389, August 2005
Byonghyo Shim , Naresh R. Shanbhag, Energy-efficient soft error-tolerant digital signal processing, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, v.14 n.4, p.336-348, April 2006 | bayes methods;voters;time redundancy approach;redundancy;adaptive recovery method;triple modular redundant system;system reconfiguration;processing modules;disagreement detector;fault-state likelihoods;simulation results;bayes theorem;TMR failures;fault tolerant computing;digital simulation |
626980 | Efficient Boolean Manipulation with OBDD's Can be Extended to FBDD's. | OBDD's are the state-of-the-art data structure for Boolean function manipulation. Basic tasks of Boolean manipulation such as equivalence test, satisfiability test, tautology test and single Boolean synthesis steps can be performed efficiently in terms of fixed ordered OBDD's. The bottleneck of most OBDD-applications is the size of the represented Boolean functions since the total computation only remains tractable as long as the OBDD-representations remain of reasonable size. Since it is well known that OBDD's are restricted FBDD's (free BDD's, i.e., BDD's that test, on each path, each input variable at most once), and that FBDD-representations are often much more (sometimes even exponentially more) concise than OBDD-representations, we propose to work with a more general FBDD-based data structure. We show that FBDD's of a fixed type provide, similarly to OBDD's of a fixed variable ordering, canonical representations of Boolean functions, and that basic tasks of Boolean manipulation can be performed in terms of fixed typed FBDD's as efficiently as in terms of fixed ordered OBDD's. In order to demonstrate the power of the FBDD-concept we show that the verification of the circuit design for the hidden weighted bit function proposed by Bryant can be carried out efficiently in terms of FBDD's, while this is, for principal reasons, impossible in terms of OBDD's. | Introduction
The need for data structures for Boolean functions becomes obvious if one has applications
in mind such as circuit design, optimization, verification, testing, etc. Let us
consider e.g. basic problems of logic verification. By means of hardware description
languages, circuits can be described at a very high level of abstraction, which allows
the designer to specify the behavior of a circuit before realizing it. In order to validate
these specifications and in order to verify a designed circuit, with respect to
its specification, formal methods were developed that lead to problem descriptions
in terms of Boolean functions. The verification problem is then solved by analyzing
and manipulating these functions. Let us consider as an example the problem of
determining whether a combinational circuit C correctly implements a given specification
S. That is to test whether f_C = f_S, where f_C and f_S are the functions realized
by C and S, respectively. One possibility to do this [e.g. Eve91, FFK88, MWBS88]
is, first, to construct representations for f_C and f_S (usually in terms of the
primary inputs), and, second, to perform the equivalence test for both functions in
terms of these representations. Hence, efficient algorithms for solving the verification
task under consideration require
- efficient algorithms for deriving the representations of the involved Boolean
functions (e.g. from a circuit description) as well as
• efficient algorithms to solve the equivalence test or similar tests such as satisfiability
or tautology in terms of these representations.
In the past, a great variety of representations of Boolean functions such as truth-
tables, disjunctive (DNF) and conjunctive normal forms (CNF), Reed-Muller-expansions,
various types of formulas, Boolean circuits, or branching programs have been inves-
tigated. However, the demand for supporting the mentioned basic tasks in Boolean
manipulation became obvious only then when computer applications really had
started to work with more complex Boolean functions.
In order to come to a full understanding of the fundamental trade-off between
succinctness of the representation and efficiency of solving the basic tasks observe
that the complexity has to be measured in the length of the representations of
the input functions. Hence, working e.g. with truth-tables one has much time for
computations. However, such a representation requires in any case space resources
that are exponential in the number of primary inputs. On the other side, working
with e.g. formulas one often obtains very succinct and, consequently, space efficient
representations but solving e.g. the equivalence test becomes co-NP-hard.
Unfortunately, a systematic inspection of the different succinct representation schemes
[e.g. GM92a] has shown that almost all of them do not support efficient solutions
of the basic tasks of Boolean manipulation. Even worse, performing the basic tasks
often requires the solution of NP-hard problems. The only exceptions seem to be
BDD-based Boolean function representations [e.g. Ake78, Bry86, Mei91, GM92a]
that provide, with so-called OBDD's, the state-of-the-art data structure for Boolean
functions. They allow
1. canonical representations (and, hence, efficient solutions of the equivalence test
and similar tests such as satisfiability or tautology test), and allow
2. efficient performance of binary Boolean synthesis steps (and, hence, efficient
procedures for deriving the OBDD-representation of a Boolean function from
a given circuit description).
Due to these nice properties, OBDD's are successfully used in many applications such
as, for example, sequential circuit verification [e.g MB90, BCM90, Fil91], testing
[e.g. Bec92, KBS93], or logic optimization [Kar89]. For a survey see [Bry92].
Unfortunately, OBDD-representations are not very succinct and, hence, not very
space-efficient. The question arises whether there are more sophisticated BDD-representations
that are, first, more succinct and space-efficient than OBDD's and,
second, allow efficient solutions of the basic tasks of Boolean manipulation similarly
to OBDD's. A number of OBDD-extensions were proposed with the aim of overcoming
the mentioned disadvantages of OBDD's [e.g. ADG91, JPHS91, BJAAF92].
However, the obtained increase in space-efficiency of the representation has to be
paid in the mentioned approaches with co-NP-completeness of the equivalence test
[GM92a]. A natural candidate for an OBDD-extension that allows an efficient equivalence
test (at least probabilistically [BCW80]) are BDDs that read each input variable
at most once in the course of a computation, so-called FBDD's. Since OBDD's are specially
structured BDDs with this read-once-only property, FBDD's indeed generalize
the OBDD-concept.
2 BDD-based Data Structures for Boolean Functions
BDD-based data structures for Boolean functions use the following representations
schemes.
A binary decision diagram (BDD) over a set X_n of Boolean variables
is a directed acyclic graph with one source and at most two sinks labeled by 0 and
1. Each non-sink node v is labeled by a Boolean variable l(v) ∈ X_n, and has two
outgoing edges, one labeled by 0 and the other by 1. The computation path for an
input a = (a_1, ..., a_n) starts at the source. At an inner node with label x_i the
outgoing edge with label a_i is chosen. The BDD P represents a Boolean function f
if the computation path for each input a leads to the sink with label f(a).
f is sometimes denoted by f_P.
A BDD is called a free BDD (FBDD) if, on each path, each
variable is tested at most once.
An ordered binary decision diagram (OBDD) is an FBDD with the property that
on each path the variables are tested in a fixed order.
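To make these definitions concrete, the following sketch (hypothetical Python, not part of the original paper) models BDD nodes and the evaluation of an input along its computation path.

    # Illustrative sketch: a BDD node and the computation path for an input.
    class Node:
        def __init__(self, var=None, low=None, high=None, value=None):
            self.var = var        # label l(v), e.g. "x3"; None for a sink
            self.low = low        # 0-successor v0
            self.high = high      # 1-successor v1
            self.value = value    # 0 or 1 for a sink, None otherwise

    def evaluate(source, assignment):
        # Follow the computation path for 'assignment' (a dict variable -> 0/1).
        v = source
        while v.value is None:
            v = v.high if assignment[v.var] else v.low
        return v.value            # P represents f if this equals f(a) for every a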
Examples of an FBDD and an OBDD are given in Figure 1. Generally we draw
BDDs in such a way that, if the edges are not labeled, then we assume the left edge
to be labeled by 0 and the right edge by 1.
Figure 1. Example of an FBDD (a) and an OBDD (b) for the function f(x_1, ...).
Because of the remarkable property that (the logarithm of) BDD-size corresponds to
Turing machine space, BDD-representations have been extensively studied in complexity
theory under the name of branching programs [e.g. Weg87, Mei89]. The theoretical
interest in FBDD-representations, which are known in complexity theory as
read-once-only branching programs, arises from a similar correspondence
to eraser Turing machine space.
It is well known that each Boolean function f ∈ IB_n over X_n can be represented
in terms of BDDs, in terms of FBDD's or, for any variable ordering, in terms of
OBDD's. Optimal BDD-representations are, in comparison with two-level-represen-
tations such as DNFs or CNFs or with multi-level-representations such as Boolean
formulas, more succinct and space-efficient [e.g. Mei89]. However this succinctness
makes it often difficult or sometimes even infeasible to perform the basic tasks of
Boolean manipulation. For example, it is a co-NP-complete problem to test whether
two BDDs represent the same Boolean function [CHS74].
The situation changes dramatically if one works with restricted types of BDDs.
Then, it is sometimes possible to perform efficiently the basic tasks of Boolean
manipulation, although restriction properties have to be maintained in the course
of the computation. Bryant [Bry86] was the first who observed that OBDD's have
this property. In more detail, OBDD's possess the following outstanding properties.
Fact 1 [Bry86].
1. With respect to a fixed variable ordering the representation of a Boolean function
by means of a reduced OBDD is canonical, i.e. uniquely determined.
2. Let P' and P'' be two OBDD's with the same variable ordering. Each binary
Boolean synthesis step can be performed in time O(size(P')·size(P'')).
Due to these properties OBDD's defined over a fixed variable ordering are well suited
to be used as a data structure for Boolean functions [Bry86]. Indeed, nowadays
such OBDD's are the state-of-the-art data structure for Boolean functions utilized
in various packages for applications in CAD [e.g. BRB91]. However, the disadvantage
of OBDD data structures is the often very low space efficiency of the OBDD-
representations. Although wide classes of practically important Boolean functions
possess - at least with respect to some well-suited variable orderings - space-efficient
(i.e. polynomial size) OBDD-representations, there are many important functions
without such succinct representations. For example, there exist Boolean functions
such as integer multiplication, hidden weighted bit function (HWB), or indirect storage
access function (ISA) that can not be represented by OBDD's of polynomial size
[FHS78, Bry91, BHR91] for any variable ordering.
3 Computational Advantages of FBDD's
Although there are many Boolean functions, such as all symmetric functions, for
which optimal FBDD-representations are OBDD's, for many important functions
the restriction of FBDD's to OBDD's causes an exponential increase in size. Here
we are going to discuss some Boolean functions with the property that, for each
variable ordering, the OBDD-size is exponential in the FBDD-size.
The first idea to construct examples of Boolean functions with small (i.e. polynomial
size) FBDD-size and large (i.e. exponential size) OBDD-size is to consider Boolean
functions of the form f = (¬x_0 ∧ f_0) ∨ (x_0 ∧ f_1),
where f_0 and f_1 have polynomial size OBDD's for variable orderings π_0 and π_1,
and exponential size OBDD's for π_1 and π_0, respectively. If we replace the single
multiplexer variable x_0 by a larger multiplexer (see Figure 2), and if
we take functions f_i with polynomial size OBDD's which do not have a
good variable ordering in common, we can get a large class of Boolean functions with
small FBDD-size and large OBDD-size.
Figure 2.
Due to a similar idea, Fortune, Hopcroft and Schmidt have constructed in [FHS78]
an FBDD P of size smaller than 3n^2 such that each OBDD for f_P has size
at least 2^{n/2 - (log_3 n + 1)/2}. Verification methods and circuit realizations of FHS are
presented in [JBAF92].
A second interesting example is the indirect storage access function ISA. Its variables
are partitioned into m+1 groups consisting of a register group and m memory groups
(cf. Figure 3); the register selects one of the memory groups, the content of that group
in turn selects a variable x_j, and the output of ISA is x_j (and 0 otherwise).
Breitbart, Hunt III and Rosenkrantz have proven that any OBDD computing ISA
has size at least 2^{n/log n - 1} [BHR90]. On the other hand, they have shown that ISA
can be computed by an FBDD (even by a decision tree) of size at most 2n^2/log n.
Figure 3. Indirect storage access function ISA.
Finally we are going to discuss in more detail the hidden weighted bit function HWB
discussed by Bryant in [Bry91]. Let wt(x) be the number of ones in the input
assignment x = (x_1, ..., x_n), and let HWB(x) := x_{wt(x)}, where x_0 := 0.
Although each OBDD-representation of HWB is of exponential size [Bry91], it can
be computed by a polynomial (even a quadratic) size FBDD. To explain this we use
a construction due to Bryant [Bry92b]. It is based on the computation of appropriately
chosen restrictions H_{i:j} and G_{i:j} of HWB, where x_{i:j} represents the substring
x_i, ..., x_j; one can easily verify that these restrictions satisfy simple recursive
identities. The main idea is explained in Figure 4. Simultaneously we compute H_{1:n}
and G_{1:n}; equal restrictions on the same level are merged together. Since, on each
level, the relevant index k is fixed, we have at most 2n nodes on each level. Furthermore,
on any source-to-sink path each variable is tested. Hence, the FBDD size of HWB
is at most 2n^2.
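For reference, the function itself is easy to evaluate directly; the following sketch (hypothetical Python, using the convention x_0 = 0 from the definition above) is only meant to fix the indexing.

    # Illustrative sketch: direct evaluation of the hidden weighted bit function.
    def hwb(x):
        # x is a tuple (x_1, ..., x_n) of 0/1 values; HWB(x) = x_{wt(x)}, x_0 := 0.
        w = sum(x)                          # wt(x), the number of ones
        return 0 if w == 0 else x[w - 1]    # Python tuples are 0-indexed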
Figure 4. Construction of an FBDD P_HWB that computes the hidden weighted bit function HWB.
After having seen that there are important functions with small size FBDD's whose
OBDD-sizes are, with respect to any variable ordering, exponential, let us mention
a further computational advantage of FBDD's over other Boolean function repre-
sentations. Due to [BCW80] we can assign to each FBDD a signature that allows
to test functional equivalence of FBDD's probabilistically in polynomial time. This
property applied to OBDD's is the basis of very efficient hash techniques that are
used extensively in efficiently working OBDD-packages such as in the package of
Brace, Bryant and Rudell [BRB90]. The signatures of Boolean functions have found
many further applications [YBAF92, Kri93, KBS93]. Since signatures can be computed
for FBDD's similarly as for OBDD's these hash techniques can be used to
work with FBDD's, too. We remark only that no similar property is known for
other (compact) Boolean function representations.
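One way to realize such signatures (an illustration in the spirit of [BCW80], not the paper's own code, assuming the FBDD is given by Node objects as sketched above) is to evaluate the multilinear extension of f_P at a random point over a prime field: the read-once property guarantees that the value depends only on the represented function, and two different functions collide with probability at most n/p.

    # Illustrative signature: evaluate the multilinear extension of f_P at a
    # random point r over Z_p (p a large prime, chosen here for illustration).
    import random

    P_PRIME = (1 << 61) - 1

    def signature(source, r):
        memo = {}
        def sig(v):
            if v.value is not None:          # sinks evaluate to 0 or 1
                return v.value
            if id(v) not in memo:
                memo[id(v)] = ((1 - r[v.var]) * sig(v.low)
                               + r[v.var] * sig(v.high)) % P_PRIME
            return memo[id(v)]
        return sig(source)

    # r = {x: random.randrange(P_PRIME) for x in variables}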
4 FBDD Types
In order to extend the efficient manipulation of OBDD's to FBDD's we have to show
that it is possible to perform single Boolean synthesis steps in terms of FBDD's similarly
efficiently (i.e. in polynomial time) as in the case of OBDD's. To be more precise,
let ⊗ be a Boolean operator, ⊗ ∈ IB_2. Then, by SYN_FBDD(⊗)
we denote the problem of constructing an FBDD P for
the function f' ⊗ f'' from given FBDD-representations P' and P'' for f' and
f''. Sometimes we suppress the operator and write simply SYN_FBDD. SYN_OBDD
denotes the similar problem for OBDD's. Unfortunately, investigations of the complexity
of SYN_FBDD have shown that performing Boolean synthesis steps in terms
of FBDD's is NP-hard [GM92a]. The intractability of SYN_FBDD
is caused by the different ways the input FBDD's can test the variables. Indeed,
considering OBDD's instead of FBDD's the same effect can be observed: performing
a Boolean synthesis step (SYN_OBDD) with OBDD's of different variable orders
is NP-hard, too [GM92a]. However, if we restrict the problem to OBDD's of the
same variable order π (SYN_π-OBDD), the problem becomes efficiently solvable in time
O(size(P')·size(P'')). The goal of this section is to introduce formally the
notion of a type of an FBDD that generalizes the linear variable ordering of OBDD's
and that allows us to
• canonically represent Boolean functions in terms of FBDD's with respect to
any given complete type τ, and to
• efficiently perform a single Boolean synthesis step for FBDD's of the same type.
An (FBDD-)type is defined similarly as an FBDD with the only exception that it
possesses merely one sink that is labeled by a symbol t.
If P is an FBDD then tp(P) is derived easily from P by identifying the sinks of P.
Figure 5. An FBDD P and the type tp(P).
There are two syntactical reduction rules usually applied to BDDs, FBDD's, OBDD's,
and types in order to reduce their sizes without any functional change. An important
observation is that these reductions do not change substantially the way a BDD, an
FBDD, or an OBDD tests the input variables. This is the reason we can use these
reductions to generalize the notion of variable order to FBDD's.
Merging rule:
If two nonterminal nodes u and v have the same label l(u) = l(v) and the same
successors u0 = v0 and u1 = v1, then eliminate one of these two nodes and redirect
all incoming edges to the other node. Figure 6 illustrates the application of the merging
rule to the type tp(P) of Figure 5. Observe that, for each input assignment,
the sequence of variable tests remains invariant w.r.t. applications of the
merging rule. Moreover, if one considers BDDs as an algebraic structure,
then the application of the merging rule defines a congruence relation on the
set of nodes of the BDD. For this reason we sometimes speak of the merging
rule as an algebraic reduction.
Deletion rule:
If v0 = v1 for a nonterminal node v, then eliminate v and redirect all incoming
edges to v0. Figure 7 illustrates the application of the deletion rule. A node
to which the deletion rule can be applied is called simple reducible. In contrast
to the merging rule, the deletion rule decreases the information contained in a
type.
Figure 6. Application of the merging rule to tp(P) from Figure 5.
Figure 7. Application of the deletion rule to the type τ from Figure 6.
(Reduced FBDD's and types.)
An FBDD P is called reduced if neither the merging rule nor the deletion rule can be
applied to P.
A type τ or an FBDD P is said to be algebraically reduced if the merging rule cannot
be applied to τ or to P.
Figure 8. Algebraically reduced type red_a(tp(P)) for the type tp(P) of Figure 5.
Proposition 1. (Unique reduction.)
The reduced FBDD obtained from an FBDD P by applying merging and deletion
rule is uniquely determined and can be computed in linear time. Similarly, the
algebraically reduced FBDD and the algebraically reduced type obtained by applying
the merging rule are uniquely determined and can be computed in linear time.
Proof.
For an FBDD P we denote the reduced FBDD by red(P), and the algebraically reduced
FBDD by red_a(P). Similarly, for a type τ we denote by red_a(τ) the algebraically reduced
type.
First we prove that red_a(P) and red_a(τ) are uniquely determined. Two nodes u and v
are said to be algebraically congruent (u ∼_a v) if
1. l(u) = l(v), and
2. u0 ∼_a v0 and u1 ∼_a v1 (if u and v are inner nodes).
Obviously, ∼_a defines an equivalence relation. Interpreting the left-son-relation
(v0) and the right-son-relation (v1) as unary operations on the set of nodes and the
labels l(v) as unary predicates on the set {x_1, ..., x_n}, by means of
a few simple axioms P or τ can be considered as an algebra. Now it can be checked
easily that ∼_a is in fact a congruence relation. The factorization over this relation
produces red_a(P) or red_a(τ), respectively. This is true since FBDD's or types rooted
in algebraically congruent nodes are isomorphic. Hence, any two merging sequences of
maximal length merge all congruent nodes into a single node and lead to the same
result red_a(P) or red_a(τ).
The algebraic idea behind the proof of the uniqueness of red_a(P) and red_a(τ) cannot
be applied directly to red(P), since there is a fundamental difference between
the merging and the deletion rule. In terms of universal algebra this difference
can be explained by the fact that the relation ∼_a defined by the merging rule is a
congruence relation, while the relation to be defined in the following by means of the
congruence relation while the relation to be defined in the following by means of the
merging and the deletion rule is merely an equivalence relation. Two nodes u and v
of the FBDD P are said to be equivalent (u - v) if
1. u and v are algebraical congruent (u - a v), or
2. one of them is the only successor of the other node in P , or
3. there exist nodes
Observe that neither the deletion rule nor the merging rule affects the equivalence
of two nodes in P . However, it is clear that any application of the merging
or the deletion rule merges together two different equivalent nodes. It is easy to
see that in order to prove the uniqueness of red(P ) it suffices to prove that P can
be transformed into an FBDD all of whose equivalent nodes are merged together.
The proof can be done easily by induction on the difference between size(P ) and
the number of equivalence classes on P . If this difference is zero, i.e. if there do not
exist two different equivalent nodes in P then we have nothing to merge together. P
is reduced. But if the difference is positive then we can always find two equivalent
nodes which can be merged together by means of the deletion rule or the merging
rule. At the next step we can use the induction hypothesis.
What about the complexity of constructing red(P), red_a(P) or red_a(τ)? Reductions
of BDDs were already discussed by Akers [Ake78]. An efficient algorithm for the construction
of red(P) was developed by Bryant [Bry86]. By the way, OBDD-packages
[e.g. BRB90] do not use separate reduction procedures; reduction can be
performed, on average, by means of linearly many arithmetical operations. Finally,
the algorithm of Bryant was improved in [SW92] to run in deterministic linear time
by replacing the sorting procedure by a (linear time) bucket sort technique.
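A minimal bottom-up reduction along these lines can be sketched as follows (hypothetical Python, reusing the Node class from above and assuming a single shared 0-sink and 1-sink; this is an illustration, not the algorithm of [Bry86] or [SW92]).

    # Illustrative reduction: apply the deletion rule (equal successors) and the
    # merging rule (same label and same successors) bottom-up via a hash table.
    def reduce_bdd(source):
        unique = {}                  # (label, id(low), id(high)) -> node
        memo = {}                    # id(original node) -> reduced node
        def walk(v):
            if v.value is not None:  # shared sinks are already reduced
                return v
            if id(v) in memo:
                return memo[id(v)]
            lo, hi = walk(v.low), walk(v.high)
            if lo is hi:             # deletion rule
                res = lo
            else:                    # merging rule via hashing
                res = unique.setdefault((v.var, id(lo), id(hi)),
                                        Node(v.var, lo, hi))
            memo[id(v)] = res
            return res
        return walk(source)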
In order to identify FBDD's testing the variables in a similar way it suffices to
compare their types. We define the notion of a subtype in analogy to the notion of
a linear suborder.
Let τ be a type. A type τ' is called a subtype of τ (τ' ⪯ τ) if τ' can be
constructed from τ by applying the merging and the deletion rule.
Figure 9. A type τ and a subtype τ'.
It is easy to see that, with respect to ⪯, the set of all types constitutes a partially
ordered set.
(Complete type, FBDD of type τ.)
A type τ is complete if, on each source-to-sink path, each
variable of X_n is tested.
Let τ be a complete type. An FBDD P is of type τ if there exists a type τ' such that
tp(P) ⪯ τ' and red_a(τ') = red_a(τ).
Due to the definition of a type τ of an FBDD we can always assume that τ is algebraically
reduced. Sometimes we write τ-FBDD to indicate that an FBDD is of a
complete type τ. Obviously, a single FBDD can belong to several types. Variable
orderings provide very simply structured complete types, the so-called OBDD-types
(for an example see Figure 10). However, in Section 3 it was shown that for important
functions such OBDD-types are not optimal.
To give a more interesting example of a type of an FBDD we deduce the type
τ_HWB of the FBDD P_HWB for the hidden weighted bit function HWB
described in Section 3. Since each variable is tested on each source-to-sink path,
τ_HWB is a complete type. Figure 11 shows τ_HWB by giving types for
H_{1:n} (= HWB) and for G_{1:n}.
Figure 10. Example of an OBDD-type.
Proposition 2. (Efficient type-check for FBDD's)
Let P be an FBDD, and let τ be a complete type. Then it can be checked efficiently
whether P is of type τ.
Proof:
Since the apply algorithm to be presented in Section 5 always produces FBDD's
consistent with the given type, in most practical applications we do not need to
check whether an FBDD belongs to a predefined type. Hence it suffices to sketch a
test whether there exists a type τ' such that two given FBDD's P' and P''
are both of type τ' [GM92b]. In particular, this can be used to
answer the question whether P is of type τ.
Each node v of an FBDD or a type can be characterized by the set s_v of variables
that are tested on a v-to-sink path. s_v can be computed with a straightforward
traversal algorithm in time that is linear in the output. We say that two nodes u
and v are consistent if, for l(u) ≠ l(v), it is not the case that l(u) ∈ s_v as well as
l(v) ∈ s_u.
Now, if we run the algorithm synthesis (the apply procedure described in Section 5.2)
taking P' and P'' as inputs, then we can check whether the sources of every recursive
call of synthesis are consistent. (Let us remark that in this case the procedure
synthesis needs no input type and can work with any binary Boolean operation
since the resulting FBDD is not of interest.) By induction it can be easily proved
that all checked pairs of nodes are consistent if and only if there exists a common
type for P' and P''.
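The two ingredients of this test can be sketched as follows (hypothetical Python; the function names are not from the paper).

    # Illustrative sketch: the sets s_v of variables tested below a node, and the
    # consistency predicate used above.
    def tested_below(source):
        memo = {}
        def s(v):
            if v.value is not None:
                return frozenset()
            if id(v) not in memo:
                memo[id(v)] = frozenset([v.var]) | s(v.low) | s(v.high)
            return memo[id(v)]
        s(source)
        return memo                  # maps id(node) -> s_v

    def consistent(u, v, s_u, s_v):
        if u.value is not None or v.value is not None or u.var == v.var:
            return True
        return not (u.var in s_v and v.var in s_u)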
Figure 11. The type τ_HWB of the FBDD for the hidden weighted bit function given in Section 3, shown by means of the types for H_{1:n} (= HWB) and for G_{1:n}. (τ_HWB roots in the source labeled by x_7.)
5 Efficient Solutions of the Basic Tasks of Boolean Manipulation in Terms of FBDD's
5.1 Canonical FBDD Representations
Now we are ready to prove that, with respect to a given complete type, reduced
FBDD's, similarly to reduced OBDD's, provide a canonical representation for Boolean
functions. In order to do this we start with the following easy observation.
For each input variable x there exists a BDD that consists of exactly one nonterminal
node labelled by x, a 0-sink, and a 1-sink. Let us call this BDD (it is actually an
OBDD) the standard representation of x.
Proposition 3.
Let τ be a complete type over X_n, and let x ∈ X_n. The standard representation of
x is of type τ.
Proof:
Since τ is a complete type, the nodes labelled by x form a cut in τ. By means of the
deletion rule we can eliminate all predecessors of the nodes labeled by x in τ. After
iterated applications of the merging rule we get a type τ' such that a single node
labelled by x is the only predecessor of the sink t. Finally, using the deletion rule
we can successively construct from τ' the standard representation of x.
Theorem 4. (Canonical FBDD representation.)
Let τ be a complete type over X_n and let f ∈ IB_n. There is, up to isomorphism,
exactly one reduced FBDD of type τ that represents f.
Proof.
There is exactly one complete binary decision tree T for f such that red_a(tp(T)) =
red_a(τ). Let P be an FBDD for f of type τ. It is easy to see that tp(P) ⪯ tp(T).
Since P and T represent the same Boolean function, P can be constructed from T
by applying the merging and deletion rules. As a consequence of Proposition 1 we
get red(P) = red(T).
As an easy consequence of Theorem 4 and Proposition 1 we obtain an efficient
algorithm that solves the equivalence problem EQU_τ-FBDD.
Corollary 5.
EQU_τ-FBDD, the equivalence of two FBDD's P' and P'' of type τ, can be decided in
linear time O(size(P') + size(P'')).
Let us mention that the best result for solving EQU_FBDD is a probabilistic polynomial
time algorithm [BCW80].
5.2 Efficient Performance of Boolean Synthesis Steps
In the following we show how the efficient APPLY procedure for OBDD's of the
same variable ordering [Bry86] can be extended to FBDD's of the same type. In the
meantime, similar results have been obtained in [SW92].
Theorem 6. (Efficient Boolean synthesis of FBDD's.)
Let ⊗ be a binary Boolean operation, and let P', P'' be two FBDD's of the same
complete type τ. An FBDD P of type τ representing the Boolean function f_{P'} ⊗ f_{P''}
can be constructed in time O(size(τ)·size(P')·size(P'')).
Before proving the theorem we remark only that in many cases binary Boolean
synthesis steps for FBDD's can be performed in quadratic instead of cubic time.
This is true, for instance, if we consider FBDD's of bounded width types or if at least
one of the input FBDD's is large enough (i.e. each variable appears on each source-to-sink
path). Then we can eliminate the factor size(τ) and obtain a time bound
of O(size(P')·size(P'')). The quadratic upper bound O(size(P')·size(P'')) can be
obtained also if we do not require as result an FBDD of type τ [GM92b]. However,
since even for small input FBDD's such as the standard representations (i.e. FBDD's of
size 1), for certain types τ the result of the synthesis can be of the order of size(τ),
there is no hope of eliminating the factor size(τ) from the general upper bound.
Proof:
We start with a brief sketch of the synthesis algorithm.
Algorithm for SYN_τ-FBDD
Input:
- a binary Boolean operation ⊗ ∈ IB_2,
- a complete type τ,
- two FBDD's P' and P'' of type τ.
Output:
- an FBDD P of type τ that represents f_{P'} ⊗ f_{P''}.
begin
synthesis(⊗, τ, P', P'', Q);
compute P := red(Q);
return(P);
end.
We now give a short recursive description of the procedure synthesis(⊗, τ, P', P'', Q),
which is a generalization of the apply procedure proposed in [Bry86].
Let top(τ) denote the top variable (the label of the source) of τ. Then τ|_{top(τ)=α} is
the type rooted in the α-successor, α ∈ {0, 1}, of the source of τ. Similar notations
are used in the case of FBDD's. The terminal case is reached if one of the two input
FBDD's P' and P'' represents a constant. Then the recursion stops, and the output
FBDD can be derived by modifying the sinks of the other FBDD. For instance, if P'
is the 1-sink, then each β-sink of P'' has to be replaced by a (1 ⊗ β)-sink, β ∈ {0, 1}.
begin
if (P' or P'' is a sink) /* terminal case */
then return the result; /* the construction of P is straightforward */
else
construct P such that top(P) = top(τ) and, for α ∈ {0, 1}, P|_{top(τ)=α} is obtained
by the recursive call of synthesis on τ|_{top(τ)=α}, P'|_{top(τ)=α} and P''|_{top(τ)=α};
return(P);
end.
For any ⊗ ∈ IB_2, by means of the classical Shannon expansion applied in the form
f' ⊗ f'' = (¬x ∧ (f'|_{x=0} ⊗ f''|_{x=0})) ∨ (x ∧ (f'|_{x=1} ⊗ f''|_{x=1})),
each recursion step of synthesis(⊗, τ, P', P'', Q) can easily be verified. Let us only
remark that if P' and P'' are of type τ then P'|_{x=α} and P''|_{x=α} are of type τ|_{x=α}. Since
modifications of the sinks of an FBDD do not change its type we get, as result, an
FBDD P of type τ.
To improve the running time we use a global table T of size size(τ) × size(P') ×
size(P''). Let us consider a recursive call synthesis(⊗, τ_0, P'_0, P''_0, Q). Since the restrictions
Q|_{x=α} have the property that either x = top(Q) or x does not appear in Q, the
sources of τ_0, P'_0 and P''_0 correspond to nodes v, v', v'' of τ, P' and P'', respectively. If
synthesis is supported by the table T such that T[v, v', v''] contains a pointer to the
result of the corresponding synthesis call (or nil if synthesis was not called on these
nodes), we can save much computation and bound the running time by the size of
T.
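For illustration, the recursion together with the table T can be sketched as follows (hypothetical Python; it assumes Node objects and shared 0/1-sinks SINKS = {0: ..., 1: ...} as in the earlier sketches, and that both operands are FBDD's of the complete type represented by t; it is a sketch of the idea, not the paper's implementation).

    # Illustrative type-guided apply: follow the complete type t, cofactor both
    # operands on t's top variable, and memoize results on triples (t, p, q).
    def synthesis(op, t, p, q, memo=None, unique=None):
        if memo is None:
            memo, unique = {}, {}
        key = (id(t), id(p), id(q))
        if key in memo:
            return memo[key]
        if t.value is not None:             # the type's single sink: p, q are sinks
            res = SINKS[op(p.value, q.value)]
        else:
            x = t.var
            def cof(r, a):                  # the restriction r|_{x=a}
                if r.value is None and r.var == x:
                    return r.high if a else r.low
                return r                    # x does not occur below r
            r0 = synthesis(op, t.low,  cof(p, 0), cof(q, 0), memo, unique)
            r1 = synthesis(op, t.high, cof(p, 1), cof(q, 1), memo, unique)
            if r0 is r1:                    # deletion rule
                res = r0
            else:                           # merging rule via a unique table
                res = unique.setdefault((x, id(r0), id(r1)), Node(x, r0, r1))
        memo[key] = res
        return res

    # Example: synthesis(lambda a, b: a & b, tau, P1, P2) for the conjunction.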
Let us remark an important property of this straightforward synthesis algorithm.
The procedure synthesis generates as result an FBDD P which is not reduced.
Moreover, each variable appears on each source-to-sink path in P. That is why, for
the next call of synthesis applied to P and P_new as input (P_new is any FBDD of type
τ), we do not need the type τ anymore (the information of τ is always encoded in P).
For repeated applications of synthesis we thus get a quadratic upper bound
O(size(P)·size(P_new)) for the running time. As an example, let us mention that the
FBDD P_HWB for the hidden weighted bit function described in Section 3 encodes
all information of the type τ_HWB (Section 4). Hence, working with P_HWB we do
not need the type τ_HWB at all. (For an application we refer to Section 6.)
Finally we remark that the algorithm presented in the proof of Theorem 6 is not
optimal. The programming techniques used e.g. in the OBDD-package developed
by Brace, Rudell and Bryant [BRB90] can be applied also in the case of FBDD's.
For instance, with the help of an appropriate hash table we can force synthesis to
produce only reduced FBDD's. Hence, no reduction algorithm is needed.
6 FBDD's versus OBDD's
Let us start the comparison between the FBDD data structure and the OBDD data
structure with a general remark. Since both data structures provide canonical representations,
since binary Boolean synthesis steps can be performed, in both cases, in
quadratic time (at least for "large" inputs, which is the interesting case in practice),
and since FBDD's provide more (sometimes even exponentially more) concise
representations than OBDD's, in any application that is based on these properties
it makes sense to use FBDD's instead of OBDD's. However, since OBDD's are
especially easily structured FBDD's, and since there is a great variety of reasonable
heuristics [e.g. FS90, BRKM91] for designing efficient OBDD's, it seems to be
a meaningful strategy to work with OBDD's as long as they fit into the computer.
Only when the OBDD's become too large should one work with the more efficient
and sophisticated structure of FBDD's.
Before we start to demonstrate the power of the FBDD-concept by showing that the
verification of the circuit design for the hidden weighted bit function HWB given in
[Bry91] can be efficiently carried out in terms of FBDD's, let us present some ideas
on how to perform variable quantification efficiently in terms of FBDD's.
6.1 FBDD's and variable quantifications. First insights.
Besides performing two-valued logical synthesis, some applications of OBDD's
are based on the possibility of an efficient performance of variable quantification
[e.g. CMB90, Bry92]. For x ∈ X_n and f ∈ IB_n variable quantifications are defined
by the identities (∃x)f = f|_{x=0} ∨ f|_{x=1} and (∀x)f = f|_{x=0} ∧ f|_{x=1}.
Starting with an OBDD for f these operations can be performed by deriving OBDD's
for both restrictions f|_{x=0} and f|_{x=1} and carrying out the corresponding Boolean synthesis
step. However, in terms of FBDD's the situation is more difficult. Starting
with an FBDD P for f, similarly to the case of OBDD's one can easily construct
the FBDD's P_0 and P_1 for the restrictions f|_{x=0} and f|_{x=1} of f by deleting in P
each node v labelled by x and, to obtain P_0, replacing it by the 0-successor node v0,
or, to obtain P_1, by the 1-successor node v1. But now we get into trouble. Since,
in general, P_0 and P_1 are not of the same FBDD-type, no efficient algorithm for
performing the necessary Boolean synthesis step for P_0 and P_1 is known [GM92b].
Nevertheless there are many important situations for which, also in terms of
FBDD's, efficient quantification is possible. In the following we describe two paradigmatic
situations.
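The construction just described, together with the quantification identity, can be sketched as follows (hypothetical Python, reusing Node and the synthesis sketch from Section 5.2; the caveat about incompatible types applies exactly as explained above).

    # Illustrative sketch: the restriction P|_{x=a} (x-nodes are deleted and
    # replaced by their a-successor) and existential quantification on top of it.
    def restrict(v, x, a, memo=None):
        if memo is None:
            memo = {}
        if v.value is not None:
            return v
        if id(v) in memo:
            return memo[id(v)]
        if v.var == x:                       # read-once: x cannot occur below
            res = v.high if a else v.low
        else:
            res = Node(v.var, restrict(v.low, x, a, memo),
                       restrict(v.high, x, a, memo))
        memo[id(v)] = res
        return res

    def exists(P, x, tau_common):
        # (Ex) f = f|_{x=0} OR f|_{x=1}; this needs a type tau_common shared by
        # both restrictions, which is guaranteed for OBDD's and for the special
        # FBDD situations described below, but not for arbitrary FBDD's.
        return synthesis(lambda a, b: a | b, tau_common,
                         restrict(P, x, 0), restrict(P, x, 1))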
First let us consider an FBDD P of type τ, and let all nodes of τ that are labeled
by a variable x be simple reducible (i.e. reducible by means of the deletion
rule). Then it is guaranteed that the FBDD's for the restrictions constructed from
P are of the same type. Hence quantification can be performed efficiently, similarly as
in the case of OBDD's.
In order to illustrate the importance of that observation let us discuss an interesting
application in the verification of switch-level circuits [FMK90]. A method of modeling
a transistor as a switch is as follows: if a switch (transistor) is on, then source
and drain have the same value. Hence, the circuit shown in Figure 12 is modeled
in propositional logic by an equation (1) over the signals a, b, c, e and d,
where d is an internal signal.
Figure 12. A transistor-level circuit.
Since d is not observable from outside the circuit, d can be considered to be an existentially
quantified variable. To get the relationship of only the external signals (a, b, c, e),
the internal signal d must be eliminated. If the logical description of the circuit
(e.g. (1)) is expressed in terms of FBDD's, then this operation can be efficiently
implemented if quantification of internal signals can be efficiently performed. Since,
from the very beginning, the external signals are known, the following heuristic for
designing well-suited FBDD-types guarantees that quantification of internal signals
can be done efficiently. We separately create
• an appropriate FBDD-type τ_e that is complete with respect to the variables
corresponding to the external signals, and
• an appropriate order π_i of the variables corresponding to the internal signals.
(Obviously, π_i defines an OBDD-type that is complete with respect to the
internal variables.)
Then, starting with τ_e, a complete FBDD-type τ is designed by including π_i piecewise
into τ_e as occasion demands. As a result we get a type where we can quantify, restrict
and compose in the internal variables without problems. Without going into further
details we mention merely that this technique can generally be successfully applied
to all approaches that are based on verifying modularized circuits.
A second, more general situation where variable quantification can be performed efficiently
in terms of FBDD's is the following. Assume the set X_n of variables is
partitioned into blocks X^(1), ..., X^(k). We consider types τ that are segmented,
i.e. that consist of segments τ_r, 1 ≤ r ≤ k, such that segment τ_r starts with exactly
one node and tests the variables from X^(r). See Figure 13 for an example.
Now, if P is an FBDD of a segmented type τ, then it is easy to see that restricting
simultaneously all variables of a block provides FBDD's that are again of type
τ. Hence, due to Theorem 6, the variables of each block can be (simultaneously)
quantified efficiently, although in the course of restricting the single variables of a
block there emerge FBDD's that, in general, are not of type τ. Let us mention that
in many applications we do not quantify single variables but blocks of variables. For
example, consider any FBDD P of the type τ cited in Figure 13. If we restrict x_{k+1}
to 0 and to 1, respectively, then we obtain two inconsistent FBDD's whose types
are incomparable. This is true since, for the input assignment with x_{k+1} = 0,
the variable x_{k+2} is tested before x_{k+3}, while in P|_{x_{k+1}=1} the variable x_{k+3} is tested
before x_{k+2}. Figure 14 shows the type τ|_{x_{k+1}=0} of P|_{x_{k+1}=0} and the type τ|_{x_{k+1}=1} of P|_{x_{k+1}=1}.
Figure 13. Segment of a segmented type τ.
Figure 14. Segments of the restrictions τ|_{x_{k+1}=0} and τ|_{x_{k+1}=1} of the type τ cited in Figure 13.
6.2 The power of FBDD's in circuit verification.
The hidden weighted bit function.
In the following we review a result described in detail in [GM93b] that shows that
the verification of the circuit design proposed by Bryant in [Bry91] for the hidden
weighted bit function HWB can be carried out efficiently in polynomial space using
FBDD's. This demonstrates the power of the FBDD-concept since in [Bry91] it was
shown that working with OBDD's exponential space is needed.
Recall from Section 3 that the hidden weighted bit function HWB can be mathematically
specified by means of the recursive equations for the restrictions H_{i:j} and G_{i:j} given there.
From the above description an FBDD P_HWB for HWB can be derived easily (see
Section 3). Its type τ_HWB was drawn in Section 4.
The circuit design C for HWB given in [Bry91] leads to a nearly optimal VLSI
implementation of area × time^2 complexity O(n^{1+ε}) for any ε > 0. The main idea
of this design is illustrated in Figure 15.
Figure 15. Circuit design C for the hidden weighted bit function HWB. (The w_i give the weight of the input.)
From Figure 15 it becomes obvious how C looks for any n ∈ IN. For simplicity
we assume n = 2^m - 1 (so that log(n + 1) = m).
In order to verify C in terms of FBDD's of type τ_HWB we have to construct a reduced
FBDD P_C of type τ_HWB for the function f_C computed by C, and then to test its
equivalence (in fact the equality, since FBDD-representations of a fixed type are
canonical) with P_HWB. For constructing P_C we introduce, motivated by the logical
structure of C, a few new internal variables w_{m-1}, ..., w_0 which
satisfy the equation wt(x) = w_{m-1} 2^{m-1} + ... + w_1 2 + w_0. The FBDD P cited in Figure 16 exactly
mirrors the main design idea for C.
Figure 16. The output function of the circuit for HWB in terms of a τ-FBDD over the primary inputs and the new internal variables.
Now let us consider the type τ cited in Figure 17. It is easy to see that P is
an FBDD of type τ. We start to eliminate all internal variables in the order
w_{m-1}, ..., w_0. We are going to prove that all FBDD's arising in this
elimination process are of small size. Let us mention that, as in the example of
Section 6.1, we can quantify, restrict and compose in the internal variables. Let f_i
denote the Boolean function computed by P after the elimination of the leading
internal variables. In order to describe f_i and its FBDD we consider
the Shannon decomposition of f_i in the remaining internal variables, which can be illustrated
by the tree cited in Figure 18.
Figure 17. Definition of the type τ.
Figure 18. Visualization of the cofactors of the Shannon expansion of f_i in the remaining internal variables.
The function f_i^(k) can be described easily in the following way. We partition the
set of input variables x_0, x_1, ..., x_n (x_0 denotes the constant 0) into
groups of 2^{m-i} variables.
With wt_i(x) we choose the appropriate group; k is the offset we need to determine
the output variable, and k depends only on the remaining internal variables w.
To extract from C an FBDD P_C for its output we do the following. We start with
P. Then we construct an FBDD of type τ for w_{m-1} and eliminate w_{m-1} from P
by means of the compose operation. In the second step, by means of the compose
operation, we eliminate in the same manner w_{m-2}. At the last step we eliminate
w_0. What we get is an FBDD P_C of type τ_HWB for the circuit C. In more detail, we
begin with P and construct successively FBDD's for the functions f_i arising in this
elimination process. Under the assumption that we use for each
single synthesis step the programming techniques (hash table, hash-based cache, etc.)
presented in [BRB90], in usual practical implementations the expected space
complexity for extracting P_C equals the size of the maximal reduced FBDD derived
for one of the above mentioned functions. Since C computes HWB, and since P_C
is a reduced FBDD of type τ_HWB, it is equal to P_HWB and, hence, of quadratic size.
That is why the space complexity of the verification has to be at least quadratic.
What about the FBDD-sizes of the functions f_i and w_i? In order to estimate the
FBDD-size of the functions f_i in the above process it is sufficient to design small
FBDD's of type τ for the f_i^(k): using the tree of Figure 18, the size of the resulting
FBDD for f_i is at most a factor of n larger than the size of
such a τ_HWB-FBDD for f_i^(k).
Let us mention some useful properties of τ_HWB. If we exclude the last two levels
(the sink t and its predecessors), each node of τ_HWB can be labeled by a function G_{i:j}
or H_{i:j}. The source is labelled by H_{1:n}. We claim that the number of ones we have
tested on any path from H_{1:n} to H_{i:j} or to G_{i:j} depends only on i and j, respectively. Hence, we
can consider τ_HWB to be a counting schema. The proof of this claim follows easily
by induction: H_{i:j} or G_{i:j} can be a 0-successor or a 1-successor of H_{i:j+1} or
G_{i-1:j}, respectively. If the induction hypothesis holds for H_{i:j+1} and G_{i-1:j}, then it
is true for H_{i:j} and G_{i:j}, too.
An easy consequence of the above mentioned counting property of τ_HWB is that each
symmetric Boolean function, including the functions w_i, can be realized by means of
FBDD's of type τ_HWB of at most quadratic size. Now we are going to design a
τ_HWB-FBDD for the f_i^(k). Without loss of generality let
us take the, with respect to τ_HWB, uniquely determined complete decision
tree of f_i^(0) of type τ_HWB. We consider the restrictions of f_i^(0) computed on level l.
The number of ones tested on a path from the source to a node on level l labeled by x_s
has at most two values. If w is the number of ones for u and v,
then u and v correspond to the same restriction if and only if x_{w div 2^{m-i-1}} has the
same value on the source-to-u and on the source-to-v paths. Altogether the nodes
on level l labeled by x_s compute at most four different restrictions. Hence, the
number of restrictions on level l is at most 4n and, hence, the size of the τ_HWB-FBDD
for f_i^(0) is at most 4n^2. Analogously we obtain the same upper bound for f_i^(k) for
any k. That is why the FBDD of f_i has size at most 4n^3. We remark only that the
size can be further reduced if common parts of the τ_HWB-FBDD's of the f_i^(k) are
merged together.
Altogether it is shown that C can be verified in terms of FBDD's of type τ_HWB with
low degree polynomial space and time. Since, for principal reasons, this is impossible
in terms of OBDD's, this example makes the power of the FBDD-concept evident.
7 Conclusions
In this paper we extend the feasible manipulation in terms of OBDD's to FBDD's.
In detail we have shown that
• FBDD's provide much more (sometimes even exponentially more) efficient
representations for Boolean functions than OBDD's do (Section 3),
• reduced FBDD's of a fixed type provide canonical representations (Theorem 4), and
• basic tasks of Boolean manipulation such as performing a Boolean synthesis
step, testing equivalence, satisfiability or tautology can be performed similarly
efficiently in terms of FBDD's as in terms of OBDD's (Theorem 6, Corollary 5).
Instead of giving experimental evidence we prove formally, for each problem size n,
that the benchmark circuit for the hidden weighted bit function proposed in [Bry91]
can be verified in terms of at most cubic size FBDD's (Section 6.2), while it was shown
by Bryant that verification in terms of OBDD's in any case needs exponential space
complexity.
What turned out to be more difficult to do in terms of FBDD's than in terms of
OBDD's are operations that are based on restrictions (e.g. variable quantification or
composition). However, we have characterized (Section 6.1) some situations that frequently
occur in practical applications for which these operations can be performed
efficiently in terms of FBDD's, too.
An open and interesting problem is to develop heuristics for creating good types.
Of course, this is an extension of the problem of determining good variable orders
for OBDD applications. Since there is much greater freedom in defining types than
in defining orders, this problem seems to be very tricky. However, all that is known
about OBDD's can be used when working with FBDD's. For example, a useful strategy
is to work with OBDD's (as extremely easily structured
FBDD's) as long as the OBDD's under consideration fit into the computer. Only
when the OBDD's become too large should one (try to) work with the more efficient
and sophisticated FBDD's.
--R
Binary Decision Diagrams
Boolean Satisfiability and Equivalence Checking Using General Binary decision Diagrams
Synthesis for Testability: Binary decision Diagrams
Equivalence of Free Boolean Graphs Can Be Decided Probabilistically in Polynomial Time
Efficient Implementation of a BDD Package
On the Complexity of VLSI Implementations and Graph Representations of Boolean Functions with Applications to Integer Multipli- cation
Symbolic Boolean Manipulation with Ordered Binary decision Diagrams
Symbolic Model Checking: states and beyond
Sequential Circuit Verification Using Symbolic Model Checking
Efficient Verification of Multiplier and Other Different Functions Using IBDDs
Heuristics to Compute Variable Orderings for Efficient Manipulation of Ordered Binary decision Diagrams
Verifying Temporal Properties of Sequential Machines without Building Their State Diagrams
Verifikation digitaler Systeme
The Complexity of Equivalence and Containment for Free Single Program Schemes
Evaluation and Improvements of Boolean Comparison Method Based on Binary decision Diagrams
A Method for Symbolic Verification of Sunchronous Circuits
Automatic and Semi-Automatic Verification of Switch-Level Circuits with Temporal Logic and Binary decision Diagrams
Finding the Optimal Variable Ordering for Binary decision Diagrams
Computers and Intractability
Analysis and Manipulation of Boolean Functions in Terms of decision Graphs
Efficient Analysis and Manipulation of OBDD's Can Be Extended to Read-once-only Branching Programs
Frontiers of Feasible and Probabilistic Feasible Boolean Manipulation with Branching Programs
Combinational Logic Verification with FBDD's
Functional Partitioning for Verification and Related Problems
Extended BDD's
Using if-then-else DAGs for Multi-Level Logic Minimization
PLATO: A Tool for Computation of Exact Signal Probabilities
Modified Branching Programs and Their Computational Power
Branching Programs - An Efficient Data Structure for Computer-Aided Circuit Design
Logic Verification Using Binary decision Diagrams in a Logic Synthesis Environ- ment
Graph Driven BDDs - A New Data Structure for Boolean Functions
The Complexity of Boolean Functions
--TR
Graph-based algorithms for Boolean function manipulation
Modified branching programs and their computational power
Finding the Optimal Variable Ordering for Binary Decision Diagrams
Using if-then-else DAGs for multi-level logic minimization
On the Complexity of VLSI Implementations and Graph Representations of Boolean Functions with Application to Integer Multiplication
Efficient implementation of a BDD package
Sequential circuit verification using symbolic model checking
Shared binary decision diagram with attributed edges for efficient Boolean function manipulation
Heuristics to compute variable orderings for efficient manipulation of ordered binary decision diagrams
Symbolic Boolean manipulation with ordered binary-decision diagrams
Boolean Satisfiability and Equivalence Checking Using General Binary Decision Diagrams
The Complexity of Equivalence and Containment for Free Single Variable Program Schemes
Synthesis for Testability
Frontiers of Feasible and Probabilistic Feasible Boolean Manipulation with Branching Programs
Analysis and Manipulation of Boolean Functions in Terms of Decision Graphs
Verifying Temporal Properties of Sequential Machines Without Building their State Diagrams
Gate-Delay-Fault Testability Properties of Multiplexor-Based Networks
--CTR
J. Jain , K. Mohanram , D. Moundanos , I. Wegener , Y. Lu, Analysis of composition complexity and how to obtain smaller canonical graphs, Proceedings of the 37th conference on Design automation, p.681-686, June 05-09, 2000, Los Angeles, California, United States
Chunghee Kim , Luciano Lavagno , Alberto Sangiovanni-Vincentelli, Free MDD-based software optimization techniques for embedded systems, Proceedings of the conference on Design, automation and test in Europe, p.14-19, March 27-30, 2000, Paris, France
Sieling , Ingo Wegener, A Comparison of Free BDDs and Transformed BDDs, Formal Methods in System Design, v.19 n.3, p.223-236, November 2001
Olaf Schröer , Ingo Wegener, The Theory of Zero-Suppressed BDDs and the Number of Knight's Tours, Formal Methods in System Design, v.13 n.3, p.235-253, Nov. 1998
Wolfgang Günther , Rolf Drechsler, Efficient manipulation algorithms for linearly transformed BDDs, Proceedings of the 1999 IEEE/ACM international conference on Computer-aided design, p.50-54, November 07-11, 1999, San Jose, California, United States
R. Drechsler , A. Sarabi , M. Theobald , B. Becker , M. A. Perkowski, Efficient representation and manipulation of switching functions based on ordered Kronecker functional decision diagrams, Proceedings of the 31st annual conference on Design automation, p.415-419, June 06-10, 1994, San Diego, California, United States
Sieling, The complexity of minimizing and learning OBDDs and FBDDs, Discrete Applied Mathematics, v.122 n.1-3, p.263-282, 15 October 2002
Wolfgang Günther , Rolf Drechsler, Minimization of free BDDs, Integration, the VLSI Journal, v.32 n.1-3, p.41-59, November 2002
Wolfgang Günther , Rolf Drechsler, Efficient Minimization and Manipulation of Linearly Transformed Binary Decision Diagrams, IEEE Transactions on Computers, v.52 n.9, p.1196-1209, September
Jayram S. Thathachar, On separating the read-k-times branching program hierarchy, Proceedings of the thirtieth annual ACM symposium on Theory of computing, p.653-662, May 24-26, 1998, Dallas, Texas, United States
Christoph Meinel , Anna Slobodová, A Unifying Theoretical Background for Some Bdd-based Data Structures, Formal Methods in System Design, v.11 n.3, p.223-237, Oct. 1997
Beate Bollig, A very simple function that requires exponential size nondeterministic graph-driven read-once branching programs, Information Processing Letters, v.86 n.3, p.143-148, 16 May
Bogdan J. Falkowski , Chip-Hong Chang, Forward and Inverse Transformations Between Haar Spectra and Ordered Binary Decision Diagrams of Boolean Functions, IEEE Transactions on Computers, v.46 n.11, p.1272-1279, November 1997
Jawahar Jain , William Adams , Masahiro Fujita, Sampling schemes for computing OBDD variable orderings, Proceedings of the 1998 IEEE/ACM international conference on Computer-aided design, p.631-638, November 08-12, 1998, San Jose, California, United States
Henrik Reif Andersen , Henrik Hulgaard, Boolean expression diagrams, Information and Computation, v.179 n.2, p.194-212, December 15, 2002
Amit Narayan , Jawahar Jain , M. Fujita , A. Sangiovanni-Vincentelli, Partitioned ROBDDsa compact, canonical and efficiently manipulable representation for Boolean functions, Proceedings of the 1996 IEEE/ACM international conference on Computer-aided design, p.547-554, November 10-14, 1996, San Jose, California, United States
Randal E. Bryant, Binary decision diagrams and beyond: enabling technologies for formal verification, Proceedings of the 1995 IEEE/ACM international conference on Computer-aided design, p.236-243, November 05-09, 1995, San Jose, California, United States
Amit Narayan , Adrian J. Isles , Jawahar Jain , Robert K. Brayton , Alberto L. Sangiovanni-Vincentelli, Reachability analysis using partitioned-ROBDDs, Proceedings of the 1997 IEEE/ACM international conference on Computer-aided design, p.388-393, November 09-13, 1997, San Jose, California, United States
Adnan Darwiche, A compiler for deterministic, decomposable negation normal form, Eighteenth national conference on Artificial intelligence, p.627-634, July 28-August 01, 2002, Edmonton, Alberta, Canada
Stephen Ponzio, A lower bound for integer multiplication with read-once branching programs, Proceedings of the twenty-seventh annual ACM symposium on Theory of computing, p.130-139, May 29-June 01, 1995, Las Vegas, Nevada, United States
Beate Bollig , Stephan Waack , Philipp Woelfel, Parity graph-driven read-once branching programs and an exponential lower bound for integer multiplication, Theoretical Computer Science, v.362 n.1, p.86-99, 11 October 2006
Ingo Wegener, BDDs: design, analysis, complexity, and applications, Discrete Applied Mathematics, v.138 n.1-2, p.229-251, 29 March 2004
Adnan Darwiche, Decomposable negation normal form, Journal of the ACM (JACM), v.48 n.4, p.608-647, July 2001
Rina Dechter , Robert Mateescu, AND/OR search spaces for graphical models, Artificial Intelligence, v.171 n.2-3, p.73-106, February, 2007 | tautology test;data structure;satisfiability test;OBDD;circuit design;total computation;equivalence test;data structures;canonical representations;boolean manipulation;boolean functions;logic design;boolean function manipulation |
627003 | Buffered Banks in Multiprocessor Systems. | A memory design based on logical banks is analyzed for shared memory multiprocessor systems. In this design, each physical bank is replaced by a logical bank consisting of a fast register and subbanks of slower memory. The subbanks are buffered by input and output queues which substantially reduce the effective cycle time when the reference rate is below saturation. The principal contribution of this work is the development of a simple analytical model which leads to scaling relationships among the efficiency, the bank cycle time, the number of processors, the size of the buffers, and the granularity of the banks. These scaling relationships imply that if the interconnection network has sufficient bandwidth to support efficient access using high-speed memory, then lower-speed memory can be substituted with little additional interconnection cost. The scaling relationships are shown to hold for a full datapath vector simulation based on the Cray Y-MP architecture. The model is used to develop design criteria for a system which supports 192 independent reference streams, and the performance of this system is evaluated by simulation over a range of loading conditions. | Introduction
The gap between memory speed and processor
request rate is increasing rapidly in high
performance systems. This gap is due to a
decrease in processor cycle time, the use of
superscalar and other multiple issue mecha-
nisms, the increase in the number of processors
in shared memory systems, and the demands
of gigabit per second network commu-
nication. In addition, designers have sought
to replace expensive SRAM memories with
cheaper, slower DRAMS in order support dramatically
increased main memory sizes at a
reasonable cost.
In the face of these demands, several manufacturers
have introduced more complex circuitry
on their DRAM chips in order to reduce
the effective memory access time. Mit-
subishi, for example, has introduced a proprietary
cached DRAM. This chip has a small
SRAM which reduces the memory access time
if the reference is contained in the SRAM
[9]. Another cached DRAM has been developed
by Rambus [6]. Other approaches include
synchronous DRAM technology [14] and
enhanced DRAM technology [4]. Pipelined
memories have also been proposed
[11]. The effect of such memory hierarchies
in high-performance memory systems has not
been extensively studied.
In an ordinary interleaved memory, the
memory cycle time is the minimum time
required between successive references to a
memory module. The cycle time regulates
how quickly a processor can fill the memory
pipeline. Conflicts due to bad reference patterns
can cause the processor to block. The
latency is the time it takes for a read request
to navigate the memory pipeline and return a
value to the processor.
In hierarchical memory systems, such as
those which contain caches at the bank level,
the memory cycle time and the latency are
no longer constant. Caching can reduce both
the effective cycle time and the latency. This
paper explores buffering as an alternative or
as a supplement to caching at the chip level.
The proposed design is based on a buffering
scheme called logical bank buffering in which
physical banks are subdivided and buffered as
described in Section II. The principal contribution
of this paper is the development of a simple
model and the derivation of scaling relationships
among the efficiency, the bank cycle
time, the number of processors, the size of the
buffers, and the granularity of the banks. The
goal of the logical bank design is to provide
a mechanism for using large, slower memories
with a moderate number of high performance
processors while maintaining current operating
efficiency.
A second contribution of this work is the full
data-path simulation with register feedback
for a realistic interconnection network. High
performance machines, such as the Cray Y-
MP, have a separate interconnection network
for read return values. When simple memory
banks are replaced by a memory hierar-
chy, references arrive at the return network at
unpredictable times. Under moderate loading,
the resulting contention does not appear to be
a problem. Several approaches for equalizing
performance of reads and writes under heavy
loading are examined.
Buffering has been proposed by a number
of authors as a possible solution to the problem
of memory conflicts. A simulation study
by Briggs [2] showed that buffering at the processor
level in pipelined multiprocessors can
improve memory bandwidth provided that the
average request rate does not exceed the memory
service time. Smith and Taylor [20] explored
the effects of interconnection network
buffering in a realistic simulation model. The
simulations in this paper were based on a simi-
lar, but simpler interconnection network. The
buffering in this paper is at the memory modules
rather than within the interconnection
network, and the emphasis of the simulations
is on the verification of the scaling relationships.
Other proposals to reduce memory conflicts
have been made. Skewing and related techniques
[7,8,12,13] have been shown to be effective
in reducing intraprocessor conflicts. A
recent simulation study by Sohi [21] explores
skewing and input and output buffering for
single reference streams consisting of vectors of
length 1024 with fixed strides. Skewing techniques
are not as effective for the situation considered
in this paper where conflicts between
processors are the main cause of performance
degradation. Skewing can be used in conjunction
with logical banks to reduce intraprocessor
contention.
This study uses memory efficiency and
throughput as its primary measures of memory
performance. The efficiency of a memory
system is defined as the ratio:
Efficiency = (Accepted memory requests) / (Total memory requests)
The total number of memory requests includes
those requests which are denied because of a
conflict such as a bank conflict. It is assumed
that when such a conflict occurs, the processor
attempts the reference on the next cycle. This
memory efficiency is measured from the view-point
of the processor. It indicates the degree
to which processors will be able to successfully
issue memory references. The efficiency is essentially
PA , the probability of acceptance, as
calculated by Briggs and Davidson [3] in their
models of L-M memories without buffering.
Following Sohi [21], the throughput is defined
as the ratio:
Throughput = (Time for the vector references in a conflict-free system) / (Actual time for the vector references)
Sohi argues that this ratio is the appropriate
throughput measure when comparing memory
designs in a vector processing environment.
The throughput is the fraction of the optimal
rate at which entire vectors are delivered
through the system. The vector element read
latency is defined as the time between the first
attempt to access a vector element and the
availability of that element at the vector register.
The efficiency depends on the number of
processors, the number of banks, the bank cycle
time, and the load. Unbuffered designs for
multiprocessor machines give a quadratic relationship
between memory speed and number
of banks for fixed performance [1]. If the memory
cycle time is doubled relative to the processor
speed, the interconnection costs must
be quadrupled to maintain the same memory
performance. The proposed design is a two-tier
system. The results show that if the inter-connection
network bandwidth is sufficient to
support the processors using high-speed mem-
ory, then lower-speed memory with buffers can
be substituted for little additional interconnection
cost or performance degradation.
Section II describes the logical bank design.
An analytical model for writes is developed in
Section III and is shown to be in reasonable
agreement with random reference simulations
in Section IV. The relationship between the
efficiency and system parameters such as the
bank cycle time, number of banks and number
of processors is then analyzed. Several design
criteria are developed which are applied
in later sections to vector systems.
Section V introduces a simulation model
which uses synthetically generated references
for a vector multiprocessor system similar to
the Cray Y-MP. The model incorporates a
full data-path simulation including return conflicts
and register feedback as recommended
in the simulation study by Smith and Taylor
[20]. The vector results are compared with
model predictions under moderate processor
loading in Section VI, and it is shown that
even a small number of buffer slots can result
in significant gains in performance. In Section
VII the design criteria are applied to a 64-
processor system (192 independent reference
streams) under heavy processor loading for a
range of stride distributions. Performance for
writes is excellent, but there is degradation for
reads. Several approaches for reducing this
degradation are examined including subbank
output buffering, additional lines, optimal ar-
bitration, port handshaking, port-line buffers,
and increased return bandwidth. It is found
that only the last alternative completely eliminates
the degradation. A final discussion and
conclusions are presented in Section VIII.
II. Logical Banks
A logical bank [16] consists of a fast register,
the logical bank register (LBR), and a number
of subbanks each with a queue of pending
requests as shown in Figure 1. The memory
within the logical bank is divided into equal
subbanks and addressed using the standard
interleaving techniques so that consecutive addresses
go to consecutive subbanks. A reference
to the logical bank can be gated into the
LBR in T la cycles if the register is free. T la is
the logical bank access time. If there is room
in the queue for the specified subbank, the reference
is then routed to the queue. Otherwise,
the LBR remains busy until a slot is available.
Only one reference to a logical bank can occur
during an interval T l . T l is called the logical
bank cycle time. It is the minimum time interval
between successful references to a logical
bank. The interval may be longer if the LBR
is waiting for a queue slot. If reference streams
attempt to access the same logical bank while
the LBR is busy, a logical bank conflict occurs
and all but one reference is delayed. In
the model and all of the simulations discussed
later, it is assumed that T_la = 1.
The parameters which define a multiprocessor
system with shared memory organized
into logical banks are shown in Table 1.
The default values are used in later simulations
except where otherwise indicated. For
single-port processors the number of reference
streams, n, is also equal to the number of pro-
cessors. In the vector simulations, the processors
are allowed to have multiple ports so
there can be more reference streams than pro-
cessors. Reference streams are assumed to be
either read or write. Read streams are more
difficult to handle because values must be returned
to the processor. The return fan-in net-work
for reads requires additional hardware for
arbitration because read values do not arrive
at the fan-in network at a predictable time.
The return values must include tag bits indicating
the destination. This hardware is also
required if cached DRAMs are used since the
effective access time is no longer constant in
that case either. In fact, cached DRAMs can
be used in conjunction with logical banks to reduce
the effective physical subbank cycle time with
little additional hardware.
Logical banks were introduced by Seznec
and Jegou to support the Data Synchronized
Pipeline Architecture (DSPA) [19]. Their design
includes a reordering unit so that data
flows out of the logical bank in chronological
order. In the scheme proposed in this paper,
a reordering unit is not required at the logical
bank, because reordering occurs at the processor.
Buffering is distinct from caching in that
there is no miss penalty and no overhead for
cache management. The proposed circuitry
would take up a very small chip area if it
were incorporated on a chip. Alternatively, it
could be built as an interface between off-the-
shelf memory chips and the system interconnection
network. It is particularly appropriate
in situations in which average utilization
is below maximum capacity, but where there
are periods of maximal loading. In addition
to reducing average access time, logical banks
can smooth the type of bursty memory traffic
which is typical of highly vectorized programs
[17].
III. A Model for Random Writes
A model for the efficiency of logical bank
memories is now derived. In later sections the
throughput and latency are related to the ef-
ficiency. It is assumed that a reference stream
can initiate at most one reference per clock cycle
and that when a reference attempt fails, it
is retried by that reference stream on the following
cycle. As long as the LBR is available,
the processor sees a memory consisting of logical
banks with a cycle time of T l , the logical
bank cycle time. The efficiency in this case is
given by E l . This efficiency is determined by
the interreference stream conflicts at the logical
bank level. When the queues are full, the
memory behaves almost as though there were
no logical banks. The effective memory cycle
time in this case is T_p = T_d + T_c , where
T_d is the minimum delay incurred in transferring
a reference from the queue to the subbank
and T c is the physical memory cycle time. The
efficiency in this case is denoted by E p .
A simple probabilistic argument shows that
if the probability of a successful reference is E,
the expected number of attempts per successful
reference is
sum_{i>=1} i E (1 - E)^(i-1) = 1/E .
Thus 1/E is the average number of cycles that it takes
a reference stream to initiate a reference from
the viewpoint of the processor. In contrast,
the average reference time from the viewpoint
of the physical memory is directly related to
the bank cycle time and other delays.
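The 1/E figure follows from standard geometric-series algebra; a brief derivation (our own, assuming independent attempts that each succeed with probability E):

```latex
% Expected number of attempts per successful reference,
% for independent attempts with success probability E.
\[
  \sum_{i=1}^{\infty} i \, E (1-E)^{i-1}
  \;=\; E \cdot \frac{1}{\bigl(1-(1-E)\bigr)^{2}}
  \;=\; \frac{1}{E}.
\]
```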
Let P be the probability that the logical
bank register (LBR) is available when a reference
is first initiated. A successful reference
will take 1/E_l attempts with conditional
probability P and 1/E_p attempts with probability
1 - P . The effective efficiency is then a
weighted average of the two cases depending
on the probability that there are slots available
in the appropriate subbank queue. The
average number of cycles for a successful reference
can then be estimated by
1/E = P (1/E_l) + (1 - P)(1/E_p),
where E is the combined or effective efficiency.
This relationship can be written as
E = E_l E_p / (P E_p + (1 - P) E_l).
This expression for the effective efficiency will
be called the logical bank model in the remainder
of the paper.
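A minimal numeric sketch of this combination rule, with illustrative values only (the function name and inputs are ours):

```c
/* Sketch of the logical bank model combination: given the LBR-free
   probability P and the two component efficiencies, return the
   effective efficiency E. Example values are illustrative. */
#include <stdio.h>

static double effective_efficiency(double P, double El, double Ep) {
    /* 1/E = P*(1/El) + (1-P)*(1/Ep) */
    return 1.0 / (P / El + (1.0 - P) / Ep);
}

int main(void) {
    /* a near-free LBR, fast logical banks, a slow unbuffered fallback */
    printf("E = %.3f\n", effective_efficiency(0.95, 0.95, 0.30));
    return 0;
}
```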
The probability, P , that the LBR is not full
can be estimated by considering each logical
bank as a system of k independent queues under
the M/D/1/B queuing discipline. This
queuing model has an exponential arrival rate,
deterministic service time, one server, and a finite
queue. For fixed queue size, the distribution
of the number of references in the queue
depends on the parameter ρ = λT_p , where λ
is the average arrival rate and T_p is the effective
queue service time. λ can be estimated as
λ = qn/b, where q is the probability that a free
stream initiates a reference, n is the number
of independent reference streams, and b is the
number of physical subbanks (kl). A simple
simulation is used to compute a table of probabilities
for a given value of ρ and queue size
m.
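As a concrete illustration of such a table computation, the sketch below estimates by simulation the probability that an M/D/1/B queue of a given capacity is not full when a reference arrives. The function names and the example capacity of three (queue size m = 2 plus the LBR slot) are ours; the paper's own table-generation code is not shown.

```c
/* Sketch: estimate, by simulation, the probability that an M/D/1/B
   queue holds fewer than `cap` items when an arrival occurs,
   for a given rho = lambda * Tp. Blocked arrivals are dropped,
   matching the M/D/1/B assumption discussed later. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static double exp_rand(double lambda) {          /* exponential interarrival time */
    double u = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    return -log(u) / lambda;
}

static double estimate_not_full(double rho, double Tp, int cap, long arrivals) {
    double lambda = rho / Tp;
    double t = 0.0, depart = 1e300;   /* next departure time; "infinity" if idle */
    int in_system = 0;
    long not_full = 0;
    for (long i = 0; i < arrivals; i++) {
        t += exp_rand(lambda);                    /* next arrival time            */
        while (in_system > 0 && depart <= t) {    /* drain departures before it   */
            in_system--;
            depart = (in_system > 0) ? depart + Tp : 1e300;
        }
        if (in_system < cap) {
            not_full++;
            if (in_system == 0) depart = t + Tp;  /* server starts immediately    */
            in_system++;
        }                                          /* else: arrival is dropped    */
    }
    return (double)not_full / (double)arrivals;
}

int main(void) {
    /* queue size m = 2 plus the LBR slot gives capacity 3 */
    printf("P(not full) at rho=0.5: %.4f\n", estimate_not_full(0.5, 1.0, 3, 1000000));
    return 0;
}
```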
Once the probabilities that the individual
queues are free have been determined, the
value of P for the entire logical bank can be
estimated as follows. If there are k subbanks
per logical bank, the LBR will be busy if any
of the k queues has m+1 slots filled, the extra
slot being from the LBR itself. Thus, if f is
the probability that a queue of size m + 1 is
not full, then the probability that the LBR is
free is P = f^k . This method is used to calculate P
for the graphs given later.
Estimates for E p and E l will now be de-
rived. The situation in which there are no logical
banks has been analyzed for random accesses
by Bailey [1] for systems with n single-port
processors and b memory banks. Let T
be the memory bank cycle time. Each processor
can be modeled as a Markov chain in which s_i is
the state in which the processor is waiting for
a bank which will be busy for i more cycles
and s_0 denotes the state in which a processor
is free. Bailey derived a steady
state expression for the efficiency of such a
system.
In this expression, q represents the probability that a free
processor will attempt a reference on the current
clock cycle. For relevant values of T , n,
and b, the efficiency is dominated by the
square-root term in the expression.
The efficiency is inversely proportional to T
and drops off fairly rapidly. If the bank cycle
time is doubled, the number of banks must be
increased by a factor of four to maintain the
same efficiency. The Bailey model can be used
to estimate E_p by using T_p = T_d + T_c as
the bank cycle time.
The logical bank efficiency, E_l , can also be
estimated using the Bailey model, with T_l as the
cycle time and the number of logical
banks, l, as the number of banks. T_l , the logical bank cycle
time, is assumed to be one for much of the discussion
in this paper. Due to the assumptions
made in the derivation of the Bailey model, it
performs poorly when there are a small number
of processors or when the bank cycle time
is near one. Unfortunately the effective efficiency
is very sensitive to the value of E l when
P is close to one, so another model will be
developed for estimating E_l . This model will
be called the direct model and is derived below
using Markov chains in a manner similar
to that used by Bailey.
Assume that each reference stream is in one
of three states: the free state (1), a state in
which it is making a successful reference (2), or
the state in which it is attempting a reference
which is unsuccessful (3). Let:
q = probability that a given free stream
will attempt a reference
π_1 = probability that the stream is in the
free state
π_2 = probability that the stream is making
a successful reference
π_3 = probability that the stream is making
an unsuccessful reference
δ = probability that a reference attempt
will be successful
The following probability conservation equation
holds: π_1 + π_2 + π_3 = 1.
The matrix, Γ, of state transition probabilities
is given in Table 2. Γ_{i,j} , the entry in the i-th
row and the j-th column, represents the
conditional probability that the next state is i
given that the current state is j.
A reference attempt will be successful if no
higher priority stream is making a reference
to the same bank. On the average half of the
remaining n - 1 streams will have a higher priority
than a given stream. Since only one higher
priority stream can make a successful reference
to a bank, the probability that one of the
(n - 1)/2 higher priority streams is making a
successful reference to the same one of the l banks may be
estimated as
(n - 1) π_2 / (2l),
and so δ is given by
δ = 1 - (n - 1) π_2 / (2l).
Let π = (π_1 , π_2 , π_3 ) be the vector of a priori
probabilities of the three states. The steady
state probabilities can be obtained from
the relationship π = Γπ.
These equations plus the conservation equation
can be used to obtain an expression for E_l .
The direct model is in better agreement with
the simulations when T_l = 1, and it will be
used to estimate E_l for the remainder of the
paper.
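The following sketch shows one way such a self-consistent solution could be iterated numerically. The transition structure is our own plausible instantiation (a stream that completes a reference is free on the next cycle; a blocked stream retries every cycle) and is not claimed to reproduce the paper's Table 2:

```c
/* Sketch of solving the direct model by fixed-point iteration.
   delta is coupled to pi2 through delta = 1 - (n-1)*pi2/(2l). */
#include <stdio.h>

int main(void) {
    double q = 0.4;          /* load: probability a free stream issues */
    int    n = 24, l = 256;  /* reference streams and logical banks    */
    double pi1 = 1.0, pi2 = 0.0, pi3 = 0.0, delta = 1.0;

    for (int it = 0; it < 1000; it++) {
        delta = 1.0 - (n - 1) * pi2 / (2.0 * l);
        if (delta < 0.0) delta = 0.0;
        double attempts = q * pi1 + pi3;         /* attempts this cycle  */
        double f = (1.0 - q) * pi1 + pi2;        /* free next cycle      */
        pi2 = delta * attempts;
        pi3 = (1.0 - delta) * attempts;
        pi1 = f;
    }
    /* at the fixed point, delta is the per-attempt success probability,
       which is what El measures */
    printf("estimated El = %.4f\n", delta);
    return 0;
}
```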
IV. Predictions of the Model
Buffering can produce fairly dramatic improvements
in efficiency provided that the
memory system is not close to saturation.
In this section, the logical bank model is
compared with model simulations for random
references. The excellent agreement of this
comparison validates the model and suggests
relationships between the design parameters
which are necessary for achieving a particular
level of efficiency. In the following sections a
vector simulation model is compared with the
random-reference model, and the relationships
suggested by the analytical model are tested.
Consider a multiprocessor system which has
independent reference streams and a shared
memory consisting of 256 banks. (These parameters
represent an eight-processor Cray Y-MP
with three ports per processor and a maximal
memory configuration.) The performance
of this system is now compared with that of an
augmented system in which each physical bank
is replaced by a logical bank consisting of a single
subbank with a queue size of two. This case
corresponds to adding buffering at the physical
bank level without adding any additional logical
bank structure other than the buffers and
the LBR. The reference streams are assumed
to generate writes only. In
Figure 2 the efficiency for random reference
streams is plotted versus subbank cycle
time. Following Bailey [1] a reference rate
is selected to give a base operating
efficiency in the unbuffered case of .67. The
logical bank model agrees well with results
from scalar simulations for operating efficiencies
above .6. A significant improvement in
performance is observed with buffering. When
the bank cycle time is 18, the simulation shows
an efficiency of .22 without buffering and an
efficiency of .66 with buffering. (The memory
efficiency of a real Cray Y-MP is higher than
the predicted .67, because selected buffering
mechanisms are incorporated at various stages
in the Cray Y-MP interconnection network as
described by Smith and Taylor [20].)
In Figure 3 the efficiency is plotted versus
the number of reference streams. Again there
is excellent agreement between the model predictions
and those of random reference simula-
tions. The base efficiency of .67 can be maintained
with as many as 96 reference streams
when buffering is introduced at the bank level.
The logical bank model overestimates the efficiency
near saturation because the M/D/1/B
queuing model assumes that references are
thrown away when the queues are full. The
processor attempts to initiate a reference on
the next cycle with a certain probability q < 1.
In the real system and in the simulation the
reference is retained and tried again on the
next cycle.
A simple analysis is now presented which
shows that queue sizes which are quite small
can give substantial improvements in performance.
Table 3 shows the probability that the
number of items in the queue is less than the
queue size, m, for different values of m and
different system loadings. If ρ = .5, the probability
that a queue will have fewer than three
slots (two queue slots plus the LBR) filled
is .9731. This value is indicative that small
queues will suffice. The estimate may not be
completely accurate near saturation, because
references which are not fulfilled are thrown
away in the model. Hence, the infinite queue
case is now considered.
In the infinite queue model, references are
never blocked, but are always queued. The
expected queue size for each subbank in the
infinite queue case is [10]:
Ex(Queue Size) = ρ² / (2(1 - ρ)).
For ρ < .5, the expected queue size is less
than .25 for each subbank. Table 4 shows
the probability that a queue contains no more
than x items in the queue for the infinite queue
case. The probability that two or fewer slots
are filled is .947 for ρ = .5. This probability is
on the borderline of reasonable performance.
The probability of having four or fewer elements
in the queue is .9957. A subbank input
queue size of four should be adequate to handle
most references (P ≈ 1), and the effective
efficiency will be E_l .
For fixed load q, E_l is constant for constant
ε = n/(2l).
Furthermore, a lower bound on E_l
is obtained by noting that E_l is a decreasing
function of both q and ε. The condition ε <
.1 means there should be at least five times
as many logical banks as there are reference
streams for efficient performance.
The buffered and unbuffered cases can be
compared in the case where there is one sub-bank
per logical bank so that l = b. Consider
the effect of buffering on efficiency when ρ is
held constant at .5 as the sub-bank
cycle time and the number of banks are
both increased. With the number of reference
streams assumed fixed, ε will be decreasing,
since ε only depends on the number
of processors and the number of banks. When
the queue size is four, the probability of a logical
bank hit is .9957 so the efficiency is approximately
E_l . Since E_l is independent of T_c
and is an increasing function of l = b, the logical
bank model predicts that the efficiency will
actually increase slowly as the bank cycle time
and number of banks are increased with ρ held
fixed at .5. Thus, for a fixed reference rate, q,
a doubling of T_c can be compensated for by
doubling the number of banks or by halving
the number of processors (reference streams).
This is in contrast to a system without logical
banks where T_c² /b must remain fixed to
maintain the same efficiency. In systems without
logical banks one would have to quadruple
the number of banks in order to compensate
for doubling the bank cycle time. When the
above argument is applied to the same system
with a queue size of two, the probability of a
logical bank hit is at least .947. The efficiency
is now a weighted average of the relatively constant
logical bank efficiency, E l , and the unbuffered
efficiency, E p . (The latter efficiency
drops off rapidly with bank cycle time.)
To confirm these relationships in the models
with and without logical banks, the load
q is fixed at .4, and the number of reference
streams is fixed at 24. In Figure 4 the efficiency
is plotted versus the subbank cycle time
when the number of banks is varied so that ρ
is held constant at .5. The logical bank model
maintains an almost constant efficiency as the
bank cycle time is increased as predicted for
queue size of four. The system with queue
size two shows a slight fall-off. The efficiency
is initially lower than the asymptotic value because
when the bank cycle time is small and ρ
is held at .5, there are so few banks that logical
bank conflicts become significant. When
the Bailey model is run for the same parameter
values, the efficiency drops dramatically as
predicted by the model. Similar scaling relationships
can be derived when the bank cycle
time is fixed and the number of banks and the
number of reference streams are varied.
One can use the relationship between ρ and
E to determine design parameters required to
achieve a specified level of performance. The
Cray Y-MP has eight processors and three
ports per processor (24 independent reference
streams), a bank cycle time of five, and 256
physical banks. If a maximum reference rate
is assumed, the effective efficiency is approximately
E_l . With a queue size of four, the probability
of a logical bank hit is nearly one. The
efficiency can be simply estimated from the
previous expression for ε. When there are 256
logical banks (one subbank per logical bank),
the estimated efficiency is .92. The logical bank model
predicts that buffering with four queue slots
will result in a high efficiency. The unbuffered
efficiency is predicted by the Bailey model for
these parameters to be .56. These model predictions
are tested in Section VII for a vector
simulation.
The results of the logical bank model can
be summarized as follows. For a fully loaded
system (q = 1) consisting of n reference
streams, l logical banks, a logical bank cycle
time of one, and subbank input queue size of
four, the efficiency is greater than .90 provided
ρ ≤ .5
and ε < .1.
For a queue size of two, ρ should be chosen to
be less than .2.
Notice that the first relationship depends
on the total number of subbanks, b, while the
second relationship depends on the number of
logical banks, l. The per processor interconnection
costs depend on l. As long as there
are enough logical banks to adequately field
requests from the processors, an increase in
bank cycle time can be compensated for by
an increase in the number of subbanks without
a significant increase in the interconnection
costs. There is a point, however, at which
the data bus arbitration scheme will not be
able to handle read return traffic. This point
is discussed more fully in Section VII.
An increase of T l above one has the effect of
lowering the overall efficiency, but the curves
have the same shape. Design parameters can
be determined in this case by using the Bailey
model to estimate E_l when T_l > 1.
The efficiency now depends on a parameter analogous to ε.
An efficiency of .90 can be
obtained provided that this parameter is kept sufficiently small.
V. Vector Simulation Model
In order to test the performance of the logical
bank organization and the predictions of
the logical bank model, a simulation study
based on the Cray Y-MP architecture inter-connection
network was developed. The Cray
Y-MP architecture was selected because its
highly pipelined interconnection network can
provide an effective T l = 1. This is accomplished
by having references issue immediately
to the interconnection network and block later
if conflicts should arise. A complete data path
simulation of this system with processor register
feedback was performed with reference
streams which were generated randomly under
realistic assumptions. The simulation includes
ports, lines, sections, and subsections
as described below.
The vector simulation model assumes there
are n p processors each with p ports. Each processor
can initiate up to p memory operations
on a cycle. These ports are assumed to generate
independent reference streams (n = n_p p).
Each port is designated either as a read stream
or a write stream.
The interconnection model is a simplified
version of the network described by Smith
and Taylor [20]. Each processor has four
lines which are direct connections to particular
sections of memory. The ports from a
particular processor access memory through
a crossbar connection to the processor's four
lines. The section number is determined by
the lowest two bits of the address, so consecutive
references are directed to different
sections of memory. Each section is divided
into eight subsections and the individual sub-sections
are further subdivided in banks. In
the case of the Cray Y-MP which has 256
banks, each subsection contains eight banks.
Following the notation of Smith and Taylor,
this interconnection network is denoted by:
8 processors → 4 × 4 → 8 × 8 → 1 × 8 → 256 memory banks.
In simulations of systems with n p processors
and l logical banks, the number of sub-sections
is fixed at eight and the number
of banks per subsection is increased. The
interconnection can then be described by:
n_p processors → 4 × 4 → n_p × 8 → 1 × l/8 → l memory banks.
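A sketch of how a bank address might be decomposed under this scheme; the two low-order section bits are as described above, while the remaining bit layout is our assumption:

```c
/* Sketch of an address-to-bank decomposition for the simulated network:
   low two bits select the section, the next bits the subsection, and the
   remaining bits the bank within the subsection. */
#include <stdio.h>

typedef struct { unsigned section, subsection, bank; } BankAddr;

static BankAddr decompose(unsigned addr, unsigned banks_per_subsection) {
    BankAddr a;
    a.section    = addr & 0x3;          /* 4 sections: lowest two bits       */
    a.subsection = (addr >> 2) & 0x7;   /* 8 subsections per section         */
    a.bank       = (addr >> 5) % banks_per_subsection; /* bank in subsection */
    return a;
}

int main(void) {
    /* consecutive addresses rotate through the four sections */
    for (unsigned addr = 0; addr < 8; addr++) {
        BankAddr a = decompose(addr, 8);   /* 256 banks: 8 per subsection */
        printf("addr %u -> section %u, subsection %u, bank %u\n",
               addr, a.section, a.subsection, a.bank);
    }
    return 0;
}
```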
In the Cray Y-MP, a processor can access
a particular subsection once every T c cycles
where T c is the physical bank cycle time. This
means that when a processor accesses a memory
bank, the processor is blocked from issuing
additional references to the entire subsection
containing this bank for the full bank cycle
time. Such a conflict is called a subsection conflict
and, like the section conflict, is strictly an
intraprocessor conflict. References from different
processors to the same subsection can proceed
without conflict provided that they are
addressed to banks which are not already in
use.
The simulation for the logical banks is based
on the conflict scheme described above. When
a particular memory location is referenced, the
line, subsection, logical bank, and subbank
numbers are calculated. If the line is free, it
is reserved for T r cycles and the subsection is
checked. If the subsection is free, it is reserved
for T s cycles, and the logical bank is checked.
If the logical bank register (LBR) is free, it is
reserved for T l cycles and the reference is initi-
ated. The reference generates a hold and fails
to issue if a conflict occurs at any level.
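A compact sketch of this issue-side conflict check; whether a reservation made at one level is held when a later level conflicts is not stated above, so the sketch simply assumes it is:

```c
/* Sketch of the issue-side conflict check. Resources are modeled as
   "busy until cycle X"; all structure and function names are ours. */
#include <stdbool.h>
#include <stdio.h>

typedef struct { long line_free, subsection_free, lbr_free; } Resources;

/* Returns true if the reference issues on cycle `now`; on each successful
   check the line, subsection and LBR are reserved for Tr, Ts, Tl cycles. */
static bool try_issue(Resources *r, long now, int Tr, int Ts, int Tl) {
    if (r->line_free > now)       return false;   /* line conflict         */
    r->line_free = now + Tr;
    if (r->subsection_free > now) return false;   /* subsection conflict   */
    r->subsection_free = now + Ts;
    if (r->lbr_free > now)        return false;   /* logical bank conflict */
    r->lbr_free = now + Tl;
    return true;                                  /* reference issues      */
}

int main(void) {
    Resources r = {0, 0, 0};
    printf("cycle 0: %s\n", try_issue(&r, 0, 1, 1, 1) ? "issued" : "held");
    printf("cycle 0: %s\n", try_issue(&r, 0, 1, 1, 1) ? "issued" : "held");
    printf("cycle 1: %s\n", try_issue(&r, 1, 1, 1, 1) ? "issued" : "held");
    return 0;
}
```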
Once a reference has occupied the LBR for
T_l cycles, it can be moved to the appropriate
subbank queue if that queue is not full.
The reference must spend T d cycles in the
queue before it can be processed by the physical
memory. It is assumed that the reference
must occupy the subbank for at least T c cycles
before the subbank can accept another refer-
ence. If the operation is a write, the subbank
is free to accept another reference after T c cy-
cles. Reads are complicated by the return trip
to the processor as now described.
A vector read reference is not considered to
be completed until all of the element values
have arrived at the processor. Read data values
must be routed from the physical memory
bank to the appropriate processor vector reg-
ister. Additional conflicts may occur because
more than one value may become available on
a particular cycle. Each logical bank has a
single output latch. If the latch is free, the
value is moved from the subbank to the latch
and the subbank is freed. If the latch is busy,
the subbank must wait until the latch is free
before accepting another value. If an output
queue is included for each subbank as shown
in Figure 1, the value is moved from the sub-bank
to the output queue and blocking of the
subbank due to return conflicts is less likely to
occur.
All of the data values latched for a particular
processor line compete for processing on
the return interconnection network. The real
system has separate forward and return inter-connection
networks. To simplify the simu-
lation, the return interconnection network is
modeled as a pipeline which can accept one
value per section per cycle. The pipeline
length is assumed to be ten which accounts
for the length of both the forward and return
pipelines. When the last value for a vector
read has emerged from the pipeline, the read
is considered to be complete.
The simulation also incorporates the feed-back
loop between the vector registers and
memory. Each processor has a certain number
of vector registers (eight was assumed for
the runs in this paper). When a vector operation
is initiated in the simulation, a free
processor vector register is randomly selected
and reserved for the duration of the operation.
If no register is available, the operation holds
until a register becomes available. The register
reserved for the operation is not freed until
all of the elements of the vector have arrived
at the processor. In contrast a vector write is
considered to be completed when the last element
operation has been issued. The vector
register is freed at that time although the actual
memory value may not be inserted until
sometime later because of buffering.
Priority in the simulation is rotated among
the processors in a circular fashion so that no
processor is favored. This scheme is similar
to the priority scheme used on the Cray X-
MP. The Cray Y-MP uses a fixed subsection
priority scheme which does not lend itself to
modification when the number of processors is
varied. The priority scheme should have little
effect on the results of the simulation.
The simulation generates a representative
reference stream for vectorized code as described
below. All memory operations are assumed
to be vector operations with an associated
stride and length. Gather/scatter operations
are not considered in this simulation.
The stride is a fixed interval between successive
references within a single vector opera-
tion. A stride of one is assumed to be the most
probable with other strides up to a maximum
stride being equally probable. A default probability
of stride one vectors of .75 is used unless
otherwise indicated. The effect of type of load
on performance is examined in Section VII.
The maximum length of the vector operations
is determined by the length of the vector
registers in the processor. When an operation
on a very long vector is required, the compiler
splits it into several vector operations,
all but one of which uses the maximum vector
register length. All possible vector lengths
are assumed to be equally probable, except the
maximum length is assumed to occur more frequently.
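A sketch of such a synthetic generator; VL, MAX_STRIDE and the extra weight pl given to full-length vectors are illustrative values, not taken from the paper:

```c
/* Sketch of the synthetic vector-operation generator described above:
   stride 1 with probability ps, otherwise a stride drawn uniformly from
   2..MAX_STRIDE; length drawn uniformly from 1..VL except that the
   maximum length VL is given extra weight pl. */
#include <stdio.h>
#include <stdlib.h>

#define VL          64     /* vector register length   */
#define MAX_STRIDE  32     /* largest stride generated */

static double urand(void) { return rand() / ((double)RAND_MAX + 1.0); }

static int gen_stride(double ps) {
    if (urand() < ps) return 1;
    return 2 + (int)(urand() * (MAX_STRIDE - 1));   /* uniform over 2..MAX_STRIDE */
}

static int gen_length(double pl) {
    if (urand() < pl) return VL;                    /* extra weight on full length */
    return 1 + (int)(urand() * VL);                 /* otherwise uniform over 1..VL */
}

int main(void) {
    double ps = 0.75, pl = 0.25;   /* pl is an assumed value */
    for (int i = 0; i < 5; i++)
        printf("vector op: stride %d, length %d\n", gen_stride(ps), gen_length(pl));
    return 0;
}
```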
The system load is determined by the operation
initiation rate. When a port is free
there is a certain probability, p f , that on the
current cycle a memory operation will be initi-
ated. The value of p f may be different for read
and write ports and is a measure of the system
load. A relationship between p f and the
scalar reference rate q is now derived in order
to compare the vector case to the scalar case
already discussed. Let VL be the maximum allowed
vector length and p l be the probability
of a maximum length reference. The average
length of a vector reference is then:
and .0118. The value used by
Bailey [1] in his calculations of efficiency for
the unbuffered case. A study of interreference
times for vector references for the Perfect Club
Benchmarks run on a Cray Y-MP [18] shows
typical interreference times on the order of 10
to 200 cycles, so a value of p_f = .0118 appears
to be reasonable.
The parameters used in the simulation are
chosen to be close to the current Cray Y-MP
values and are summarized in Table 5.
The scalar simulations in Section
IV were performed by setting the vector length to one.
In the unbuffered case, T_l is the physical bank
cycle time as seen by the interconnection network.
The vector data path simulator was written
in C and run on a network of SUN worksta-
tions. Most of the vector simulations done for
this paper were run for one hundred thousand
cycles, although some runs were as long as ten
million cycles. Each run was divided in blocks
of cycles and the statistics were computed over
each block in addition to over the entire run.
The statistics for the longer runs did not vary
significantly from those of the shorter runs.
This lack of variation is not unexpected since
each port on each processor can initiate a reference
on each cycle so there are a large number
of independent reference streams over which
the statistics are averaged.
VI. Results for Moderate Loading
Figure 5 shows a comparison of the efficiencies
as a function of bank cycle time for
the vector simulation and the logical bank
model when the system has eight processors
and three ports per processor. The number
of logical banks is fixed at 256 with one sub-bank
per logical bank and the queue size is four
slots per subbank. Buffering delays the drop in
performance with increasing bank cycle time.
For example, when all vectors have stride one
and the bank cycle time is 20, the efficiency
is .92 with logical bank buffering and
.22 without buffering. The agreement between
the simulation and the logical bank model is
good for subbank cycle times less than 10.
The agreement between model and simulation
is better for loads which have a random
component. The case where three fourths of
the strides are one and the remainder are randomly
distributed (p_s = .75) is shown in
Figure 5. In this case, the efficiency is .68
for a subbank cycle time of 20. The over-all
efficiency is slightly below the model, but
the fall-off occurs in roughly the same place
as predicted by the model. The logical bank
model, which corresponds to random reference
streams, falls in between the simulations
for the two different stride distributions. The
agreement between the logical bank model and
the vector simulation is quite good considering
that the vector simulation includes lines, sub-
sections, and register feedback. The dip in efficiency
at bank cycle times which are integral
multiples of four is a real phenomenon which
is preserved over very long simulation runs.
In
Figure
6 the efficiency is plotted versus
the number of reference streams when the sub-bank
cycle time is five. The remaining parameters
are the same as in Figure 5. The
number of reference streams could be quadrupled
from 24 (eight processors) to 96 (32 pro-
cessors) while still maintaining an efficiency of
.67.
The analysis of Section IV for random references
predicts that the efficiency will be high
and will increase slightly as the number of
banks and the bank cycle time are increased
keeping ρ constant at a value of about .5.
This scaling relationship holds in the case of
vector references as well. In Figure 7 the number
of reference streams is fixed at 24 (eight
processors) and the number of subbanks per
logical bank is one. The number of banks is
varied linearly with the bank cycle time in order
to keep ρ constant at .5. The condition ε < .1 is
satisfied when the number of logical
banks is greater than 120, which is the case
provided that T_c > 3 for ρ held constant at .5.
For comparison, a simulation with the same
set of parameters was run without buffering
and the efficiency dropped dramatically when
the subbank cycle time was increased. The
runs are shown for two stride distributions (p_s = 1.0 and p_s = .75).
The previous vector runs were performed
with no read ports. In an unbuffered memory
there is no difference in memory efficiency between
reads and writes. Because buffering introduces
unpredictable delays, more than one
result can become available on a particular cycle
for the interconnection network for a particular
section. When conflicts of this type
arise, some of the references are delayed which
in turn causes subbanks to block. Return
conflicts can thus cause an effective increase
in bank cycle time and a corresponding drop
in efficiency. The addition of a single output
queue slot at each subbank eliminates the
difference in performance between reads and
writes for moderate loading. However, as the
load is increased (either through increasing the
initiation rate or the percentage of stride one
vectors), the port return bandwidth may be
insufficient to handle the load. This problem
is addressed in the next section.
The analytical model derived in Section
III predicts the efficiency of memory writes.
Other possible indicators of performance include
throughput and latency. The through-put
defined in Section I is the fraction of the
optimal rate at which entire vectors are delivered
through the system [21]. The vector
element read latency is defined as the time between
the first attempt to access a vector element
and the availability of that element at
the vector register.
Figure
8 compares the efficiency, through-
put, and read latency when the load is varied.
A bank cycle time of 20 was chosen because
it is near the knee of the curves in Figure 5.
The parameters are the same as in that figure
except that the bank cycle time is fixed
and the initiation rate is varied. Probability
of OP (p f ) refers to the probability that
a vector operation will be initiated by a free
port. The value corresponds to the
value in Figure 5. The throughput is slightly
higher than the efficiency in the unbuffered
case and slightly lower than the efficiency in
the buffered case. The buffered throughput is
still considerably better than the unbuffered
throughput.
In the unbuffered case, the read latency is
just a constant plus the number of attempts
it takes to issue the request. As mentioned
in Section III, the number of attempts is just
the reciprocal of the efficiency. In fact, in the
unbuffered case the read latency curve in Figure
8 can be predicted to better than .3 percent
from the efficiency curve. For the default initiation rate,
the unbuffered read element latency is 36 and
the buffered read element latency is 51. If the
efficiency is at least moderately good, this indicates
that the last element of a vector read
is delayed about 15 cycles over what it would
be without buffering. With buffering, the read
latency is affected by return conflicts, and so
is dependent on the return scheme used. Different
return schemes are discussed in the next
section.
Since writes do not require a return path,
the write element latency is directly related to
the number of attempts to issue the element
operation. The write latency curves (not
shown) can be predicted from the efficiency
curve to within a few percent for both the
buffered and the unbuffered case.
VII. Results for Maximal Loading
The extreme case where each port attempts
a memory reference on each cycle is now con-
sidered. First an analysis is done for writes.
The logical bank model is used to pick design
parameters for a 64-processor system. The
performance for reads is then analyzed and improvements
in the return interconnection net-work
are considered to equalize the performance
of reads and writes.
The logical bank model can be used as a
guide in picking design parameters for a 64-
processor Cray Y-MP which minimizes the per
processor interconnection cost, while achieving
an efficiency of at least .90. Assuming a load
of 1.0, the condition ε < .1 gives
l > 960. Assuming a bank cycle time of five,
the condition ρ < .5 gives b > 1920. The number
of logical and subbanks should be powers
of two. Thus for 64 processors, the configuration
which operates with minimum per processor
interconnection cost and high efficiency
has 1024 logical banks and 2048 physical sub-
banks. If ρ is approximately .5, a queue size
of four is required to have a .99 probability of
a slot being available in the queue according to
Table 4. If the queue size is two, four subbanks
per logical bank are required to bring ρ down
to .3.
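The two design rules can be turned into a small sizing calculation; the sketch below reproduces the l > 960 and b > 1920 figures derived above (helper names are ours, and it assumes a queue size of four and a logical bank cycle time of one):

```c
/* Sketch: derive minimum bank counts from the two design rules,
   rho = q*n*Tc/b < 0.5 and eps = n/(2l) < 0.1, then round up to
   powers of two as the text requires. */
#include <stdio.h>

static long next_pow2(long x) {            /* bank counts are powers of two */
    long p = 1;
    while (p < x) p <<= 1;
    return p;
}

int main(void) {
    double q  = 1.0;   /* per-stream reference rate (fully loaded)      */
    long   n  = 192;   /* independent reference streams (64 CPUs x 3)   */
    double Tc = 5.0;   /* physical subbank cycle time                   */

    long l_min = (long)(5.0 * n) + 1;             /* eps = n/(2l) < 0.1  =>  l > 5n   */
    long b_min = (long)(2.0 * q * n * Tc) + 1;    /* rho = q*n*Tc/b < 0.5 => b > 2qnTc */

    printf("logical banks:  l > %ld  -> use %ld\n", l_min - 1, next_pow2(l_min));
    printf("physical banks: b > %ld  -> use %ld\n", b_min - 1, next_pow2(b_min));
    return 0;
}
```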
Figure 9 shows the performance of these designs
as a function of the percentage of stride
one vectors. The case of two subbanks per
logical bank with a queue size of four is indistinguishable
from the case of four subbanks
per logical bank with a queue size of two as
predicted by the logical bank model. The
case of 2048 banks with a queue size of two is
shown for comparison. The logical bank model
is based on the assumption of random references
and does not account for the presence of
bad strides. Buffering has been shown to reduce
the effect of bad strides under moderate
loading [7], and the designs here do not preclude
the use of address mapping to alleviate
intraprocessor conflicts [5,8].
Reads are more difficult to handle because
hot spots can develop on the return lines as
shown in Figure 10. The performance difference
between reads and writes as a function of
stride is quite dramatic. Even more surprising
is the fact that the performance for reads
actually drops as the percentage of stride one
vectors in the load is increased. The drop occurs
because the arbitration method used for
the return in the simulations is a simple round
robin priority scheme on the logical banks.
When a reference for a particular port is de-
layed, it causes all of the banks waiting for that
port to be delayed. When the system is operating
at a sustained maximal initiation rate,
the ports can never catch up.
Various solutions for solving the hotspot
problem have been examined including additional
output buffering at the physical sub-
banks, additional lines, optimal arbitration,
port handshaking, port-line queues, and additional
return ports. It was found that additional
lines alone do not solve the problem,
while large output queue depths at each sub-bank
are required to make a significant difference
in efficiency.
In optimal arbitration, the logical banks are
examined in round robin succession, but if the
port to which a reference is made is already in
use, that reference does not block the line and
another reference can be issued. Port hand-shaking
is a control mechanism by which a particular
port is blocked from initiating a reference
if there was a return conflict for that port
on the previous cycle. Both methods improve
performance, but they do not bring read performance
up to write performance when most
of the vector strides are one.
Since the drop in read performance appeared
to be due to insufficient return port
bandwidth, two alternative approaches were
developed to increase the bandwidth. The first
approach involved adding port-line queues. In
this design modification, each port contains a
queue for each line so that each line can deposit
a result at a port on each cycle. The
port services one queue per cycle in round
robin succession. The second approach involved
doubling the number of return ports.
One possible design is to have a return port
for the odd-numbered vector elements and another
return port for the even-numbered vector
elements. An alternative design is to designate
one return port for each pair of return
lines. The second alternative would simplify
the port-line interconnection switches but
would complicate the vector register bus structure
internal to the processors.
Figure 11 compares the port-line buffering
to double-return ports for a variety of
strides at maximal loading. Output buffering
at the subbanks has been eliminated. Port-
line buffering improves efficiency but does not
increase the port bandwidth. The doubling
of the number of return ports brings the read
efficiency in line with the write efficiency.
VIII. Discussion
The main result of this paper can be summarized
as follows. If a shared memory system
has sufficient bandwidth to achieve high efficiency
with fast memory, the replacement of
the physical banks by logical banks will allow
the same efficiency to be achieved using considerably
slower memory without significantly
affecting the interconnection costs. This type
of buffering is particularly useful in vector
multiprocessors because vector memory operations
are naturally pipelined, and increases
in memory latency can be partially amortized
over an entire vector operation.
The logical bank model and the detailed
vector simulations given in this paper show
that the number of logical banks scales with
the number of processors and that the bank
cycle time scales with the number of physical
banks. Consequently slower memory can
be used if the logical banks are divided into
more subbanks. This is in contrast to the unbuffered
case where b/(T_c² n) must be constant
for constant efficiency. The change from an
inverse quadratic to an inverse linear dependence
between bank cycle time and the number
of banks is particularly important.
A drawback of memory systems with variable
bank cycle time is that the values become
ready for return at unpredictable times. The
approach of Seznec and Jegou to reorder values
at the bank level does not solve the problem
of values arriving at the processor in the
order issued. The problem can be addressed
in the Cray Y-MP architecture by the addition
of a tag to each return value. The Cray
Y-MP architecture allows three independent
vector memory operations to proceed simulta-
neously. These vector memory operations can
be chained to the vector registers, and vector
registers can in turn be chained to functional
units. Chaining allows the results produced by
one vector operation to be used as input to a
succeeding operation before the first instruction
has completed. The component results
from the first instruction can be used by the
second instruction as they become available.
Pipeline setup can occur before any component
of the previous operation is available [15].
In the current architecture values arrive in order
so each vector register keeps track of the
last value to have arrived. When the values
arrive out of order each vector element must
have a bit indicating whether that value has
arrived. Some additional hardware can be incorporated
to chain forward when the next element
has arrived. The relative order among
different registers is assured by the existing
reservation and issue mechanisms.
It is well known [1] that the memory performance
of shared memory vector processors is
strongly dependent on the type of load which
is generated. Since the choice of load distribution
affects the results, it is desirable to
test the design against realistic loading con-
ditions. Address-trace collection methods [22]
are useful for generating statistical information
about the load, but the information collected
from these types of investigations is difficult
to use directly in testing new designs because
the efficiency depends not only on the
actual addresses, but on the exact time the
references were issued. Because of these diffi-
culties, the approach taken in this paper has
been to develop guidelines which are applicable
over a range of load distributions. The
larger the number of reference streams, the
less serious the impact of the details of one
reference stream on the overall efficiency.
Buffering greatly reduces the dependence
of memory efficiency on the type of load for
writes as illustrated by the runs in the two
previous sections. Most of the dependence on
load occurred because of return conflicts for
reads. A number of alternative designs were
evaluated in an effort to reduce the performance
degradation due to return conflicts. It
was found that doubling the number of return
ports eliminated the difference between reads
and writes over a range of loads.
Acknowledgment
This work was supported by Cray Research
Inc. Computational support was provided by
the University of Texas Center for High Performance
Computing and the National Science
Foundation ILI Program, Grant USE-0950407.
--R
"Vector computer memory bank contention,"
"Effects of buffered memory requests in multiprocessor systems,"
"Or- ganization of semiconductor memories for parallel-pipelined processors,"
"Enhanced dynamic RAM,"
"A simulation study of the Cray X-MP memory sys- tem,"
"A fast path to one memory,"
"Vec- tor access performance in parallel memories using a skewed storage scheme,"
"Address transformations to increase memory performance,"
"Dynamic RAM as secondary cache,"
The Art of Computer Systems Performance Analysis
"A 12-MHz data cycle 4-Mb DRAM with pipeline operation,"
"The prime memory system for array access,"
"Scrambled storage for parallel memory systems,"
"Synchronous dynamic RAM,"
The Cray X-MP/Model 24: A Case Study in Pipelined Architecture and Vector Pro- cessing
"Bus conflicts for logical memory banks on a Cray Y-MP type processor system,"
"Dynamic behavior of memory reference streams for the Perfect Club benchmarks,"
"Charac- terization of memory loads for vectorized programs."
"Optimizing memory throughput in a tightly coupled multiprocessor,"
"Accurate modeling of interconnection networks in vector supercomputers,"
"High-bandwidth interleaved memories for vector processors - a simulation study,"
"Address tracing for parallel ma- chines,"
--TR
--CTR
Hua Lin , Wayne Wolf, Co-design of interleaved memory systems, Proceedings of the eighth international workshop on Hardware/software codesign, p.46-50, May 2000, San Diego, California, United States
A. M. del Corral , J. M. Llaberia, Minimizing Conflicts Between Vector Streams in Interleaved Memory Systems, IEEE Transactions on Computers, v.48 n.4, p.449-456, April 1999
Toni Juan , Juan J. Navarro , Olivier Temam, Data caches for superscalar processors, Proceedings of the 11th international conference on Supercomputing, p.60-67, July 07-11, 1997, Vienna, Austria
Anna M. del Corral , Jose M. Llaberia, Increasing the effective bandwidth of complex memory systems in multivector processors, Proceedings of the 1996 ACM/IEEE conference on Supercomputing (CDROM), p.26-es, January 01-01, 1996, Pittsburgh, Pennsylvania, United States | logical memory banks;buffered memories;vector processors;Cray Y-MP;memory conflicts |
627005 | Flexible and Adaptable Buffer Management Techniques for Database Management Systems. | AbstractThe problem of buffer management in database management systems is concerned with the efficient main memory allocation and management for answering database queries. Previous works on buffer allocation are based either exclusively on the availability of buffers at runtime or on the access patterns of queries. In this paper, we first propose a unified approach for buffer allocation in which both of these considerations are taken into account. Our approach is based on the notion of marginal gains which specify the expected reduction in page faults by allocating extra buffers to a query. Then, we extend this approach to support adaptable buffer allocation. An adaptable buffer allocation algorithm automatically optimizes itself for the specific query workload. To achieve this adaptability, we propose using run-time information, such as the load of the system, in buffer allocation decisions. Our approach is to use a simple queuing model to predict whether a buffer allocation will improve the performance of the system. Thus, this paper provides a more theoretical basis for buffer allocation. Simulation results show that our methods based on marginal gains and our predictive methods consistently outperform existing allocation strategies. In addition, the predictive methods have the added advantage of adjusting their allocation to changing workloads. | Introduction
In relational database management systems, the buffer manager is responsible for all the operations
on buffers, including load control. That is, when buffers become available, the manager needs to
decide whether to activate a query from the waiting queue and how many buffers to allocate to
that query. Figure 1 outlines the major components involved in this issue of buffer allocation. The
buffer pool area is a common resource and all queries - queries currently running and queries in
the waiting queue - compete for the buffers. Like in any competitive environment, the principle
of supply and demand, as well as protection against starvation and unfairness must be employed.
Hence, in principle, the number of buffers assigned to a query should be determined based on the
following factors:
1. the demand factor - the space requirement of the query as determined by the access pattern
of the query (shown as path (1) in Figure 1),
2. the buffer availability factor - the number of available buffers at runtime (shown as path (2)
in
Figure
1), and
3. the dynamic load factor - the characteristics of the queries currently in the system (shown as
path (3) in Figure 1).
Based on these factors, previous proposals on buffer allocation can be classified into the following
groups, as summarized in Table 1.
Allocation algorithms in the first group consider only the buffer availability factor. They include
variations of First-In-First-Out (FIFO), Random, Least-Recently-Used (LRU), Clock, and
Working-Set[6, 10, 15]. However, as they focus on adapting memory management techniques used
in operating systems to database systems, they fail to take advantage of the specific access patterns
exhibited by relational database queries, and their performance is not satisfactory[3].
Allocation strategies in the second group consider exclusively the demand factor, or more specifically
the access patterns of queries. They include the proposal by Kaplan[8] on the implementation
of INGRES[16], the Hot-Set model designed by Sacca and Schkolnick[13, 14], and the strategy used
by Cornell and Yu[5] in the integration of buffer management with query optimization. This approach
of buffer allocation culminates in the work of Chou and DeWitt[3].

Figure 1: Buffer Manager and Related Components

Table 1: Classification of Buffer Allocation Algorithms

                                     access patterns        availability of        dynamic
                                     of queries (demand)    buffers at runtime     workload
FIFO, Random, LRU, etc.                      -                      √                 -
Hot-Set, DBMIN                               √                      -                 -
Flexible algorithms proposed here            √                      √                 -
Adaptable algorithms proposed here           √                      √                 √

They introduce the
notion of a locality set of a query, i.e. the number of buffers needed by a query without causing
many page faults. They propose the DBMIN algorithm that makes allocation equal to the size of
the locality set. DBMIN also allows different local replacement policies. Simulation results in [2, 3]
show that DBMIN outperforms the Hot-Set strategy and the algorithms referred to in the first
group.
While the strength of DBMIN and other algorithms referred to in the second group lies in their
consideration of the access patterns of queries, their weakness arises from their oblivion of runtime
conditions, such as the availability of buffers. This imposes heavy penalties on the performance
of the whole system. This deficiency leads us to study and propose a unified approach in buffer
allocation which simultaneously takes into account the access patterns of queries and the availability
of buffers at runtime. The objective is to provide the best possible use of buffers so as to maximize
the number of page hits. The basis of this approach is the notion of marginal gains which specify
the expected number of page hits that would be obtained in allocating extra buffers to a query. As
we shall see later, simulation results show that allocation algorithms based on marginal gains gives
better performance than DBMIN.
However, one characteristic common to all the above algorithms is that they are static in nature,
and cannot adapt to changes in system loads and the mix of queries using the system. To rectify
the situation, in the second half of this paper, we propose a new family of buffer management
techniques that are adaptable to the workload of the system. The basic idea of our approach is
to use predictors to predict the effect a buffer allocation decision will have on the performance of
the system. These predictions are based not only on the availability of buffers at runtime and
the characteristics of the particular query, but also on the dynamic workload of the system. Two
predictors are considered in this paper: throughput and effective disk utilization. Simulation results
show that buffer allocation algorithms based on these two predictors perform better than existing
ones.
In Section 2 we present mathematical models and derive formulas for computing the expected
number of page faults for different types of database references. Then we introduce in Section 3
the notion of marginal gains, and present flexible buffer allocation algorithms based on marginal
gains. In Section 4 we introduce the predictors and present the policies for adaptable allocation
algorithms. Finally, we present in Section 5 simulation results that compare the performance of
our algorithms with DBMIN.
2 Mathematical Models for Relational Database References
In this section we first review the taxonomy proposed by Chou and DeWitt[2, 3] for classifying
reference patterns exhibited by relational database queries. We analyze in detail the major types
of references, and present mathematical models and formulas calculating the expected number of
page faults using a given number of buffers. These models help to provide formulas for computing
marginal gains and predictive estimates in Sections 3 and 4.
2.1 Types of Reference Patterns
In [2, 3] Chou and DeWitt show how page references of relational database queries can be decomposed
into sequences of simple and regular access patterns. Here we focus on three major types of
references: random, sequential and looping. A random reference consists of a sequence of random
page accesses. A selection using a non-clustered index is one example. The following definitions
formalize this type of reference.

Definition 1 A reference Ref of length k to a relation is a sequence ⟨P_1, ..., P_k⟩ of pages
of the relation to be read in the given order. □

Definition 2 A random reference R_{k,N} of length k to a relation of size N is a reference ⟨P_1, ..., P_k⟩
such that for all 1 ≤ i ≤ k, P_i is uniformly distributed over the set of all pages of the accessed
relation, and P_i is independent of P_j for i ≠ j. □
In a sequential reference, such as in a selection using a clustered index, pages are referenced
and processed one after another without repetition.
Definition 3 A sequential reference S_{k,N} of length k to a relation of size N is a reference
⟨P_1, ..., P_k⟩ such that P_i ≠ P_j for all 1 ≤ i < j ≤ k, i.e. no page is referenced more than once. □
When a sequential reference is performed repeatedly, such as in a nested loop join, the reference
is called a looping reference.
Definition 4 A looping reference L_{k,t} of length k is a reference ⟨P_1, ..., P_k⟩ such that for some
t ≤ k, i) P_i ≠ P_j for all 1 ≤ i < j ≤ t, and ii) P_i = P_{i-t} for all t < i ≤ k. The subsequence
⟨P_1, ..., P_t⟩ is called the loop, and t is called the length of the loop. □
In the following, for these three types of references, we give formulas for computing the expected
number of page faults using a given number of buffers s. Table 2 summarizes the symbols used in
this section.
Definition 5 Let Ef(Ref, s) denote the expected number of page faults caused by a reference Ref
using s buffers, where Ref can be L_{k,t}, R_{k,N} or S_{k,N}. □
  Symbols       Definitions
  k             length of a reference
  s             number of buffers
  f             number of page faults
  N             number of pages in the accessed relation
  t             length of the loop in a looping reference
  L_{k,t}       a looping reference of length k and loop length t
  R_{k,N}       a random reference of length k and relation size N
  S_{k,N}       a sequential reference of length k and relation size N
  Ef(Ref, s)    expected number of faults for reference Ref with s buffers

              Table 2: Summary of Symbols and Definitions
2.2 Random References
Throughout this section, we use P(f, k, s, N) to denote the probability that there are f faults in k
accesses to a relation of size N using s buffers, where s ≥ 1 and 0 ≤ f ≤ k. Thus for a random
reference, the expected number of page faults is given by:

    Ef(R_{k,N}, s) = Σ_{f=0}^{k} f · P(f, k, s, N).                                   (1)
To model a random reference, we set up a Markov chain in the following way. A state in the
Markov chain is of the form [f, k], indicating that there are f faults in k accesses, for f ≤ k. In
setting up the transitions from states to states, there are two cases to deal with. In the first case,
the number f of faults does not exceed the number s of allocated buffers. Thus, there must be f
distinct pages kept in the buffers after f faults. Now consider a state [f, k] in the chain. There
are two possibilities to have f faults in k accesses. If the last access does not cause a page fault,
that is, with probability f/N, then there must be f faults in the first k − 1 accesses. In other words,
there is an arc from state [f, k−1] to state [f, k] with a transition probability of f/N. The other
arc to state [f, k] is from state [f−1, k−1] with a transition probability of (N − f + 1)/N. This
corresponds to the case when there are (f − 1) faults in the first k − 1 accesses, and the last page accessed
is not one of the (f − 1) pages being kept in the buffers. Hence, the case for f ≤ s is summarized
by the following recurrence equation:

    P(f, k, s, N) = (f/N) · P(f, k−1, s, N) + ((N − f + 1)/N) · P(f−1, k−1, s, N),   f ≤ s.   (2)
In the second case, the number f of faults exceeds the number s of allocated buffers. Local
replacement must have taken place, and there are always s pages kept in the buffers. Note however
that since the reference is random, the choice of local replacement policies is irrelevant. The
analysis for the case when f > s is almost identical to the case when f ≤ s, except that the
transition probabilities must be changed to the following: s/N for accessing a page already in the
buffers, and (N − s)/N otherwise. Hence, the situation for f > s is summarized by the following
recurrence equation:

    P(f, k, s, N) = (s/N) · P(f, k−1, s, N) + ((N − s)/N) · P(f−1, k−1, s, N),   f > s.   (3)
In addition to the recurrence Equations 2 and 3, the base case is P(0, 0, s, N) = 1 for all s ≥ 1.
Then, the expected number of page faults Ef(R_{k,N}, s) can be computed according to Equation 1.
Except for special cases, we do not have a simple closed-form formula for Ef(R_{k,N}, s).
Fortunately, the formula below gives very close approximations to the actual values:

    Ef(R_{k,N}, s) ≈  N(1 − ((N−1)/N)^k)          if k ≤ k_0,
                      s + (k − k_0)(N − s)/N       if k > k_0,                        (4)

where k_0 = log(1 − s/N) / log(1 − 1/N) is the expected number of page accesses
that fill all the s buffers. Thus, the top row of the formula corresponds to the case where none
of the buffers that have been filled needs to be replaced. This first case uses Cardenas' formula[1],
which calculates the expected number of distinct pages accessed after k random pages have been
selected out of N possible ones with replacement. More accurate results may be obtained with Yao's
formula[18], which assumes no replacement. All these formulas make the uniformity assumption;
its effects are discussed in [4]. The second row corresponds to the case when local replacement has
occurred. Then, s faults have been generated to fill the s buffers (which take k_0 page accesses on
the average); for the remaining k − k_0 accesses, the chance of finding the page in the buffer pool
is s/N.
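To make the recurrences concrete, the sketch below computes Ef(R_{k,N}, s) exactly by iterating the Markov chain of Equations 2 and 3 and applying Equation 1, and compares the result with the approximation of Equation 4. This is only an illustration of the formulas above; the function and variable names are ours and do not come from the simulation package used later.

```python
import math

def ef_random_exact(k, N, s):
    """Ef(R_{k,N}, s): expected faults of a random reference, via the
    Markov-chain recurrences (Equations 2 and 3) and Equation 1."""
    prob = [1.0] + [0.0] * k              # prob[f] = P(f faults | 0 accesses)
    for _ in range(k):                    # add one access at a time
        new = [0.0] * (k + 1)
        for f, p in enumerate(prob):
            if p == 0.0:
                continue
            held = min(f, s)              # pages held in the buffers after f faults
            new[f] += p * held / N              # re-reference a buffered page: no new fault
            new[f + 1] += p * (N - held) / N    # miss one of the held pages: one more fault
        prob = new
    return sum(f * p for f, p in enumerate(prob))   # Equation 1

def ef_random_approx(k, N, s):
    """Closed-form approximation (Equation 4), using Cardenas' formula."""
    if s >= N:                            # the whole relation fits: no replacement ever
        return N * (1.0 - (1.0 - 1.0 / N) ** k)
    k0 = math.log(1.0 - s / N) / math.log(1.0 - 1.0 / N)  # accesses to fill s buffers
    if k <= k0:
        return N * (1.0 - (1.0 - 1.0 / N) ** k)            # no replacement yet
    return s + (k - k0) * (N - s) / N                      # replacement phase

if __name__ == "__main__":
    for s in (1, 6, 12):   # e.g. a reference like R_{30,15}
        print(s, round(ef_random_exact(30, 15, s), 3), round(ef_random_approx(30, 15, s), 3))
```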
2.3 Sequential References
Recall from Definition 3 that each page in a sequential reference S_{k,N} is accessed only once. Thus,
the probability of a page being re-referenced is 0. Hence, a sequential reference can be viewed as a
degenerate random reference, and the following formula is obvious:

    Ef(S_{k,N}, s) = k,   for all s ≥ 1.                                              (5)
2.4 Looping References
Recall from condition (i) of Definition 4 that within a loop, a looping reference L k;t is strictly
sequential. Thus, based on Equation 5, t page faults are generated in the first iteration of the loop.
Then there are two cases. Firstly, if the number s of allocated buffers is not less than the length t
of the loop, all pages in the loop are retained in buffers, and no more page faults are generated in
the remainder of the reference. The choice of a local replacement policy is irrelevant in this case.
In the second case, if the number s of allocated buffers is less than the length t of the loop, the
local replacement policy plays a major role in determining the number of page faults generated by
a looping reference. Among all local replacement policies, it is not difficult to see that for a looping
reference L_{k,t}, MRU replacement requires the smallest number of faults. The key observation is that
for a looping reference, MRU is identical to the policy which looks ahead and keeps the pages that
will be used in the most immediate future (cf. the table in the example below). Then a well-known
result by Mattson et al[11] for optimal page replacement in operating systems can be applied to
show the optimality of MRU. Thus, in this paper we only present the analysis for MRU, which is
best explained by an example.
Example 1 Consider a looping reference with the loop ⟨a, b, c, d, e⟩. Suppose 3 buffers are
available for the reference. The following table summarizes the situation under MRU.

  access   1  2  3  4  5 |  6  7  8  9 | 10 11 12 13 | 14 15 16 17 | 18 19 20 21 | 22 23 24 25
  page     a  b  c  d  e |  a* b* c  d |  e* a* b  c |  d* e* a  b |  c* d* e  a |  b* c* d  e
  buffers  a  b  c  d  e |  a  b  c  d |  e  a  b  c |  d  e  a  b |  c  d  e  a |  b  c  d  e
  (MRU        a  b  b  b |  e  a  a  a |  d  e  e  e |  c  d  d  d |  b  c  c  c |  a  b  b  b
   first)        a  a  a |  b  e  e  e |  a  d  d  d |  e  c  c  c |  d  b  b  b |  c  a  a  a
The first row of the table indicates the numbers of page accesses. The second row shows the
order the pages are accessed for five iterations of the loop. If a page hit occurs, the access is marked
with an asterisk. The last three rows of the table indicate the pages kept in the buffers after that
page access, with the most recently used page in the top row.
This example demonstrates a few important properties of MRU. First note that there are
five "mini-cycles" of length four which may not align with the iterations of the loop. They are
separated by vertical lines in the table above. These mini-cycles also follow a cyclic pattern,
namely the twenty-sixth access of the table will be exactly the same as the sixth access, and so on.
Furthermore, within each mini-cycle, there are two "resident" pages - those that are not swapped
out in that mini-cycle. For instance, for the first mini-cycle, the resident pages are a and e. Note
that these resident pages are the pages that begin the next mini-cycle, avoiding page faults for
those accesses; this property is exactly the reason why MRU is optimal. □
In general, given a loop of length t and s < t buffers, the mini-cycles are of length t − 1. In other words, in
t − 1 iterations of the loop, there are t different mini-cycles. Furthermore, these mini-cycles recur every
t − 1 iterations of the loop. Then in each mini-cycle, there are s − 1 resident pages. Thus, there
are (t − 1) − (s − 1) = t − s faults in each mini-cycle. Hence, on the average, there are t(t − s)/(t − 1) faults in
each iteration of the loop. Thus, the equation below follows immediately:

    Ef(L_{k,t}, s) =  t + (k/t − 1) · t(t − s)/(t − 1)     if s < t,
                      t                                    if s ≥ t.                  (6)
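For completeness, here is a similar sketch for sequential and looping references (Equations 5 and 6), together with a brute-force MRU simulation that reproduces the 15 faults of Example 1. The code and its names are ours; the MRU simulator is included only as a sanity check of the reconstructed Equation 6.

```python
def ef_sequential(k, N, s):
    """Equation 5: every page of S_{k,N} is touched exactly once, so all
    k accesses fault regardless of the number of buffers s."""
    return k

def ef_looping(k, t, s):
    """Equation 6 (as reconstructed above): L_{k,t} under MRU with s buffers."""
    if s >= t:                               # the whole loop fits: only the first pass faults
        return t
    return t + (k - t) * (t - s) / (t - 1)   # t - s faults per mini-cycle of length t - 1

def mru_faults(loop, iterations, s):
    """Brute-force MRU simulation, used only to sanity-check ef_looping."""
    buf, last_use, faults, clock = set(), {}, 0, 0
    for page in loop * iterations:
        clock += 1
        if page not in buf:
            faults += 1
            if len(buf) == s:                # evict the most recently used buffered page
                buf.remove(max(buf, key=lambda p: last_use[p]))
            buf.add(page)
        last_use[page] = clock
    return faults

if __name__ == "__main__":
    # Example 1: loop <a,b,c,d,e>, 5 iterations, 3 buffers -> 15 faults
    print(mru_faults(list("abcde"), 5, 3), ef_looping(25, 5, 3))
```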
3 Marginal Gains and Flexible Allocation Methods: MG-x-y
In this section we first review DBMIN. Then we introduce the notion of marginal gains. Finally, we
propose flexible buffer allocation algorithms MG-x-y that are designed to maximize total marginal
gains and utilization of buffers.
3.1 Generic Load Control and DBMIN
In order to classify and study various allocation methods, we break down the problem of load
control into two decisions. That is, during load control, a buffer manager determines whether a waiting
reference can be activated, and decides how many buffers to allocate to this reference. Throughout
this paper, we use the term admission policy to refer to the first decision and the term allocation
policy to refer to the second one. Once the admission and allocation policies are chosen, a buffer
allocation algorithm adopting the First-Come-First-Serve policy can be outlined as follows.
Algorithm 1 (Generic) Whenever buffers are released by a newly completed query, or whenever
a query enters an empty queue, perform the following:
1. Use the given admission policy to determine whether the query Q at the head of the waiting
queue can be activated.
2. If this is feasible, use the allocation policy to decide the number s of buffers that Q should
have. Notice that only Q can write on these buffers which are returned to the buffer pool
after the termination of Q. Then activate Q and go back to step 1.
3. Otherwise, halt; all queries must wait for more buffers to be released. □
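Algorithm 1 is essentially a small scheduler with two pluggable policies. The sketch below is our own rendering of that structure (class and method names are assumptions), not the implementation used in the experiments; concrete admission and allocation policies, such as those of DBMIN or MG-x-y, would be passed in as callables.

```python
from collections import deque

class BufferManager:
    """Skeleton of Algorithm 1: First-Come-First-Serve load control with
    pluggable admission and allocation policies (structure and names are ours)."""

    def __init__(self, total_buffers, admit, allocate):
        self.available = total_buffers
        self.admit = admit            # admit(query, available)    -> bool
        self.allocate = allocate      # allocate(query, available) -> int
        self.waiting = deque()

    def submit(self, query):
        """A query arrives; trigger load control if it enters an empty queue."""
        self.waiting.append(query)
        if len(self.waiting) == 1:
            self._load_control()

    def release(self, buffers):
        """A query completes and returns its buffers; trigger load control."""
        self.available += buffers
        self._load_control()

    def _load_control(self):
        while self.waiting:
            q = self.waiting[0]
            if not self.admit(q, self.available):
                break                          # step 3: all queries wait for more buffers
            s = self.allocate(q, self.available)
            self.available -= s                # these buffers become private to q
            self.waiting.popleft()
            q.activate(s)                      # step 2: activate and go back to step 1
```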
Note that for all the allocation algorithms considered in this paper, DBMIN and our proposed
methods alike, if a query consists of more than one reference, it is given a number of buffers that
is equal to the sum of buffers allocated to each relation accessed by the query. The allocation to
each relation is determined by the reference pattern as described in the previous section, and each
relation uses its own allocated buffers throughout. See [2] for a more detailed discussion. In ongoing
work, we study how to allocate buffers on a per query basis. Before we describe DBMIN using the
general framework outlined in Algorithm 1, let us define a few symbols that are used throughout
the rest of this paper. We use the term A to denote the number of available buffers, and the terms
s min and s max to denote respectively the minimum and maximum numbers of buffers that a buffer
allocation algorithm is willing to assign to a reference.
For DBMIN, the admission policy is simply to activate a query whenever the specified number
of buffers are available, that is s_min ≤ A. As for the allocation policy, it depends on the type of the
reference. For a looping reference, the locality set size is the total number of pages of the loop [2,
pp. 52]. DBMIN requires that the entire locality set be allocated [2, pp. 50], i.e. s_min = s_max = t,
where t is the length of the loop 1 . As for a random reference, it is proposed in [2, 3] that a
random reference may be allocated 1 or b yao buffers where b yao is the Yao estimate on the average
number of pages referenced in a series of random record accesses[18]. In practice, the Yao estimates
are usually too high for allocation. For example, for a blocking factor of 5, the Yao estimate of
accessing 100 records of a 1000-record relation is 82 pages. Thus, DBMIN almost always allocates
1 buffer to a random reference, i.e. s_min = s_max = 1. As a preview, some of our algorithms may
also make use of the Yao estimate. But a very important difference is that unlike DBMIN which
allocates either 1 or 82 buffers in this example, our algorithms may allocate any number of buffers in the
range from 1 to 82, depending on conditions such as buffer availability and dynamic workload. Finally,
for a sequential reference, DBMIN specifies s_min = s_max = 1.
1 In [2], Chou remarks that MRU is the best replacement policy for a looping reference under sub-optimal allocation.
However, as far as we know, no method is proposed in [2, 3] to allocate sub-optimally.
Note that while DBMIN improves on traditional algorithms like Working-Set, LRU, etc., it is not
flexible enough to make full use of available buffers. This inflexibility is illustrated by the fact that
the range [s degenerates to a point. In other words, DBMIN does not allow sub-optimal
allocations to looping references, and not allow random references the luxury of being allocated
many buffers even when those buffers are available. These problems lead us to the development of
the notion of marginal gains and flexible buffer allocation algorithms MG-x-y to be discussed next.
3.2 Marginal Gains
The concepts of marginal gain and marginal utility have been widely used in economic theory
since the 18th century[9]. Here we apply the approach to database buffer allocation.
Definition 6 For s ≥ 2, the marginal gain of a reference Ref using s buffers is defined as:

    mg(Ref, s) = Ef(Ref, s − 1) − Ef(Ref, s),

where Ref can be L_{k,t}, R_{k,N} or S_{k,N}. □
For a given reference Ref , the marginal gain value mg(Ref; s) specifies the expected number of
extra page hits that would be obtained by increasing the number of allocated buffers from s − 1
to s. Note that these values take into account the reference patterns and the availability of buffers
simultaneously. In essence, the marginal gain values specify quantitatively how efficiently a reference
uses its buffers. Moreover, this quantification is at a granularity level finer than the locality set sizes
used in DBMIN. Thus, while DBMIN can only allocate on a per locality-set-size basis, allocation
algorithms based on marginal gains can be more flexible and allocate on a per buffer basis. Below
we analyze how the marginal gain values for different types of references vary with the number of
buffers. This analysis is crucial in designing the flexible algorithms to be presented.
For a looping reference L_{k,t}, Equation 6 dictates that for any allocation s < t, extra page hits
would be obtained by allocating more and more buffers to the reference, until the loop can be fully
accommodated in the buffers. The allocation s = t is the optimal allocation that generates the
fewest page faults. Furthermore, any allocation s > t is certainly wasteful, as the extra buffers
are not used. The graph for looping references in Figure 2 summarizes the situation. The typical
marginal gain values of looping references are in the order of magnitude of O(10) or O(10^2). For
example, if a reference goes through a loop of 50 pages 20 times, the marginal gain value for all
buffers s ≤ 50 is 19.4.
Similarly, based on Equations 2, 3 and 4, it is easy to check that the marginal gain values
of random references are positive, but are strictly decreasing as the number of allocated buffers
s increases, as shown in Figure 2. Eventually, the marginal gain value becomes zero, when the
allocation exceeds the number of accesses or the number of pages in the accessed relation. Note
that, unlike DBMIN, a buffer allocation algorithm based on marginal gains may allocate the idle
[Figure 2: Typical Curves of Marginal Gain Values - mg plotted against s for looping (L_{k,t}),
random (R_{k,N}) and sequential (S_{k,N}) references]
buffers to the random reference, as long as the marginal gain values of the reference indicate that
there are benefits to allocate more buffers to the reference. In fact, even if the number of idle
buffers exceeds the Yao estimate, it may still be beneficial to have an allocation beyond the Yao
estimate. It is however worth pointing out that the marginal gain values of a random reference
are normally lower than those of a looping reference. The highest marginal gain value of a random
reference is typically in the order of magnitude of O(1) or O(10^{-1}). For example, for the random
reference discussed earlier (i.e. accessing 100 records from 200 pages), the highest marginal gain
value is about 0.5.
Finally, as shown in Equation 5, the marginal gain values of sequential references are always
zero, indicating that there is no benefit in allocating more than one buffer to such references (cf.
Figure 2).
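Because every algorithm that follows is driven by marginal gains, a one-line helper on top of the expected-fault formulas is all that is needed to compute them. The sketch below (names are ours) reproduces the 19.4 figure quoted above for a loop of 50 pages traversed 20 times.

```python
def marginal_gain(ef, s, *args):
    """mg(Ref, s) = Ef(Ref, s-1) - Ef(Ref, s) for s >= 2 (Definition 6);
    `ef` is any expected-fault function that takes the buffer count last."""
    return ef(*args, s - 1) - ef(*args, s)

def ef_looping(k, t, s):
    """Equation 6, repeated here so the sketch is self-contained."""
    return t if s >= t else t + (k - t) * (t - s) / (t - 1)

if __name__ == "__main__":
    # a loop of 50 pages traversed 20 times (k = 1000):
    # mg = (k - t)/(t - 1) = 950/49, i.e. about 19.4 for every 2 <= s <= 50
    print(round(marginal_gain(ef_looping, 10, 1000, 50), 1))
```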
3.3 MG-x-y
As we have shown above, the marginal gain values of a reference quantify the benefits of allocating
extra buffers to the reference. Thus, in a system where queries compete for a fixed number of
buffers, the marginal gain values provide a basis for a buffer manager to decide which queries
should get more buffers than others. Ideally, given N free buffers, the best allocation is the one
that does not exceed N and that maximizes the total marginal gain values of queries in the waiting
queue. However, such an optimization will be too expensive and complicated for buffer allocation
purposes. Furthermore, to ensure fairness, we favor buffer allocation on a First-Come-First-Serve
basis. In the following we present a class MG-x-y of allocation algorithms that achieve high marginal
gain values, maximizes buffer utilization, and are fair and easy to compute. It follows the generic
framework outlined in Algorithm 1. Like DBMIN, the allocation policy of MG-x-y presented below
allocates on a per reference basis.
Allocation Policy 1 (MG-x-y) Let R be the reference at the head of the waiting queue, and
A > 0 be the number of available buffers. Moreover, let x and y be the parameters of MG-x-y to
be explained in detail shortly.
Case 1: R is a looping reference L_{k,t}.
1. If the number A of available buffers covers the length t of the loop (i.e. A ≥ t), allocate t
buffers to the reference.
2. Otherwise, if the number of available buffers is too low (i.e. A < x% · t), allocate no buffers
to this reference.
3. Otherwise (i.e. x% · t ≤ A < t), give all A buffers to the reference R.
Case 2: R is a random reference R_{k,N}.
1. As long as the marginal gain values of R are positive, allocate to R as many buffers as possible,
but not exceeding the number A of available buffers and y (i.e. allocation = min(A, y)).
Case 3: R is a sequential reference S_{k,N}.
1. Allocate 1 buffer. □
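The three cases of Allocation Policy 1 translate almost directly into code. The following sketch uses a dictionary-based description of a reference that we invented for illustration; it is not the authors' implementation, and the cap for random references simply encodes the observation that marginal gains vanish once the allocation exceeds the number of accesses or the number of pages.

```python
def mg_x_y_allocate(ref, available, x, y):
    """Sketch of Allocation Policy 1 (MG-x-y).  `ref` is a dict such as
    {'type': 'looping', 'loop': t}, {'type': 'random', 'pages': N, 'accesses': k}
    or {'type': 'sequential'}.  Returns the number of buffers to grant
    (0 means the reference keeps waiting)."""
    if available < 1:
        return 0
    if ref['type'] == 'looping':
        t = ref['loop']
        if available >= t:                  # optimal allocation: hold the whole loop
            return t
        if available < (x / 100.0) * t:     # a sub-optimal allocation is not worthwhile yet
            return 0
        return available                    # accept a sub-optimal allocation
    if ref['type'] == 'random':
        # allocate up to y buffers, but only while extra buffers can still
        # produce page hits, i.e. no more than k accesses or N pages
        return min(available, y, ref['pages'], ref['accesses'])
    return 1                                # sequential: one buffer suffices
```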
MG-x-y has two parameters, x and y. The x parameter is used to determine allocations for
looping references. As described in Case 1 above, MG-x-y first checks to see if the number of
available buffers exceeds the length of the loop of the looping reference. Recall from the previous
section and Figure 2 that the allocation which accommodates the whole loop minimizes page faults
and corresponds to the highest total marginal gain values of the reference. Thus, if there are enough
buffers, then like DBMIN, MG-x-y gives the optimal allocation. However, if there are not enough
buffers, MG-x-y checks to determine whether a sub-optimal allocation is beneficial, via the use of
parameter x.
In general, the response time of a query has two components: the waiting time and the processing
time, where the former is the time from the arrival of the query to the time the query is activated,
and the latter is the time from activation to completion. The processing time is minimized with
the optimal allocation. But to obtain the optimal allocation, the waiting time may become too
long. On the other hand, while a sub-optimal allocation may result in longer processing time, it
may at the end give a response time shorter than the optimal allocation, if the reduction in waiting
time more than offsets the increase in processing time. Hence, in trying to achieve this fine balance
between waiting time and processing time, MG-x-y uses the heuristic that a sub-optimal allocation
is only allowed if the total marginal gain values of that allocation are not too "far" away from the
optimal. This requirement translates to the condition shown in Case 1 that a sub-optimal allocation
must be at least x% of the optimal one.
In contrast to DBMIN, MG-x-y may allocate extra buffers to a random reference, as long as
those extra buffers are justified by the marginal gain values of the reference. However, there is a
pitfall in considering only the marginal gain values of the random reference. As an example,
suppose a random reference is followed by a looping reference in the waiting queue. In situations
where buffers are scarce, giving one more buffer to the random reference implies that there is one
fewer buffer to give to the looping reference. But since the marginal gain values of a looping
reference are usually higher than those of a random reference, it is desirable to save the buffer from
the random reference and to allocate the buffer to the looping reference instead. Since MG-x-y
operates on a First-Come-First-Serve basis, MG-x-y uses the heuristic of imposing a maximum on
the number of buffers allocated to a random reference. This is the purpose of the y parameter in
MG-x-y.

  allocation            allocation policy (s_min, s_max)                       admission
  algorithms            looping           random              sequential      policy
  DBMIN                 (t, t)            (1, 1)              (1, 1)          s_min ≤ A
  MG-x-y                (x% · t, t)       (1, y)              (1, 1)          s_min ≤ A
  predictive methods    (f(load), t)      (f(load), b_yao)    (1, 1)          s_min ≤ A

              Table 3: Characteristics of Buffer Allocation Algorithms
The first two rows of Table 3 summarize the similarities and differences between DBMIN and
MG-x-y. Recall from the previous section that s min and s max denote respectively the minimum and
maximum numbers of buffers that a buffer allocation algorithm is willing to assign to a reference.
In fact, it is easy to see that MG-x-y generalizes DBMIN in that MG-100-1 (i.e. x=100%, y=1)
is the same as DBMIN. As we shall see in Section 5, as we allow more flexible values for x and y
than DBMIN, MG-x-y performs considerably better.
Note that to obtain the best performance, the x and y parameters need to be determined
according to the mix of queries to use the system. This may involve experimenting with different
combinations of values of x and y 2 . Clearly, this kind of experimentation is expensive. Moreover,
these optimal values are vulnerable to changes in the mix of queries. Thus, in the next section, we
explore further the idea of flexible buffer allocation, and we develop adaptable allocation algorithms
that dynamically choose the s min and s max values using run-time information. The basis of our
approach is to use a queueing model to give predictions about the performance of the system, and
to make the s min and s max parameters vary according to the state of the queueing model. In the
next section, we describe the proposed queueing model, as well as the ways the model can be used
to perform buffer allocation in a fair (FCFS), robust and adaptable way.
4 Adaptable Buffer Allocation
4.1 Predictive Load Control
As described in the previous section, both DBMIN and MG-x-y are static in nature and their
admission policy is simply: s_min ≤ A, where s_min is a pre-defined constant, for each type of
reference. Here we propose adaptable methods that use dynamic information, so that s min is now
a function of the workload, denoted by f(load) in Table 3. Thus in considering admissions, these
methods not only consider the characteristics of the reference and the number of available buffers,
2 A good starting point is our experience.
  Symbols      Definitions
  A            number of available buffers
  s_min        minimum number of buffers assigned to a reference
  s_max        maximum number of buffers assigned to a reference
  s_opt        maximum number of buffers usable by a reference
  TP           throughput
  mpl          multiprogramming level
  n            number of active queries
  ncq          number of concurrent queries (active + waiting for buffers)
  T_{C,i}      CPU load of Ref_i
  T_{D,i}      disk load of Ref_i
  t_D          time for one disk access
  t_C          time to process one page in main memory
  T_C          (harmonic or geometric) average of CPU loads
  T_D          (harmonic or geometric) average of disk loads
  ρ            relative load (disk vs CPU)
  U_D          disk utilization
  U_{D,i}      disk utilization due to Ref_i
  EDU          effective disk utilization
  s_i          number of buffers assigned to Ref_i
  w_i          portion of "avoidable" ("wasted") page faults of Ref_i

        Table 4: Summary of Symbols and Definitions for the queueing model
but they also take into account the dynamic workload of the system. More specifically, a waiting
reference is activated with s buffers, if this admission is predicted to improve the performance of the
current state of the system. In more precise notations, suppose Pf denotes a performance measure
(e.g. the throughput), R_cur = {Ref_1, ..., Ref_n} represents all the references (i.e. queries) currently in
the system, with s_1, ..., s_n buffers respectively, and Ref is the reference under consideration
for admission. Then s_min is the smallest s that will improve the Pf predictor: Pf(R_new, s_new) ≥
Pf(R_cur, s_cur), where R_new = R_cur ∪ {Ref}, s_new = (s_1, ..., s_n, s), s_cur = (s_1, ..., s_n),
and the symbol Pf(R, s) denotes the performance of the system with R active references and s
buffer allocations. Thus, the reference Ref is admitted only if it will not degrade the performance
of the system 3 .
In this paper we consider two performance measures or predictors: throughput TP and effective
disk utilization EDU . Before we analyze the above predictors and discuss the motivation behind
our choices, we outline a queueing model that forms the basis of these predictors. At the end of
this section, we discuss how these predictors can be incorporated with various allocation policies
to give different adaptable buffer allocation algorithms. In Section 5 we present simulation results
comparing the performance of these adaptable algorithms with MG-x-y and DBMIN.
4.2 Queueing Model
We assume a closed queueing system with two servers: one CPU and one disk. Figure 3 shows the
system, and Table 4 summarizes the symbols used for the queueing model. Within the system, there
are n references (jobs) Ref_1, ..., Ref_n whose CPU and disk loads are T_{C,i} and T_{D,i} respectively, for 1 ≤ i ≤ n.
3 There is however one exception; see Section 4.4 for a discussion.

[Figure 3: Queueing system - a closed network with a queue for buffers, a CPU and a disk]
Furthermore, Ref i has been allocated s i buffers. Therefore, if every disk access costs
t D (e.g. 30 msec), and the processing of a page after it has been brought in core costs t C (e.g. 2
msec), we have the following equations:

    T_{C,i} = k_i · t_C,        T_{D,i} = Ef(Ref_i, s_i) · t_D,

where k_i is the number of pages accessed by Ref_i, and Ef(Ref_i, s_i) can be computed using the
formulas listed in Section 2.
The general solution to such a network can be calculated; see for example [17, pp. 451-452].
It involves an n-class model with each job being in a class of its own. But while it gives accurate
performance measures such as throughput and utilizations, this solution is expensive to compute,
since it requires time exponential in the number of classes. As ease of computation is essential in
load control, we approximate it with a single-class model. We assume that all the jobs come from
one class, with the overall CPU load TC and the overall disk load TD being the averages of the
respective loads of the individual references. TC and TD may be the harmonic or geometric means
depending on the predictors to be introduced in the following.
Before we proceed to propose two performance predictors for allocation, note that in this paper,
we focus on a single-disk system, mainly to show the effectiveness of the proposed buffer allocation
schemes. A multiple disk system would introduce the issue of data placement; once this has been
decided, we could extend our queueing model to have multiple disks. Queueing systems with
multiple servers are studied in [17].
4.3 Predictor TP
Since our ultimate performance measure is the throughput of the system, a natural predictor is to
estimate the throughput directly. In general, there are two ways to try to increase the throughput
of a system: increase the multiprogramming level mpl, or decrease the disk load of the jobs by
allocating more buffers to the jobs. However, these two requirements normally conflict with each
other, as the total number of buffers in a system is fixed. Hence, for our first predictor TP, we
propose the following admission policy:
Admission Policy 1 (TP) Activate the reference if the maximal allocation is possible; otherwise,
activate only if the reference will increase the throughput. □
In the policy described above, a maximal allocation is one which assigns to the reference as many
buffers as it needs, up to the number of buffers that are available. To
implement the above policy, we provide formulas to compute the throughput. The solution to the
single class model is given in [17]:
UD is the utilization of the disk given by:
where ρ is the ratio of the disk load versus the CPU load, ρ = T_D / T_C.
To derive the average loads TC and TD , we use the harmonic means of the respective loads. The
reason is that the equations of the queueing systems are based on the concept of "service rate"
which is the inverse of the load. Thus, using the harmonic means of the loads is equivalent to using
the arithmetic means of the rates, i.e.

    T_C = n / (Σ_{i=1}^{n} 1/T_{C,i}),        T_D = n / (Σ_{i=1}^{n} 1/T_{D,i}).

Notice that the calculation of the throughput requires O(1) operations, if the buffer manager keeps
track of the values TD and TC .
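As a concrete instantiation of the TP predictor, the sketch below solves a closed two-station (CPU plus disk) network with the standard single-class formula, using harmonic-mean loads as described above. The specific closed-network expression for U_D and TP is our assumption (the paper's Equations 9-11 may differ in form), all names are ours, and the special case of always admitting a maximally allocated reference is omitted.

```python
def throughput(disk_loads, cpu_loads):
    """TP predictor: standard closed two-station (CPU + disk) queueing network
    with a single job class and harmonic-mean loads (formula assumed)."""
    n = len(disk_loads)
    if n == 0:
        return 0.0
    td = n / sum(1.0 / t for t in disk_loads)      # harmonic mean of disk loads
    tc = n / sum(1.0 / t for t in cpu_loads)       # harmonic mean of CPU loads
    rho = td / tc                                  # relative load, disk vs. CPU
    if abs(rho - 1.0) < 1e-12:
        ud = n / (n + 1.0)                         # balanced case
    else:
        ud = rho * (1.0 - rho ** n) / (1.0 - rho ** (n + 1))   # disk utilization
    return ud / td                                 # job completions per unit time

def admit_tp(current, candidate):
    """Admission Policy 1 (TP), simplified: activate the waiting reference only
    if the predicted throughput does not decrease.  `current` and `candidate`
    hold (disk load, CPU load) pairs."""
    cur = throughput([d for d, _ in current], [c for _, c in current])
    after = current + [candidate]
    new = throughput([d for d, _ in after], [c for _, c in after])
    return new >= cur

if __name__ == "__main__":
    running = [(0.9, 0.06), (0.3, 0.06)]   # hypothetical (disk, CPU) loads in seconds
    print(admit_tp(running, (1.5, 0.2)))
```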
4.4 Predictor EDU
Although very intuitive, using the estimated throughput as the criterion for admission may lead
to some anomalies. Consider the situation when a long sequential reference is at the head of the
waiting queue, while some short, maximally allocated random references are currently running in
the system. Now admitting the sequential reference may decrease the throughput, as it increases
the average disk load per job. However, as the optimal allocation for the sequential reference is
only one buffer, activating the sequential reference is reasonable. Exactly for this reason, Admission
Policy 1 is "patched up" to admit a reference with s max buffers, even if this admission decreases
the throughput.
This anomaly of the throughput as a predictor leads us to the development of our second
predictor - Effective Disk Utilization (EDU). Consider the following point of view of the problem:
There is a queue of jobs (i.e. references), a system with one CPU and one disk, and a buffer pool
that can help decrease the page faults of the jobs. Assuming that the disk is the bottleneck (which
is the case in all our experiments, and is usually the case in practice), a reasonable objective is
to make the disk work as efficiently as possible. There are two sources of inefficient uses of the
disk: (1) the disk is sitting idle because there are very few jobs, or (2) the disk is working on page
requests that could have been avoided if enough buffers had been given to the references causing
the page faults. The following concept captures these observations.
[Figure 4: Effective disk utilization - the total disk time U_D splits into an idle part, a "wasted"
part and an effective part for each of the n references, each reference getting roughly U_D/n]
Definition 7 The effective disk utilization EDU is the portion of time that the disk is engaged in
page faults that could not be avoided even if the references were each assigned their optimal number
of buffers (infinite or, equivalently, s_opt, which is the maximum number of buffers usable by a
reference). □
Hence, for our second predictor EDU, we use the following admission policy:
Admission Policy 2 (EDU) Activate the reference if it will increase the effective disk utilization. □

Mathematically, the effective disk utilization is expressed by:

    EDU = Σ_{i=1}^{n} U_{D,i} · (1 − w_i),                                            (12)

where U_{D,i} represents the disk utilization due to Ref_i and w_i is the portion of "avoidable" (or
"wasted") page faults caused by Ref_i:

    w_i = (Ef(Ref_i, s_i) − Ef(Ref_i, ∞)) / Ef(Ref_i, s_i).

For practical calculations, we use s_opt instead of ∞; clearly, s_opt is 1, t and b_yao for sequential,
looping and random references respectively. Note that the above equation relates the notion of EDU
to the marginal gain values introduced in the previous section. The term Ef(Ref_i, s_i) − Ef(Ref_i, s_opt)
can be rewritten as Σ_{j=s_i+1}^{s_opt} mg(Ref_i, j). Thus w_i, which intuitively represents the
portion of avoidable page faults, can also be regarded as some form of normalized marginal gain
values.
Informally, Equation 12 specifies that at every unit time, the disk serves Ref i for UD;i units of
time. Out of that, Ref i wastes w i UD;i units of time. Summing over all jobs, we get Equation
12.
Figure
4 illustrates the concept of effective disk utilization. The horizontal line corresponds to
a 100% disk utilization; the dotted portion stands for the idle time of the disk, the dashed parts
correspond to the "wasted" disk accesses and the sum of the solid parts corresponds to the effective
disk utilization.
Note that, for I/O bound jobs, every job has approximately an equal share of the total disk
utilization U_D, even though the jobs may have different disk loads. Thus, we have the formula:

    U_{D,i} = U_D / n,

which simplifies Equation 12 to:

    EDU = (U_D / n) · Σ_{i=1}^{n} (1 − w_i).
Notice that we have not yet used a single-class approximation. We only need this approximation
to calculate the disk utilization UD . Using the exact n-class model [17], we find out that the
geometric averages give a better approximation to the disk utilization. Thus, the average CPU
and disk loads are given by:

    T_C = (Π_{i=1}^{n} T_{C,i})^{1/n},        T_D = (Π_{i=1}^{n} T_{D,i})^{1/n}.

Based on these equations,
the disk utilization UD can be computed according to Equations 10 and 11. Like calculating the
TP predictor, the calculation of EDU requires O(1) steps, if the buffer manager keeps track of the
loads and the total "wasted" disk accesses.
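Under the same assumptions, the EDU predictor can be sketched as follows, with geometric-mean loads and the equal-share simplification U_{D,i} ≈ U_D/n. The caller supplies, for each active reference, its disk and CPU loads and its expected faults under the current and the optimal allocation (computed with the formulas of Section 2); names and structure are ours.

```python
from math import prod

def disk_utilization(disk_loads, cpu_loads):
    """U_D from the same closed two-station model as for TP, but with
    geometric-mean loads as suggested in the text (formula assumed)."""
    n = len(disk_loads)                           # assumes n >= 1
    td = prod(disk_loads) ** (1.0 / n)            # geometric mean of disk loads
    tc = prod(cpu_loads) ** (1.0 / n)             # geometric mean of CPU loads
    rho = td / tc
    if abs(rho - 1.0) < 1e-12:
        return n / (n + 1.0)
    return rho * (1.0 - rho ** n) / (1.0 - rho ** (n + 1))

def effective_disk_utilization(refs):
    """EDU (Equation 12 with the equal-share simplification).  Each entry of
    `refs` is a dict {'ef_now': .., 'ef_opt': .., 'disk': .., 'cpu': ..}, where
    ef_now/ef_opt are expected faults with the current and optimal allocation."""
    if not refs:
        return 0.0
    n = len(refs)
    ud = disk_utilization([r['disk'] for r in refs], [r['cpu'] for r in refs])
    useful = 0.0
    for r in refs:
        w = (r['ef_now'] - r['ef_opt']) / r['ef_now']   # portion of avoidable faults
        useful += 1.0 - w
    return (ud / n) * useful          # each I/O-bound job gets about U_D/n of the disk
```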
4.5 Adaptable Buffer Allocation Algorithms
Thus far we have introduced two predictors: TP and EDU. We have presented the admission policies
based on these predictors and provided formulas for computing these predictions. To complete the
design of adaptable buffer allocation algorithms, we propose three allocation policies, which are
rules to determine the number of buffers s to allocate to a reference, once the reference has passed
the admission criterion.
Allocation Policy 2 (Optimistic) Give as many buffers as possible, i.e. s = min(A, s_max). □
Allocation Policy 3 (Pessimistic) Allocate as few buffers as necessary to random references
(i.e. s_min), but as many as possible to sequential and looping references. □
The optimistic policy tends to give allocations as close to optimal as possible. However, it may
allocate too many buffers to random references, even though these extra buffers may otherwise be
useful for other references in the waiting queue. The pessimistic policy is thus designed to deal
with this problem. But a weakness of this policy is that it unfairly penalizes random references. In
particular, if there are abundant buffers available, there is no reason to let the buffers sit idle and
not to allocate these buffers to the random references.
Allocation Policy 4 (2-Pass) Assign tentatively buffers to the first m references in the waiting
queue, following the pessimistic policy. Eventually, either the end of the waiting queue is reached,
or the (m+1)-th reference in the waiting queue cannot be admitted. Then perform a second pass
and distribute the remaining buffers equally to the random references that have been admitted
during the first pass. □
In essence, when the 2-Pass policy makes allocation decisions, not only does it consider the reference
at the head of the waiting queue, but it also takes into account as many references as possible in
the rest of the queue.
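A compact rendering of the 2-Pass policy is sketched below; the admission test and the s_min/s_max bounds are passed in as callables, and everything here is our own illustration rather than the authors' code.

```python
def two_pass_allocate(waiting, available, admit, s_min, s_max):
    """Sketch of Allocation Policy 4 (2-Pass).  `waiting` is the FCFS queue of
    reference descriptors; admit(ref, s) is the admission test; s_min(ref) and
    s_max(ref) give the allocation bounds.  Returns a list of (ref, buffers)."""
    granted, random_slots = [], []
    for ref in waiting:                      # first pass: pessimistic allocation
        want = s_min(ref) if ref['type'] == 'random' else min(available, s_max(ref))
        if want < 1 or want > available or not admit(ref, want):
            break                            # head of queue blocks (FCFS)
        granted.append([ref, want])
        if ref['type'] == 'random':
            random_slots.append(len(granted) - 1)
        available -= want
    if random_slots and available > 0:       # second pass: share the leftover buffers
        extra = available // len(random_slots)
        for i in random_slots:
            ref, s = granted[i]
            granted[i][1] = min(s + extra, s_max(ref))
    return [(ref, s) for ref, s in granted]
```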
  query   access path           join          access path                reference type
  type    of selection          method        of join                    (data pages only)
  I       clustered index       -             -                          S_{50,500}
  II      non-clustered index   -             -                          R_{30,15}
  III     non-clustered index   -             -                          R_{30,150}
  IV      sequential scan       index join    non-clustered index on B   R_{100,15}
  V       sequential scan       index join    non-clustered index on B   R_{30,150}
  VI      clustered index       nested loop   sequential scan on B       L_{300,15}

              Table 5: Summary of Query Types
  relation A    10,000 tuples
  relation C    3,000 tuples
  tuple size    182 bytes
  page size     4K

              Table 6: Details of Relations
Following the generic framework described in Algorithm 1, the three allocation policies can be
used in conjunction with both TP and EDU, giving rise to six potential adaptable buffer allocation
algorithms. As a naming convention, each algorithm is denoted by the pair "predictor-allocation"
where "predictor" is either TP or EDU, and "allocation" is one of: o, p, 2, representing optimistic,
pessimistic and 2-Pass allocation policies respectively. For instance, EDU-o stands for the algorithm
adopting the EDU admission policy and the optimistic allocation policy.
5 Simulation Results
In this section we present simulation results on the performance of MG-x-y and the adaptable
methods in a multiuser environment. As Chou and DeWitt have shown in [2, 3] that DBMIN
performs better than the Hot-Set algorithm, First-In-First-Out, Clock, Least-Recently-Used and
Working-Set, we only compare our algorithms with DBMIN.
5.1 Details of Simulation
In order to make direct comparison with DBMIN, we use the simulation program Chou and DeWitt
used for DBMIN, and we experiment with the same types of queries. Table 5 summarizes the details
of the queries that are chosen to represent varying degrees of demand on CPU, disk and memory
[2, 3].
Table
6 and Table 7 show respectively the details of the relations and the query mixes we
used. In the simulation, the number of concurrent queries varies from 2 to 16 or 24. Each of these
concurrent queries is generated by a query source which cannot generate a new query until the
last query from the same source is completed. Thus, the simulation program simulates a closed
[Table 7: Summary of Query Mixes - the percentage of each query type I (S_{50,500}), II (R_{30,15}),
III (R_{30,150}), IV (R_{100,15}), V (R_{30,150}) and VI (L_{300,15}) in each of the four query mixes]
[Figure 5: Relative Throughput: Mix 1 (mainly looping references), no Data Sharing - relative
throughput versus the number of concurrent queries for MG-50-y, TP-o and EDU-o, normalized by DBMIN]
system 4 . See [2, 3] for more details.
5.2 Effectiveness of Allocations to Looping References
The first mix of queries consists of 70% of queries of type VI (looping references) and 10% each of
queries of types I, II and IV (sequential, random and random references respectively). The purpose
of this mix is to evaluate the performance of MG-x-y and adaptable algorithms in situations where
there are many looping references to be executed. The x parameter of MG-x-y is set to one of
the following: 100, 85, 70 and 50. The y parameter is one of 1, 6, 12 and 15. Figure 5 shows
the throughputs of DBMIN, MG-100-12, MG-50-y's and the adaptable algorithms running with
4 Besides buffer management, concurrency control and transaction management is another important factor affecting
the performance of the whole database system. While the simulation package does not consider transaction
management, see [2] for a discussion on how the transaction and lock manager can be integrated with a buffer manager
using DBMIN. Since our algorithms differ from DBMIN only in load control, the integration proposed there also
applies to a buffer manager using our algorithms.
[Figure 6: Average Waiting Time and Buffer Utilization: Mix 1 - average waiting time (relative to
DBMIN) and buffer utilization (%) versus the number of concurrent queries for MG-50-y, TP-o, EDU-o and EDU-2]
different number of concurrent queries using 35 buffers. The results for MG-70-y's and MG-85-y's
are similar to those for MG-50-y's, and they are omitted for brevity. The results for the pessimistic
approach are typically only slightly better than those for DBMIN, and thus these performance
figures are not plotted in the graphs for brevity. The major reason why the pessimistic approach
gives poor performance is that the approach is being too aggressive in allowing too many queries
to get into the system. Note that to obtain the throughput values, we run our simulation package
repeatedly until the values stabilized. [2] discusses how the simulation package can be used to obtain
results within a specified confidence interval. Figure 5 also includes the throughputs of the "ideal"
algorithm that has infinitely many buffers and can therefore support any number of concurrent
queries requiring any number of buffers. Furthermore, to highlight the increase or decrease relative
to DBMIN, the values are normalized by the values of DBMIN, effectively showing the ratio in
throughput.
Let us focus our attention on the MG-x-y algorithms first. All four MG-50-y algorithms show
considerable improvement when compared with DBMIN. In particular, since the allocations for
random and sequential references are the same for both MG-50-1 and MG-100-1 (i.e. DBMIN),
the improvement exhibited by MG-50-1 relative to MG-100-1 is due solely to the effectiveness of
allocating buffers sub-optimally to looping references, whenever necessary. As the y value increases
from 1 to 15, the throughput increases gradually until y becomes 15. The increase in throughput
can be attributed to the fact that the random queries are benefited by the allocation of more buffers.
But when too many buffers (e.g. are allocated to a random query, some of the buffers are
not used efficiently. Thus, the throughput of MG-50-15 is lower than that of MG-50-12. Finally, the
adaptable algorithms TP-o, EDU-o and EDU-2 perform comparably to the best MG-x-y scheme
which is MG-50-12 in this case.
Note that to a certain extent, the algorithm MG-100-12 represents the algorithm that allocates
[Figure 7: Relative Throughput vs Total Buffers: Mix 1 - relative throughput versus the total
number of buffers for DBMIN and the MG-x-12 algorithms, at 8 concurrent queries]
buffers to minimize the number of page faults. However, such "optimal" allocations may induce
high waiting time 5 for queries and low buffer utilization and throughput of the system. The two
graphs in Figure 6 demonstrate the situation. The graph on the left shows the average waiting time
of queries. Values are again normalized by the values of DBMIN. The graph on the right shows the
average percentage of buffers utilized.
Thus far, we have seen how the performance of MG-x-y varies with different values of x and
y.
Figure
7 shows how the relative throughput varies with the number of total buffers used in
running this mix of queries with 8 concurrent queries. The graphs for other multiprogramming
levels exhibit similar patterns. Figure 7 shows the situations when sub-optimal allocations are
allowed by MG-50-12, MG-70-12 and MG-85-12. For instance, when the number of total buffers
becomes 30, MG-50-12 allows sub-optimal allocations to looping references, and the throughput of
the system increases significantly when compared with other algorithms. As the total number of
buffers increases, MG-70-12 and MG-85-12 follow MG-50-12 and perform better than DBMIN. This
discrepancy can be explained by considering a looping reference at the head of the waiting queue.
Because DBMIN insists on giving the optimal allocation to this reference (18 in this case), this
reference is blocking other queries from using the buffers. Now when this reference finally manages
to get the optimal number of buffers (i.e. when the total number of buffers becomes 36), DBMIN
performs not too much worse than the others. In this case, the difference in throughput is due
to the effective allocations to random references by the MG-x-12 algorithms. If the graph extends
to higher numbers of total buffers, we expect that a similar pattern of divergence in throughput
5 The waiting time of a query is the time from arrival to activation.
[Figure 8: Relative Throughput: Mix 2 (mainly random references), no Data Sharing - relative
throughput versus the number of concurrent queries for MG-100-y, TP-o and EDU-o, normalized by DBMIN]
appears before every multiple of 18, though the magnitude will probably decrease.
5.3 Effectiveness of Allocations to Random References
The second mix of queries consists of 45% of queries of type II, 45% of queries of type IV (both
random references), and 10% of queries of type I (sequential references). The purpose of this
mix is to evaluate the effectiveness of MG-x-y and the adaptable schemes on allocating buffers to
random references. Since there are no looping references in this mix, the x parameter of MG-x-y
is irrelevant and is simply set to 100. The y parameter is one of the following: 1, 8, 13 and 15.
Figure
8 shows the ratio of throughputs of DBMIN, MG-100-y's and the adaptable algorithms
running with different number of concurrent queries using 35 buffers. As before, the results for
the pessimistic policies are not explicitly included in the figure. For this mix of queries, algorithms
adopting the pessimistic policies behave exactly as DBMIN (i.e. MG-100-1) in allocating one buffer
to each random reference.
Let us focus our attention on the MG-x-y algorithms first. Compared with DBMIN (i.e. MG-
100-1), all three other MG-100-y algorithms show significant increases in throughput. As the y
value increases from 1 to 15, the throughput increases gradually until y becomes 15. The increase
in throughput can be attributed to the fact that the random queries are benefited by the allocation
of more buffers. But as explained in the previous section, when y becomes 15, some of the buffers
allocated to random queries are no longer used efficiently. Thus, the throughput of MG-100-15
drops below that of MG-100-13, and even that of MG-100-8.
As for the adaptable algorithms, EDU-o and TP-o perform comparably to MG-100-13 and the
[Figure 9: Relative Throughput: Mix 1, full Data Sharing - relative throughput versus the number
of concurrent queries for MG-50-y, TP-o, EDU-o and EDU-2, normalized by DBMIN]
"ideal" algorithm. But for EDU-2, though better than DBMIN, it does not perform as well as the
others. This is because every time during the first pass of allocations (cf. Allocation Policy 4),
EDU-2 has the tendency of activating many random references. As a result, the number of buffers
per random reference allocated by EDU-2 is lower than that allocated by the other algorithms,
thereby causing more page faults and degrading overall performance.
5.4 Effect of Data Sharing
In the simulations carried out so far, every query can only access data in its own buffers. However,
our algorithms can support sharing of data among queries in exactly the same way as DBMIN.
More specifically, when a page is requested by a query, the algorithm first checks to see if the page
is already in the buffers owned by the query. If not, and if data is allowed to be shared by the
system, the algorithm then tries to find the page from the buffers where the query is allowed to
share. If the page is found, the page is given to the query, without changing the original ownership
of the page. See [2, 3] for more details.
To examine the effect of data sharing on the relative performance of our algorithms relative to
DBMIN, we also run simulations with varying degrees of data sharing. Figure 9 shows the relative
throughputs of DBMIN, MG-50-y's and the adaptable algorithms running the first mix of queries
with buffers, when each query has read access to the buffers of all the other queries, i.e. full
data sharing.
Compared with Figure 5 for the case of no data sharing, Figure 9 indicates that data sharing
favors our algorithms. For other query mixes we have used, the same behaviour occurs. In fact,
[Figure 10: Switching Mixes: (a) Stage 1 - Mix 4, (b) Stage 2 - Mix 3 - relative throughput versus
the number of concurrent queries for the best MG-x-y, TP-o and EDU-o, normalized by DBMIN]
this phenomenon is not surprising because sub-optimal allocations to looping references give even
better results if data sharing is allowed. It is obvious that with data sharing, the higher the buffer
utilization, the higher the throughput is likely to be. In other words, the inflexibility of DBMIN in
buffer allocation becomes even more costly than in the case of no data sharing.
5.5 Comparisons with MG-x-y - Adaptability
Among all the simulations we have shown thus far, the adaptable allocation algorithms TP-o, EDU-
perform comparably to the best of MG-x-y. The reason is that we have a fixed mix of
queries, with few types of queries, and we have selected carefully the x and y parameters that are
best suited for this specific mix. But in the simulations described below, we shall see that having
one set of statically chosen values for x and y creates some problems for MG-x-y.
The first problem of MG-x-y is due to the fact that each MG-x-y scheme has only one x and
one y value for all kinds of looping and random references. Consider the situation where there are
two kinds of random references: the first one with a low Yao estimate and high selectivity, and the
other one with a high Yao estimate and low selectivity. For example, consider Query Types II and V,
respectively. Query Type II (R_{30,15}) has a Yao estimate of 12, making random accesses on 15 pages.
On the other hand, Query Type V (R_{30,150}) has a Yao estimate of 27, making random accesses on
150 pages. For a query of the first type, it is
beneficial to allocate as close to the Yao estimate as possible. But for a query of the second type,
it is not worthwhile to allocate many buffers to the query. Thus, for any MG-x-y scheme, using
one y value is not sufficient to handle the diversity of queries. This problem is demonstrated by
running a simulation on the third query mix which consists of the two kinds of random references
mentioned above (Query Type II and query Type V). Figure 10(b) shows the relative throughput
[Figure 11: Mix 4 to Mix 3: Instantaneous Throughput before and after Switching - instantaneous
throughput over simulation time for DBMIN, MG-50-18 and TP-o]
of running this mix of queries with buffers. When compared with the best result of MG-x-y (i.e.
MG-50-16 in this case), the adaptable algorithms perform better, handling the diversity of queries
more effectively.
The second weakness of MG-x-y is its inability to adjust to changes in query mixes. Figure 10
shows the result of running a simulation that consists of two stages. In the first stage, the query
mix (i.e. mix 4) consists of random references only. As shown in Figure 10(a), the best result
of MG-x-y (i.e. MG-50-18 in this case) performs comparably to the adaptable algorithms. But
when the second stage comes and the query mix changes from mix 4 to mix 3, MG-50-18 cannot
adapt to the changes, as illustrated by Figure 10(b). In contrast, the adaptable algorithms adjust
appropriately.
Figure 11 shows how the instantaneous throughputs of DBMIN, MG-50-18 and TP-o fluctuate
before and after switching the mixes. The instantaneous throughput values are obtained by
calculating the average throughputs within 10-second windows. The thin line in each graph plots
the fluctuation of the instantaneous throughputs, and the solid line represents the (overall) average
throughput of the mix; the moment of switching mixes is also marked. The figure indicates that, at
the time of switching, the instantaneous throughputs of DBMIN fluctuate greatly, eventually tapering
off to a lower average throughput. For MG-50-18, the fluctuation after switching the mixes
is greater than before. As for TP-o and other adaptable schemes, since they are designed to be
sensitive to the characteristics of queries currently running in the system, fluctuation is expected.
5.6 Summary
Our simulation results show that the MG-x-y algorithms are effective in allocating flexibly to
queries. Compared with DBMIN, MG-x-y algorithms give higher throughput, higher buffer utilization
and lower waiting time for queries. The increase in performance is even higher when data
sharing is allowed.
Our simulation results also indicate that adaptable allocation algorithms are more effective and
more flexible than DBMIN, with or without data sharing. They are capable of making allocation
[Table 8: Costs of Running the Algorithms (Mix 2, data sharing) - for each allocation algorithm,
the average time taken in load control (msec), the average response time (sec), and the ratio of
load control time to response time]
decisions based on the characteristics of queries, the runtime availability of buffers, and the dynamic
workload. When compared with the MG-x-y algorithms, they are more adaptable to changes, while
behaving as flexibly as the MG-x-y schemes. Moreover, no sensitivity analysis is needed for the
adaptable methods.
The advantages of the adaptable schemes listed above seem to indicate that the adaptable
algorithms should be used in all situations. The only concern is the amount of time they take to
make load control decisions. Table 8 lists the average time a query took in load control and the
average response time of a query, running query mix 2 with 4 concurrent queries (cf. Figure 8).
These figures are obtained by running our simulation package in a UNIX environment on a DEC-
2100 workstation. It is easy to see that the MG-x-y algorithms take much less time to execute than
the adaptable ones. Thus, in situations where query mixes are not expected to change too often,
and where sensitivity analysis can be performed inexpensively to find good values for the x and
y parameters, it is beneficial to use the MG-x-y algorithms instead of the adaptable ones. In any
other case, the adaptable algorithms are more desirable. Even though the computation of these
algorithms takes much longer than that of the static ones, the extra time is worthwhile. After all, 3
milliseconds (i.e. for the worst case EDU-2) can be more than offset by saving one disk read, and
3 milliseconds constitute less than 1% of the total response time of a query.
As for the two predictors TP and EDU, both of them perform quite well. While EDU is probably
more accurate for a single disk system, TP is more extendible to multi-disk systems, and is slightly
easier to compute (cf. Table 8). As for the allocation policies, the winners are the 2-Pass approach
and the optimistic one. The pessimistic approach generally gives poor results. The 2-Pass approach
on the other hand performs well in most situations, with the exception of heavy workloads consisting
primarily of random references. In this case, the 2-Pass policy degenerates to the pessimistic one,
because there are normally no buffers left over to be distributed in the second pass. Another practical
disadvantage of the 2-Pass policy is that it cannot activate queries instantaneously because queries
admitted in the first pass may have to wait for the second pass for additional buffers. Thus, it
is slower than the algorithms that only require one pass. Finally, the optimistic allocation policy
performs very well in most situations. In addition, the optimistic policy is simple, easy to implement
and, unlike the 2-Pass approach, is capable of making instantaneous decisions.
6 Conclusions
The principal contributions reported in this paper are summarized in the following list.
1. We have proposed and studied flexible buffer allocation.
• It is a unified approach to buffer allocation in which both the access patterns of queries
and the availability of buffers at runtime are taken into consideration. This is achieved
through the notion of marginal gains, which give an effective quantification of how
efficiently buffers can be used.
• The MG-x-y allocation algorithms are designed to achieve high total marginal gains and
to maximize buffer utilization. Generalizing DBMIN, which is the same as MG-100-1, they
can allocate buffers more flexibly.
• Simulation results show that flexible buffer allocation is effective and promising, and the
MG-x-y algorithms give higher throughput, higher buffer utilization and lower waiting
time for queries than DBMIN.
2. We have proposed and studied adaptable buffer allocation.
• Extending the flexible buffer allocation approach, it incorporates runtime information
into buffer allocation. Based on a simple but accurate single-class queueing model, it
predicts the impact of each buffer allocation decision.
- Two performance predictors - TP and EDU - are proposed. In general, a waiting query
is only activated if its activation does not degrade the performance of the system, as
estimated by the predictors. In addition, three different allocation policies are stud-
ied: optimistic, pessimistic and 2-pass. Combined with the two predictors, six different
adaptable buffer allocation algorithms are considered.
- Simulation results indicate that the adaptable algorithms are more effective and flexible
than DBMIN. When compared with the flexible algorithms MG-x-y, the adaptable ones
are capable of adapting to changing workloads, while performing as flexibly as MG-x-y.
Though the adaptable algorithms are more costly to compute, the extra time is well spent. Finally, simulation
results show that both performance predictors TP and EDU perform equally well, and
that the optimistic and 2-pass allocation policies are effective. Taking implementation
complexity into account, TP-o seems to be the best choice.
3. We have set up mathematical models to analyze relational database references. These models
provide formulas to compute marginal gains and the performance predictions based on TP
and EDU.
In ongoing research, we are investigating how to extend our predictors to systems with multiple
disks, and how to set up analytic models for references with data sharing. We are also studying
whether the flexible and predictor approach can be incorporated into the framework proposed by
Cornell and Yu[5], in order to improve the quality of query plans generated by a query optimizer.
Finally, we are interested in deriving formulas for computing marginal gains of more complex queries
like sort-merge joins.
Acknowledgements
We would like to thank H. Chou and D. DeWitt for allowing us to use
their simulation program for DBMIN so that direct comparison can be made. We would also like
to thank anonymous referees for many valuable suggestions and comments.
--R
Analysis and Performance of Inverted Data Base Structures
Buffer Management of Database Systems
An Evaluation of Buffer Management Strategies for Relational Database Systems
Implication of Certain Assumptions in Data Base Performance Evalua- tion
Integration of Buffer Management and Query Optimization in Relational Database Environment
Principles of Database Buffer Management
Predictive Load Control for Flexible Buffer Allocation
Buffer Management Policies in a Database Environment
History of Marginal Utility Theory
Database Buffer Paging in Virtual Storage Systems
Evaluation Techniques for Storage Hierarchies
Flexible Buffer Allocation based on Marginal Gains
A Mechanism for Managing the Buffer Pool in a Relational Database System using the Hot Set Model
Buffer Management in Relational Database Systems
Performance of a Database Manager in a Virtual Memory System
The Design and Implementation of INGRES
Probability and Statistics with Reliability
Approximating Block Accesses in Database Organizations
--TR
--CTR
Wenguang Wang , Richard B. Bunt, Simulating DB2 buffer pool management, Proceedings of the 2000 conference of the Centre for Advanced Studies on Collaborative research, p.13, November 13-16, 2000, Mississauga, Ontario, Canada
Donghee Lee , Jongmoo Choi , Jong-Hun Kim , Sam H. Noh , Sang Lyul Min , Yookun Cho , Chong Sang Kim, On the existence of a spectrum of policies that subsumes the least recently used (LRU) and least frequently used (LFU) policies, ACM SIGMETRICS Performance Evaluation Review, v.27 n.1, p.134-143, June 1999
Peter Scheuermann , Junho Shim , Radek Vingralek, WATCHMAN: A Data Warehouse Intelligent Cache Manager, Proceedings of the 22th International Conference on Very Large Data Bases, p.51-62, September 03-06, 1996
Jongmoo Choi , Sam H. Noh , Sang Lyul Min , Eun-Yong Ha , Yookun Cho, Design, Implementation, and Performance Evaluation of a Detection-Based Adaptive Block Replacement Scheme, IEEE Transactions on Computers, v.51 n.7, p.793-800, July 2002
database disk buffer management algorithm based on prefetching, Proceedings of the seventh international conference on Information and knowledge management, p.167-174, November 02-07, 1998, Bethesda, Maryland, United States
Jongmoo Choi , Sam H. Noh , Sang Lyul Min , Yookun Cho, An implementation study of a detection-based adaptive block replacement scheme, Proceedings of the Annual Technical Conference on 1999 USENIX Annual Technical Conference, p.18-18, June 06-11, 1999, Monterey, California
D. Lee , J. Choi , J. H. Kim , S. H. Noh , S. L. Min , Y. Cho , C. S. Kim, LRFU: A Spectrum of Policies that Subsumes the Least Recently Used and Least Frequently Used Policies, IEEE Transactions on Computers, v.50 n.12, p.1352-1361, December 2001
Jongmoo Choi , Sam H. Noh , Sang Lyul Min , Yookun Cho, Towards application/file-level characterization of block references: a case for fine-grained buffer management, ACM SIGMETRICS Performance Evaluation Review, v.28 n.1, p.286-295, June 2000
Jong Min Kim , Jongmoo Choi , Jesung Kim , Sam H. Noh , Sang Lyul Min , Yookun Cho , Chong Sang Kim, A low-overhead high-performance unified buffer management scheme that exploits sequential and looping references, Proceedings of the 4th conference on Symposium on Operating System Design & Implementation, p.9-9, October 22-25, 2000, San Diego, California
Ronald P. Doyle , Jeffrey S. Chase , Omer M. Asad , Wei Jin , Amin M. Vahdat, Model-based resource provisioning in a web service utility, Proceedings of the 4th conference on USENIX Symposium on Internet Technologies and Systems, p.5-5, March 26-28, 2003, Seattle, WA
Chris Gniady , Ali R. Butt , Y. Charlie Hu, Program-counter-based pattern classification in buffer caching, Proceedings of the 6th conference on Symposium on Opearting Systems Design & Implementation, p.27-27, December 06-08, 2004, San Francisco, CA | buffer management;performance analysis;relational databases |
627029 | Distributed Memory Compiler Design For Sparse Problems. | AbstractThis paper addresses the issue of compiling concurrent loop nests in the presence of complicated array references and irregularly distributed arrays. Arrays accessed within loops may contain accesses that make it impossible to precisely determine the reference pattern at compile time. This paper proposes a run time support mechanism that is used effectively by a compiler to generate efficient code in these situations. The compiler accepts as input a Fortran 77 program enhanced with specifications for distributing data, and outputs a message passing program that runs on the nodes of a distributed memory machine. The runtime support for the compiler consists of a library of primitives designed to support irregular patterns of distributed array accesses and irregularly distributed array partitions. A variety of performance results on the Intel iPSC/860 are presented. | Introduction
On modern scalable multicomputers it is widely recognized that, in addition to detecting
and exploiting available parallelism, reducing communication costs is crucial in achieving
good performance. Existing systems such as DINO [34], Fortran D [12], Superb [44],
and communication optimizations only in the presence of regular
array reference patterns within loops, such as message blocking, collective communications
utilization, and message coalescing and aggregation. Parallel loop nests, however, often
contain array references that cannot be analyzed at compile time. Such array references
are classified as irregular.
Methods are described here that deal with parallel loops and loops that contain reduction
type output dependencies. The methods work for loops that do not contain cross-
processor loop-carried dependencies or cross-processor loop-independent dependencies. A
cross-processor dependence is one whose end points cross processors. A loop-carried dependence
involves a write to a location in one iteration, followed by a read to the same
location at a later iteration. A loop-independent dependence involves a write to a location
followed by a read to the same location in the same loop iteration. Data parallelism is
achieved by partitioning arrays across the nodes of the machine and each processor performs
computations on a part of the array. When parallelism is achieved by partitioning
loop iterations between processors, cross processor loop independent dependences will not
occur.
Runtime optimization techniques have been developed that are designed to reduce communication
costs for irregular references in the following ways:
- judicious partitioning of data and computational work,
- combining element messages into a larger message, thereby reducing the number of messages transmitted, and
- eliminating redundant communication of array elements.
To demonstrate that these optimizations can be performed automatically by a compiler,
a prototype compiler called ARF (Arguably Fortran) was developed. ARF accepts a simplified
Fortran 77 program enhanced with specifications for distributing data. It outputs
a program that executes directly on the nodes of a distributed memory machine, in this
case the Intel iPSC/860. The compiler partitions computations and analyzes array references
to classify them as regular or irregular. For irregular references it performs runtime
optimizations to reduce communication costs.
Since the development of ARF a significant amount of work has been done in standardizing
extensions to the Fortran language. The High Performance Fortran Forum (HPFF),
a joint effort between the academic community and industry, has agreed on a preliminary
set of data parallel programming language extensions [16], [20]. It has been heavily influenced
by experimental languages such as Fortran D [12], Vienna Fortran [45], Crystal
[7], [24], [23], [25], Kali [22], DINO [32], and CM Fortran [9]. The HPFF decided to
defer consideration of language extensions targeting irregular problems; over the next few
years, the HPFF plans to consider possible irregular problem language extensions.
1.1 Overview of PARTI and ARF
These runtime optimizations are implemented using PARTI (Parallel Automated Runtime
Toolkit at ICASE) runtime preprocessing procedures which can be embedded by the com-
piler. These procedures (1) support a shared name space, (2) provide the infrastructure
needed to implement non-uniform data mappings efficiently, (3) coordinate interprocessor
data movement, and (4) manage the storage of, and access to, copies of off-processor data.
The compiler consists of two distinct layers. The bottom layer is the library of PARTI
runtime procedures that are designed to support irregular patterns of distributed array
accesses efficiently. The top layer is a compiler that carries out program transformations
by embedding calls to the PARTI primitives in the original program. PARTI procedures
support a variety of operations on globally named distributed array indices. The distributed
arrays can be partitioned in a non-uniform manner where each distributed array element is
assigned to an arbitrary processor. The operations include: off-processor data fetches, data
stores, and accumulations to off-processor memory locations. A multicomputer program
is generated in which all distributed memory accesses are carried out using embedded
procedures.
We emphasize that the goal of this project is not to develop a production quality
compiler, but to demonstrate that run time optimizations can be generated automatically
and efficiently by a compiler. Most of the complexity of this system is in the PARTI
procedures. The PARTI procedures have been developed so that transformations needed
to embed the appropriate primitives can be implemented with relative ease in distributed
memory compilers.
This paper begins with a description of the language that is accepted and how it
relates to Fortran D. An outline of the compiler phases is described in Section 3. Section
4 describes the PARTI run time primitives that have been implemented and incorporated
in the runtime systems employed by the compiler. Section 5 provides details of the code
generation and optimizations performed by the compiler. The compiler is described in the
context of two example code kernels. The kernels are written in ARF and are translated
by the compiler to message passing code. Section 6 reports experimental performance
measurements for the codes compiled by ARF. Section 7 describes the relationship between
this research and other related projects.
The ARF compiler was developed to demonstrate the feasibility of our approach for irregular
problem compilation; consequently, the ARF language extensions are limited in scope.
For the sake of clarity and better understanding we show how our language extensions are
related to a real data-parallel language like Fortran D. We first describe the syntax of the
Fortran D language extensions that provide the same functionality as the ARF extensions.
We then go on to describe the corresponding ARF language extensions.
2.1 Fortran D Language
Fortran D is a version of Fortran 77 enhanced with data decomposition specifications. In
this section, we present a brief description of the features of Fortran D that can be used
to support irregular problems. ALIGN and DECOMPOSITION are two key Fortran D data
decomposition constructs. A DECOMPOSITION is an abstract problem domain. ALIGN is
used to map arrays with respect to a decomposition. ALIGN guarantees that if elements
from different arrays are mapped to the same element of a decomposition, they will reside
on the same processor. The simplest alignment occurs when an array is exactly mapped
onto a decomposition. The DISTRIBUTE statement specifies the mapping of a decomposition
to a physical machine. Distributions can be regular; consider, for example, the BLOCK
distribution. If we have n$proc processors and N elements in a decomposition (where
n$proc divides N), BLOCK distribution divides a decomposition into contiguous chunks of
size N/n$proc, assigning one block to each processor. Fortran D also allows user specified
irregular distributions through the use of a mapping array, which itself is typically dis-
tributed. A mapping array contains processor numbers used to specify the processor that
owns each individual decomposition element.
Below is an example that specifies an irregular partitioning in Fortran D:
S1      REAL x(1000), y(1000)
S2      INTEGER map(1000)
S3      DECOMPOSITION reg(1000), irreg(1000)
S4      ALIGN map with reg
S5      DISTRIBUTE reg(BLOCK)
        . set values of map array using a mapping method .
S6      ALIGN x, y with irreg
S7      DISTRIBUTE irreg(map)
In this example, arrays x and y are the data arrays and map is the mapping array. The
array map is mapped onto decomposition reg (statement S4). Decomposition reg in turn
is distributed by blocks across the processors (statement S5). Arrays x and y are aligned
with the decomposition irreg (statement S6). Finally, decomposition irreg is irregularly
partitioned across processors using the distributed mapping array map (statement S7). The
result of the above statements is that array elements x(i) and y(i) are assigned to processor
map(i).
It is sometimes convenient to ignore certain array dimensions when mapping an array
to a decomposition. All array elements in the unassigned dimensions are collapsed and
mapped to the same index of the decomposition. For instance, ALIGN z(i,j) with map(j)
means that the second dimension of z is aligned with map. In this example it means
that we map column j of z to processor map(j).
Fortran D also provides a directive, the on clause [21], to specify a processor which
will execute each iteration of a loop. For instance, if we have n$proc processors
do i=1,n on mod(i,n$proc)
end do
assigns loop iterations to processors in a round-robin fashion.
2.2 ARF Extensions
Distributed arrays declared in ARF can either be partitioned between processors in a
regular manner (e.g. equal sized blocks of contiguous array elements assigned to each
processor) or in an irregular manner. ARF extensions explicitly specify how each array is
partitioned between processors; ARF does not make use of decomposition statements like
the ones found in Fortran D. ARF arrays are irregularly partitioned across processors using
a distributed mapping array. Below is an ARF code fragment that has the same effect as
the first code fragment presented in the previous section. Irregular distribution is specified
as follows:
S1      distributed regular using block integer map(1000)
        . set values of map array using a mapping method .
S2      distributed irregular using map real x(1000), y(1000)
Statement S1 declares an integer array map and states that the array will be distributed
blockwise across processors. Statement S2 declares real arrays x and y and assigns array
elements x(i) and y(i) to processor map(i).
ARF arrays can only be distributed in a single dimension; the distributed dimension
must be the last declared array dimension. For instance, the ARF statement:
distributed irregular using map real x(10,1000)
would assign column i of x to processor map(i).
ARF also contains an on clause, for example:
distributed do i=1,n on partition
means that work associated with iteration i is to be carried out on processor partition(i).
3 Compiler Support for Irregular Computations
Compile time analysis can make it possible to generate highly efficient code when a compiler
can gather enough information through analysis of the source code. In order to generate
efficient code, a compiler needs to have tractable representations of array subscript func-
tions. It also needs tractable representations of how data and computational work are to
be partitioned [18], [15], [19], [35].
For instance, consider the Fortran D example:
S1      REAL x(1000), y(1000)
S2      DECOMPOSITION blocks(1000)
S3      ALIGN x, y with blocks
S4      DISTRIBUTE blocks(BLOCK)
S5      DO i = 1, 750
S6         x(i) = y(i+2)
S7      END DO
Assume that each processor is responsible for computing values of data it owns (i.e.
the owner computes rule [18]). If we have 4 processors, each processor will own contiguous
chunks of 250 elements of arrays x and y. Since the subscript function for x(i) is the identity,
the owner computes rule implies that each of the first three processors will execute 250 iterations of
loop S5. In this example, it is clear by inspection that non-local accesses occur during each
processor's last two loop iterations. In addition, it is easy to determine which non-local data
must be obtained. For instance, the processor responsible for loop iterations 1 through 250
will need the first two values of y stored on the processor responsible for loop iterations 251
through 500. A variety of researchers [18], [15] have implemented techniques to generate
optimized calls to message passing routines given compile-time information about array
subscript functions, array distribution and the distribution of loop iterations.
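To make the block-distribution analysis concrete, the following C sketch (purely illustrative; the array size, processor count and the y(i+2) reference are taken from the example above, and the function names are invented) computes which off-processor values of y a given processor would need under the owner computes rule.

#include <stdio.h>

/* BLOCK distribution of N elements over nproc processors, chunk = N/nproc
   (N is assumed to be divisible by nproc); indices are 1-based as in Fortran. */
static int owner(int i, int chunk)       { return (i - 1) / chunk; }
static int local_index(int i, int chunk) { return (i - 1) % chunk + 1; }

int main(void) {
    const int N = 1000, nproc = 4, chunk = N / nproc;   /* 250 elements per processor       */
    const int me = 0;                                   /* the processor owning i = 1..250  */
    for (int i = 1; i <= 750; i++) {
        if (owner(i, chunk) != me) continue;            /* owner computes rule for x(i)     */
        int j = i + 2;                                  /* the reference y(i+2)             */
        if (owner(j, chunk) != me)
            printf("need y(%d) from processor %d (local index %d there)\n",
                   j, owner(j, chunk), local_index(j, chunk));
    }
    return 0;
}

Running the sketch reports exactly the two values y(251) and y(252), matching the observation above.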
This paper deals with situations where compile time analysis fails because crucial information
is not available until a program executes. There are a variety of applications in
which array subscript functions cannot be known at compile time. In many cases, these
subscript functions are given by integer arrays; consider the reference y(ia(i)) in the code
fragment below.
S1      REAL x(1000), y(1000)
S2      INTEGER ia(1000)
S3      DECOMPOSITION blocks(1000)
S4      ALIGN x, y, ia with blocks
S5      DISTRIBUTE blocks(BLOCK)
        . ia gets assigned values at runtime .
S6      do i = 1, 750
S7         x(i) = y(ia(i))
S8      end do
Compile time analysis is difficult to do when we have irregular array distributions or
irregular partitions of loop iterations. In the example below, it is impossible to predict at
compile time the data that needs to be communicated because the distribution of x and y
are not known until runtime.
S1      REAL x(1000), y(1000)
S2      INTEGER map(1000)
S3      DECOMPOSITION reg(1000), irreg(1000)
S4      ALIGN map with reg
S5      DISTRIBUTE reg(BLOCK)
        . set values of map array using a mapping method .
S6      ALIGN x, y with irreg
S7      DISTRIBUTE irreg(map)
S8      do i = 1, 750
S9         x(i) = y(i+2)
S10     end do
The ARF compiler is able to handle parallel loops (marked by distributed do) in
which array references have subscripts that are given by functions of:
1. the loop index,
2. scalars that are not redefined in the loop body, and
3. arrays indexed by just the loop index.
Examples of such index functions are i, i+m, and ia(i), assuming that i is the index of a distributed do
loop and m is a scalar that is not redefined in the loop; note that ia could be distributed in a regular or an
irregular manner. The ARF compiler cannot, in general, handle loops in which reference
patterns are not of this simple form. For instance, the compiler presented here could not
deal with the following loop:
S1      distributed do i = 1, 100 on partition
S2         do j = 1, num(i)
S3            col(j) = . . .
S4            . . . = x(col(j))
S5         end do
S6      end do
One difficulty arises in the reference x(col(j)) in statement S4. The values of the
subscript array col(j) are computed in statement S3. Statement S3 in turn lies within a
loop S2 whose upper bound is itself determined by the values taken on by array num. Das
et. al. [11] describes program slicing techniques that can be used to extend the methods
described here to a broader set of constructs.
Except for one special case, the ARF compiler is unable to handle loops with loop
carried dependencies. The special case involves accumulation type dependencies. The
decision to include this special case greatly expands the number of irregular application
codes to which these methods apply. The ARF compiler is able to recognize accumulations
to an indirectly addressed array as shown in the following example.
        distributed do i = 1, n on partition
           x(ia(i)) = x(ia(i)) + . . .
        distributed enddo
The commutative and associative property of the "+" operator allows the ARF compiler
to postpone all accumulations to the distributed array x until the end of the loop
computation.
3.1 The Inspectors/Executors
Inspectors and executors perform optimizations to reduce communication costs for non-local
accesses arising from irregular array references. Each processor pre-computes which
data it will have to send or receive. Communication volume can be reduced by pre-fetching
a single copy of each off-processor datum, even if it is referenced several times. The number
of messages can be reduced by pre-fetching large quantities of off-processor data in a single
message.
3.2 Inspector
The inspector loop carries out the preprocessing needed to reduce the volume of communication
and the number of messages transmitted. Figure 1 illustrates how the inspector is
generated by the ARF compiler for a parallel loop. Hash tables, called hashed-caches, are
used for temporary storage. Run time primitives initialize the hashed caches, store and
retrieve data from them and flush the hashed caches when appropriate. During program
execution, a hash table records off-processor fetches and stores them enabling the user to
recognize when more than one reference is being made to the same off-processor distributed
array element. This way only one copy of that element must be fetched or stored.
During the inspector phase, we carry out a set of interprocessor communications that
allows us to anticipate exactly which send and receive communication calls each processor
must execute before and after executing the loop.
To carry out the inspector loop described above, we must be able to find the owner of
each distributed array element. Regular distributions are those for which a simple function can be used
to compute the processor and local offset of a particular array element; a one-dimensional array
distributed in a block manner is an example. On the
other hand, irregular distributions are those where we attempt to partition in a way that
balances the following two objectives:
1. to have each processor perform approximately the same amount of work, and
2. to minimize communication overhead.
Foreach processor P
  - Generate a clone of the partitioned loop nest
  - Insert code to perform the following:
       Foreach rhs irregular array reference:
          generate a list of off-processor data to be fetched
       Foreach lhs irregular array reference:
          generate a list of data to be stored off-processor
       Exchange messages with other processors to determine the copies
       of non-local data to be sent and received during the executor phase
Figure 1: Simplified Inspector for a single loop nest
Typically, it is not possible to express the resulting array partitions in a simple way. By
allowing an arbitrary assignment of distributed array elements to processors, we take on
the additional burden of maintaining a data structure that describes the partitioning. The
size of this data structure must be the same as the size of the irregularly distributed
array. We call this data structure a distributed translation table. Distributed translation
tables are partitioned between processors in a simple manner (described in Section 4.3).
Distributed translation tables are accessed during the inspector phase to determine where
each data element resides.
Once the preprocessing is completed, every processor knows exactly which non-local
data elements it needs to send to and receive from the other processors. Once finished, we
are in a position to carry out the necessary communication and computation.
3.3 Executor
The loop is transformed into an executor loop. Figure 2 outlines the steps involved (the
nature of the distributed array distribution does not affect the executor). The initial
data exchange phase follows the plan established by the inspector. When a processor
obtains copies of non-local distributed array elements, the copies are written into the
processor's hashed cache. Once the communication phase is over, each processor carries
out its computation. Each processor uses locally stored portions of distributed arrays
along with non-local distributed array elements stored in the hashed cache. When the
Insert code before the loop to
  - communicate local data to be referenced by other processors
  - receive non-local data to be referenced locally
Insert code inside the loop to
  - obtain non-local data from the hashed cache
  - store non-local writes to the hashed cache
Insert code after the loop to
  - update off-processor stores
Figure 2: Executor for a single loop nest
computational phase is finished, distributed array elements to be stored off-processor are
obtained from the hashed cache and sent to the appropriate off-processor locations. In the
next section, we describe the details of the PARTI run time primitives that may be invoked
during the inspector and executor phases.
4 PARTI primitives
The PARTI run time primitives can be divided into three categories: primitives that may
be invoked during the inspector phase, the executor phase, or both the inspector and
executor phase. The scheduler primitive, invoked during the inspector phase, determines
the send and receive calls that are needed during the executor phase. These calls may be
to scatter data, gather data or perform reduction operations during the executor phase.
The distributed translation table mentioned earlier is used during the inspector phase. The
hashed cache primitives are used during both the inspector and executor phases. This next
section describes the details of the scheduler, distributed translation table, scatter, gather,
reduction, and hashed cache primitives.
4.1 The Scheduler Primitive
We will use a simple example to illustrate the preprocessing carried out by the scheduler.
Assume we have a distributed array a that is partitioned among three processors in an
irregular fashion as depicted in Figure 3 and a loop computation such that the access
Figure 3: Mapping of a Global Array to Processors
pattern of array a is as shown in Figure 4. Each processor stores its elements of the distributed
array a in a local array a'. Thus processor P1 needs to fetch array element a(3), that is, element
a'(2) of the local array, from processor P2, and processors P2 and P3 need to fetch a(4), that is,
element a'(2) of the local array, from P1. Recall that the task of the scheduler is to anticipate
exactly which send and receive communications must be carried out by each processor. The
scheduler first determines how many messages each processor will have to send and receive
during the data exchange that takes place in the executor phase. To gather this information
each processor needs to know the total number of processors executing the code. Defined
on each processor P_i is an array nmsgs_i. Each processor sets its value of nmsgs_i(j) to 1
if it needs data from processor j, or to 0 if it does not. The scheduler then updates nmsgs
on each processor with the element-by-element sum over all processors,
nmsgs(j) = nmsgs_1(j) + nmsgs_2(j) + ... + nmsgs_P(j). This
operation uses a fan-in tree to find the sums. At the end of the fan-in, on all processors,
the entries of nmsgs are identical. The value nmsgs(j) is equal to the number of messages
that processor P j
must send during the exchange phase. In our example scenario, we see
that at the end of the fan in, the value of nmsgs on each processor will be [2,1,0] (Figure
5). Thus P1 is able to determine that it needs to send data to two other (as yet unspecified)
processors, P2 needs to send data to one processor, and P3 does not need to send any data.
At this point, each processor transmits to the appropriate processor, a list of required
array elements. This list contains the local offsets of the global array elements. In our
Figure 4: Irregular Access Pattern
example, P1 sends a message to P2 requesting element 2 of the local array a', and P2 and P3
send messages to P1 requesting element 2 of the local array a'. Each processor now has
the information required to set up the send and receive messages that are needed to carry
out the scheduled communications (Figure 6).
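As a cross-check of this example, the following self-contained C sketch reproduces the nmsgs computation in a single process; the fan-in tree itself is not modelled, and the data layout and function names are invented for illustration.

#include <stdio.h>

/* needs[p*nproc + j] is 1 if processor p needs data owned by processor j, 0 otherwise.
   On the real machine each processor holds only its own row and the element-by-element
   sum is formed with a fan-in tree; here the sum is simply computed directly.           */
void count_send_messages(const int *needs, int nproc, int *nmsgs) {
    for (int j = 0; j < nproc; j++) {
        nmsgs[j] = 0;
        for (int p = 0; p < nproc; p++)
            nmsgs[j] += needs[p * nproc + j];   /* nmsgs[j] = number of messages P_j must send */
    }
}

int main(void) {
    /* The three-processor example from the text: P1 needs data from P2,
       while P2 and P3 both need data from P1.                            */
    int needs[9] = { 0, 1, 0,      /* row for P1 */
                     1, 0, 0,      /* row for P2 */
                     1, 0, 0 };    /* row for P3 */
    int nmsgs[3];
    count_send_messages(needs, 3, nmsgs);
    printf("nmsgs = [%d,%d,%d]\n", nmsgs[0], nmsgs[1], nmsgs[2]);   /* prints [2,1,0] */
    return 0;
}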
The schedule generated by the scheduler can be reused. A schedule can also be used
to carry out identical patterns of data exchange on several different identically distributed
arrays. The same schedule can be reused to carry out a particular pattern of data exchange
on a single distributed array, and any of the data exchange primitives can make use of a
given schedule.
4.2 Data Exchange Primitives
Data exchangers can be called by each processor to:
- gather data from other processors,
- scatter data to other processors, or
- perform global reduction operations.
These exchangers use state information stored by the scheduler. As described in the previous
section, the scheduler determines the send and receive calls needed to carry out data
Figure 5: Computing the number of Send Messages
exchanges. The scheduler is not given any information about memory locations - it involves
only processors and local indices.
When a processor P calls a data exchanger, it passes to the exchanger routine the
starting address of the first local array element in its memory. We call this address A_P.
The exchanger routines use A_P as the base address to read or write distributed array elements.
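To fix ideas, one possible shape for the state a schedule carries is sketched below in C; the field names are purely hypothetical and do not correspond to the actual PARTI data structures. Because only processors and local indices are recorded, and the base address A_P is supplied at exchange time, the same schedule can drive exchanges on any identically distributed array.

/* Hypothetical sketch of a communication schedule; illustrative only. */
struct comm_schedule {
    int   nsends;            /* how many processors this processor sends to       */
    int  *send_proc;         /* send_proc[s]: destination processor                */
    int  *send_count;        /* send_count[s]: number of elements in that message  */
    int **send_local_index;  /* local indices of the elements to pack              */
    int   nrecvs;            /* symmetric bookkeeping for incoming messages        */
    int  *recv_proc;
    int  *recv_count;
};

/* Packing step of a gather-style exchange: only local indices and the base
   address A_P are needed, which is why one schedule serves many arrays.     */
void pack_message(const struct comm_schedule *s, int which,
                  const double *A_P, double *buffer) {
    for (int e = 0; e < s->send_count[which]; e++)
        buffer[e] = A_P[s->send_local_index[which][e]];
}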
4.3 The Translation Table
We allow users to assign globally numbered distributed array elements to processors in an
irregular pattern, using a distributed translation table. Recall that the scheduler and the
data exchangers deal with indices of arrays that are local to each processor. The translation
primitives, however, assume that distributed array elements have been assigned global
indices.
The procedure build-translation-table constructs the distributed translation table. Each
processor passes to build-translation-table a set of globally numbered indices for which it
will be responsible. The distributed translation table may be striped or blocked across
the processors. With a striped translation table, the translation table entry for global
Figure 6: Final Message Pattern
index i is stored on processor i mod P , where P represents the number of processors. In a
blocked translation table, translation table entries are partitioned into a number of equal
sized ranges of contiguous integers; these ranges are placed in consecutively numbered
processors.
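The two layouts can be summarized by the small C sketch below (illustrative only; it assumes P processors numbered 0 to P-1, N translation table entries, and 1-based global indices for the blocked case).

/* Which processor stores the translation table entry for global index i? */
int striped_entry_owner(int i, int P) {
    return i % P;                        /* the "i mod P" rule described above            */
}
int blocked_entry_owner(int i, int N, int P) {
    int range = (N + P - 1) / P;         /* equal-sized ranges of contiguous integers      */
    return (i - 1) / range;              /* consecutive ranges on consecutive processors   */
}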
Dereference accesses the distributed translation table constructed in
build-translation-table. For a given distributed array, dereference is passed a set of global
indices that need to be accessed in the distributed memory. Dereference returns the processors
and memory locations where the specified global indices are stored.
We will illustrate these primitives using a simple two-processor example where Processor P1
is assigned indices 1 and 4, and Processor P2 is assigned indices 2 and 3. In this example,
we assume that the translation table is partitioned between the two processors by blocks.
We depict the translation table data structure in Table 1. Each entry of the translation
table assigns a processor and a local array index to each globally indexed distributed array
element. In our example, translation table information about global indices 1 and 2 is
stored in Processor 1, while information about global indices 3 and 4 is stored in Processor
2.
To continue our example, assume that both processors use the dereference primitive
to find assigned processors and local indices corresponding to particular global distributed
Table 1: Translation Table Entries

              global   assigned   local
              index    processor  index
Processor 1     1          1        1
                2          2        1
Processor 2     3          2        2
                4          1        2

Table 2: Results obtained from Dereference

  processor   global   assigned   local
  number      index    processor  index
      1         1          1        1
      1         3          2        2
      2         2          2        1
      2         3          2        2
      2         4          1        2
array indices. In Table 2 we depict the results obtained when Processor 1 dereferences
global indices 1 and 3, and Processor 2 dereferences global indices 2, 3 and 4.
4.4 The Hashed Cache
The usefulness of the PARTI primitives described in Section 4 can be enhanced by coupling
these primitives with hash tables. The hash table records the numerical value associated
with each distributed array element. The hash table also records the processor and local
index associated with the element.
Dereference uses the hash table to reduce the volume of interprocessor communication.
Recall that dereference returns the processor assignments and the memory locations that
correspond to a given list of distributed array indices. Each distributed array index may
appear several times in lists passed to dereference. The hash table is used to remove these
duplicates.
Lists of off-processor distributed array elements passed to the scheduler may contain
multiple references to the same element. The scheduler uses the hash table to identify
unique off-processor data references.
The data exchange procedures use hash tables to store copies of off-processor distributed
array elements. The gather-exchanger fetches copies of off-processor distributed
array elements and places the values in a hash table. Similarly, the scatter-exchanger obtains
copies of off-processor distributed array elements from a hash table and writes the
values obtained into a specified local array element on a designated processor. Primitives
to support accumulations to non-local memory use hash tables in the same way as the
scatter-exchanger.
PARTI supplies a number of other primitives that support reading from, as well as
writing and accumulating to, hash tables. When off-processor accumulations must be
performed, we first carry out all possible accumulations to copies of distributed array
elements in the hash table, then we perform an accumulation data exchange.
We use a hash function that, for a hashed cache of size 2^k, masks the lower k bits of the
key. The key is formed by concatenating the processor-local index pair that corresponds
to a distributed array reference.
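A C sketch of such a hash function is given below; the exact way PARTI packs the key is not specified here, so the 20-bit split is an assumption made purely for illustration.

/* Hash into a hashed cache with 2^k slots: concatenate the (processor, local index)
   pair into one key and mask its lower k bits.                                       */
unsigned int hash_slot(unsigned int proc, unsigned int local_index, unsigned int k) {
    unsigned int key = (proc << 20) | (local_index & 0xFFFFFu);   /* concatenated pair      */
    return key & ((1u << k) - 1u);                                /* mask the lower k bits  */
}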
4.5 Summary of the PARTI primitives
In this section we summarize the PARTI primitives that we have described and present an
example of how they are used. We consider the following PARTI procedure calls:
ttable = build translation table(distribution, mapping, num elements)
call dereference(ttable id,global indices, processors,local indices,num indices)
call setup hashed-cache(hashed-cache, processors, local indices)
call scheduler(id,n,hashed-cache,local indices,processors)
call gather-exchanger(id,hashed-cache,local-array).
In this example, a processor P arranges to obtain copies of specified off-processor data
elements, and these copies are placed in the hash table hashed-cache.
All processors call the build translation table function with the data mapping. This
function returns a pointer to a structure which stores the data layout. P calls the dereference
function to find the local addresses corresponding to the global indices it requires. The
dereference call returns the processor number and local address corresponding to each of
the global indices. P calls the function setup hashed-cache with the information returned
by dereference to allocate the hashed table. P passes to scheduler a list of off-processor local
array indices. The scheduler will build a schedule that will make it possible for P to obtain
data elements. P will obtain data element i, 1 <= i <= n, from processor processors(i),
local index local indices(i). A previously allocated hash table hashed-cache is used
to eliminate duplicate off-processor indices. In most irregular problems, the data access
pattern in loops is such that the same data point is referenced multiple times. Partitioning
of such loops cause duplicate off-processor references. The scheduler returns an integer id
which will be used by the subsequent call to gather-exchanger.
Each processor then calls gather-exchanger. On each processor, the gather-exchanger
primitive is passed a pointer to the schedule (id), generated by the previous call to the
scheduler, a pointer to the allocated hash table (hashed-cache) and the base address
of its portion of the array local-array. After the execution of the gather-exchanger
call, copies of the off-processor elements from array local-array reside in the hash table
hashed-cache.
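The duplicate-elimination step performed with the hashed cache can be pictured with the simplified, self-contained C sketch below; it is only a stand-in for the actual PARTI hashed cache, with invented names, open addressing, and the assumption that the table has more than n slots.

#include <stdlib.h>

typedef struct { int proc; int lidx; int used; } slot_t;

/* Keep one copy of each (processor, local index) pair; only the unique pairs
   would then be handed to the scheduler.                                      */
int dedup_references(const int *proc, const int *lidx, int n,
                     int *uproc, int *ulidx, int table_bits) {
    int size = 1 << table_bits;            /* assumed larger than n */
    int nunique = 0;
    slot_t *tab = calloc((size_t)size, sizeof(slot_t));
    for (int r = 0; r < n; r++) {
        unsigned int h = ((unsigned)proc[r] * 31u + (unsigned)lidx[r]) & (unsigned)(size - 1);
        while (tab[h].used && !(tab[h].proc == proc[r] && tab[h].lidx == lidx[r]))
            h = (h + 1) & (unsigned)(size - 1);    /* linear probing */
        if (!tab[h].used) {                        /* first occurrence of this pair */
            tab[h].proc = proc[r];
            tab[h].lidx = lidx[r];
            tab[h].used = 1;
            uproc[nunique] = proc[r];
            ulidx[nunique] = lidx[r];
            nunique++;
        }
    }
    free(tab);
    return nunique;
}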
5 The ARF Compiler
The ARF compiler transforms the source program into a single program multiple data
(SPMD) form. Data distribution specifications are used to partition the program and
generate appropriate communication. The compiler incorporates the PARTI primitives to
carry out the computations on each processor efficiently. The kernels presented here have
been coded in Fortran 77, enhanced with ARF data distribution statements, compiled and
run on an iPSC/860. Section 6 presents performance data obtained from both kernels. We
describe a compilation algorithm that is slightly more general than the algorithm actually
used in the ARF compiler. The two algorithms produce equivalent code on the test data
sets.
5.1 Code Generation by the ARF Compiler
This compiler uses distribution specifications to generate code to set up the distributed
translation tables; calls to build translation table are embedded in the sequential code. One
call is generated for each distribution. The translation table pointer for an array is stored
in the symbol table.
If the array is distributed in a regular manner, then the translation table contains a
function, which is evaluated at runtime to find the processor and local index of a particular
datum. If the array is irregularly distributed, for each index both the processor and the
local index are stored explicitly in the distributed translation table.
In order to describe the algorithm used to generate the inspector and executor for a
        do i = 1, n
           x(ia(i)) = y(ib(i))
        end do
Figure 7: Simple Irregular Loop
loop, an s descriptor must be defined:
s descriptor: An s descriptor is a tuple which gives the complete description of a subscript
and consists of the following components:
      sd = (Array, Distribution, Type, List of subscript expressions)
where, for an s descriptor sd,
      Array: the name of the array indexed by the subscript,
      Distribution: identifies how the array is distributed (BLOCK, CYCLIC, IRREGULAR, etc.),
      Type: the type of reference where the subscript expression is used; it can be any one of
      the exchanger types: gather, scatter or accumulation,
      List of subscript expressions: the expressions used to determine the array index. For our
      implementation we assume that only a single dimension is accessed, using the type of
      index functions shown in Section 3.
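For concreteness, an s descriptor can be pictured as the small C record below; the field names and enumerations are invented for illustration and are not the representation used inside the ARF compiler.

/* Illustrative record for an s descriptor; the comments give the component
   numbers sd(1)..sd(4) used in the algorithm that follows.                   */
enum sd_distribution { SD_BLOCK, SD_CYCLIC, SD_IRREGULAR };
enum sd_ref_type     { SD_GATHER, SD_SCATTER, SD_ACCUMULATION };

struct s_descriptor {
    const char          *array_name;      /* sd(1): array indexed by the subscript   */
    enum sd_distribution dist;            /* sd(2): how that array is distributed    */
    enum sd_ref_type     type;            /* sd(3): exchanger type of the reference  */
    const char          *subscript_expr;  /* sd(4): e.g. "ia(i)" or "ib(i)"          */
};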
In Figure 7, arrays x, y, ia and ib are all distributed. The arrays ia and ib are used to
index the arrays x and y respectively. At compile time it is not possible to figure out the
indices of x and y that are accessed because they are dependent on the values stored in
the arrays ia and ib. This data access pattern becomes available at runtime.
For the algorithm, it is assumed that the loops do not have cross-processor loop carried
dependencies. Later in this section we will describe how loops that contain reductions are
handled. First, the basic algorithm to produce the inspector and executor for a given loop
is presented.
For any loop l,
- Find all array references. For the loop in Figure 7, the array references are x(ia(i)) and y(ib(i)).
- Using these references and the subscript expressions, form a list of s descriptors. For the loop
shown in Figure 7 two s descriptors are generated, one for the reference x(ia(i)) and the other
for y(ib(i)).
After generating the list of s descriptors, we are ready to generate the inspector and the executor
code. For each s descriptor sd in the list,
- Generate a declaration statement for a temporary array temp to store the values that will be
assigned to the subscript corresponding to sd, i.e. sd(4), inside l. Note that for the two s
descriptors generated for the example loop, the storing of the reference trace in a temporary
array can be skipped and the arrays ia and ib can be used directly to do the dereferencing.
- Generate a clone of loop l, loop l', before l.
- The body of the loop l' consists of a statement that records into temp each value taken on by
the subscript expression sd(4).
- Generate a call to dereference, passing array temp and the translation table pointer associated
with array sd(1). For the example loop the dereferencing is done with the arrays ia and ib.
- Next generate a call to the scheduler, using the arrays PA and LA that are returned by
dereference, to form the schedule S.
- If the reference type sd(3) is gather, then a call to the gather-exchanger is generated using
schedule S. At runtime this obtains the off-processor data and puts the data in the hash table
H_S. For the example loop the off-processor y values are gathered. If the type is scatter, then
a call to the scatter-exchanger is generated using schedule S. This call to the scatter-exchanger,
at runtime, takes the data from the hash table H_S and sends it to the other processors. For
the example loop the data values from the array x are scattered. If the type is accumulation,
then a call to the scatter-op exchanger is generated using schedule S. This call to the scatter-op
exchanger, at runtime, takes the data from the hash table H_S and accumulates it on the other
processors.
        do i = 1, n
           s1 = ia(i)
           s2 = ib(i)
           x(s1) = y(s2)
        end do
Figure 8: Irregular Loop with Staged Indirect Indexing
- Replace the subscript expression that indexes the array sd(1) inside the loop l by the
temporary array temp.
The ARF compiler was tailored to recognize an idiom that is used to index distributed
arrays in many irregular codes (see for example Figure 8). A programmer assigns an
expression that would have otherwise been used to subscript an array reference to a scalar
s. The scalar s is then used as an array subscript. In this type of indexing pattern, a scalar s is
defined inside a loop and then it is used to index distributed arrays. More precisely,
- A scalar s is defined once each iteration of the loop. The definition of s may be a
function of:
     a. The loop index.
     b. Scalars that are not defined in the loop body.
     c. Arrays indexed by just the loop index.
- s is used to index the distributed dimension of distributed arrays in the loop body.
When one carries out forward substitution, subscript expressions in loops written using this
idiom have the properties outlined in Section 3. Note that forward substitution transforms
the example in Figure 8 to the example in Figure 7.
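A C analogue of the idiom and of its forward-substituted form is sketched below (illustrative only; the array names follow the Figure 7/8 example and 0-based indexing is assumed).

/* The staged-indexing idiom: scalars defined once per iteration subscript
   the distributed arrays.                                                  */
void staged_indexing(double *x, const double *y, const int *ia, const int *ib, int n) {
    for (int i = 0; i < n; i++) {
        int s1 = ia[i];
        int s2 = ib[i];
        x[s1] = y[s2];
    }
}

/* After forward substitution the subscripts have the simple form handled in Section 3. */
void after_forward_substitution(double *x, const double *y, const int *ia, const int *ib, int n) {
    for (int i = 0; i < n; i++)
        x[ia[i]] = y[ib[i]];
}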
5.2 Optimizations
Two main optimizations are performed. The first optimization reduces the scheduling
overhead by identifying sets of distributed array references that can make use of the same
  Optimization Name              Array        Distribution   Subscript Expression   Type
  Common Schedule Elimination    Don't care   Match          Match                  Don't care
  Common Exchanger Elimination   Match        Match          Match                  Match

Table 3: Optimization Patterns
schedule. The second optimization reduces data transfer costs by identifying distributed
array references that can make use of precisely the same exchanger invocation.
These optimizations are carried out by sorting s descriptors into equivalence classes.
Several distributed array references can share the same schedule as long as all arrays in
question are: 1) identically distributed and 2) have matching subscript expressions. A
set of distributed array references can share the same exchanger call if all references have
identical s descriptors. Table 3 summarizes these conditions.
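The two tests of Table 3 can be written as the pair of C comparison functions sketched below (illustrative only; the descriptor is represented with strings and invented field names).

#include <string.h>

struct ref_desc { const char *array, *distribution, *subscript, *type; };

/* Common schedule elimination: distribution and subscript expression must match. */
int can_share_schedule(const struct ref_desc *a, const struct ref_desc *b) {
    return strcmp(a->distribution, b->distribution) == 0 &&
           strcmp(a->subscript,    b->subscript)    == 0;
}

/* Common exchanger elimination: all four components must match. */
int can_share_exchanger(const struct ref_desc *a, const struct ref_desc *b) {
    return can_share_schedule(a, b) &&
           strcmp(a->array, b->array) == 0 &&
           strcmp(a->type,  b->type)  == 0;
}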
5.3 ARF Compiler Examples
In this section we present two examples used to demonstrate how the ARF compiler works.
Section 5.3.1 presents how ARF was used to program a distributed memory block sparse
matrix vector multiply kernel. Section 5.3.2 presents an example from computational fluid
dynamics.
5.3.1 Sparse Block Matrix Vector Multiply
Figure 10 presents an ARF program that carries out a block sparse matrix vector multiply.
This kernel is from an iterative solver produced for a program designed to calculate fluid
flow for geometries defined by an unstructured mesh [40]. The matrix is assumed to be made up of
4 by 4 blocks of non-zero entries. Statements S4 and S5 are loops that sweep over the
non-zero entries in each block. It is assumed that the array partition is passed to the
sparse matrix vector multiply kernel after having been generated elsewhere.
Figure 11 presents the specification of the data decomposition for the sparse block matrix
vector multiplication example written in Fortran D. If Fortran D is used to write the example
the only change to Figure 10 is replacement of statements S1 and S2 with statements S1
through S10 from Figure 11. The array map in Figure 11 specifies the mapping of the data
arrays. Of all the data arrays a single dimension is distributed and the rest are compressed.
In Figure 10 the integer array partition is local to each processor and enumerates a
list of indices assigned to the processor. As mentioned earlier, the current implementation
partitions only one dimension: the last dimension of the array. PARTI primitives, however,
support a broader class of array mappings [6]. Thus partition describes the partitioning
of the last dimension of the arrays declared in statements S1 and S2. The ARF compiler
uses the information in partition to make calls to primitives that initialize the distributed
translation tables. These distributed translation tables are used to describe the mapping
of x, y, cols, ncols and f (statements S1 and S2).
The partitioning of computational work is specified in statement S3 using an on clause.
In this example, the distributed array partition is used to specify the loop iterations to
be carried out on each processor. The reference x(m,cols(j,i)) in S6 may require off-
processor references. ARF consequently generates an inspector to produce a schedule and
a hash table to handle accesses to the distributed array x. A reference to the irregularly
distributed array f occurs in statement S6. Note that distributed array f is irregularly
distributed using array partition and that partition is also used by the on clause
to partition loop iterations in S3. Therefore, it can be deduced that the reference to f
in statement S6 is on-processor; partition specifies how distributed array elements and
loop iterations are to be distributed between processors. A separate partitioning routine
generates partition.
The ARF compiler generates an inspector and an executor to run on each processor.
The code executed on each processor to generate the inspector is shown in Figure 9. The
statement S1 shows the generation of the translation table using the partition array.
Statement S2 shows the dereference call made to figure out the address of the various data
elements. The next two statements in the inspector code generate the data communication
schedule and the hash table structure.
The executor generated by ARF on processor P is depicted in Figure 12. Fortran 90
notation is used where appropriate to enhance readability. Off-processor elements of x are
gathered and placed in hash table H (step I, Figure 12). Values from x are obtained from
H or from local memory (step IIa, Figure 12). Arrays PA and LA are used to distinguish
S1      build translation table using the mapping defined by array partition
S2      call dereference to find processor assignments PA and local indices LA for consecutive
        references to x(m,cols(j,i)), employing T partition
S3      call setup hashed-cache(hashed-cache, PA, LA)
S4      call scheduler(id, n, hashed-cache, LA, PA)
Figure 9: Inspector generated from ARF for Sparse Block Matrix Vector Multiply
local from off-processor array accesses. In step IIb, we accumulate to y. Note that the
declarations in S1 and S3 in Figure 10 allow the compiler to determine that accumulations
to y are local.
5.3.2 The Fluxroe Kernel
This kernel is taken from a program that computes convective fluxes using a method based
on Roe's approximate Riemann solver [41], [42]; it is referred to as the Fluxroe kernel in this paper.
Fluxroe computes the flux across each edge of an unstructured mesh. Fluxroe accesses
elements of array yold, carries out flux calculations and accumulates results to array y.
As was the case in the sparse block matrix vector multiply kernel, four sections of each
array are distributed and accessed in an identical manner. Figure 13 depicts an outline
of the Fluxroe kernel. The indices of the two vertices that comprise edge i are denoted n1 and n2.
To compute the fluxes flux(k) across the ith edge, we access yold(k,n1) and yold(k,n2), for
1 <= k <= 4 (part I, Figure 13). Once the fluxes have been computed, we add the newly computed
flux values flux(k) to y(k,n1) and subtract flux(k) from y(k,n2) (part III, Figure 13). Note that
arrays y and yold are irregularly
distributed using y-partition, and that distributed array node is irregularly distributed
using edge-partition. Since the on clause in the distributed do statement also uses
edge-partition to specify how loop iterations are to be partitioned, no off-processor
references are made to node in part I, Figure 13.
In the inspector, we compute a schedule Sn1 for the off-processor additions to y(k,n1)
(part IIIa, Figure 13), and a different schedule Sn2 for the off-processor subtractions from
S1      distributed irregular using partition real*8 x(4,n), y(4,n), f(4,4,maxcols,n)
S2      distributed irregular using partition integer cols(9,n), ncols(n)
        . initialization of local variables .
S3      distributed do i = 1, n on partition
           do j = 1, ncols(i)
S4            do k = 1, 4
S5               do m = 1, 4
S6                  y(k,i) = y(k,i) + f(k,m,j,i)*x(m,cols(j,i))
                 end do
              end do
           end do
        distributed enddo
Figure 10: ARF Sparse Block Matrix Vector Multiply
S5 ALIGN map with reg
S7 ALIGN f(i,j,k,l) with map(l)
S8 ALIGN ncols(i) with map(i)
Figure 11: Fortran D Data Distribution Statements for Sparse Block Matrix Vector Multiply
I.    call gather-exchanger using schedule S to obtain off-processor elements of x;
      gather-exchanger places the gathered data in hash table H
II.   for all rows i assigned to processor P
         do j = 1, ncols(i)
            do k = 1, 4
IIa.           if (PA(count) == P) then
                  vx(1:4) = x(1:4, LA(count))
               else
                  use PA(count), LA(count) to get vx(1:4) from hash table H
               endif
               do m = 1, 4
IIb.              y(k,i) = y(k,i) + f(k,m,j,i)*vx(m)
               end do
            end do
         end do
Figure 12: Executor generated from ARF for Sparse Block Matrix Vector Multiply
        distributed irregular using y-partition real*8 yold(4,Number-nodes), y(4,Number-nodes)
        distributed irregular using edge-partition integer node(2,Number-edges)
        . initialization of local variables .
        distributed do i = 1, Number-edges on edge-partition
           n1 = node(1,i)
           n2 = node(2,i)
I.         do k = 1, 4
Ia.           Va(k) = yold(k,n1)
Ib.           Vb(k) = yold(k,n2)
           end do
II.        Calculate flux(k) using Va(k), Vb(k)
III.       do k = 1, 4
IIIa.         y(k,n1) = y(k,n1) + flux(k)
IIIb.         y(k,n2) = y(k,n2) - flux(k)
           end do
        distributed enddo
Figure 13: ARF Kernel From Riemann Solver
S1      build translation table using the mapping defined by array y-partition
S2      call dereference to find processor assignments PAn1 and local indices LAn1 for consecutive
        references to y(k,n1), employing T y-partition
S3      call dereference to find processor assignments PAn2 and local indices LAn2 for consecutive
        references to y(k,n2), employing T y-partition
S4      call setup hashed-cache(hashed-cache-n1, PAn1, LAn1)
S5      call setup hashed-cache(hashed-cache-n2, PAn2, LAn2)
S6      call scheduler(id-n1, n, hashed-cache-n1, LAn1, PAn1)
S7      call scheduler(id-n2, n, hashed-cache-n2, LAn2, PAn2)
Figure 14: Inspector generated from ARF for Fluxroe Kernel
y(k,n2) (part IIIb, Figure 13). When parallelized, Fluxroe reads, as well as accumulates,
to off-processor distributed array locations. Any of the data exchange primitives can use
the same schedule. Schedule Sn1 can be used to gather off-processor references from yold(k,n1)
(part Ia, Figure 13), and schedule Sn2 can be used to gather off-processor references
from yold(k,n2) (part Ib, Figure 13).
The inspector code generated by the ARF compiler for the Fluxroe kernel is shown in Figure 14.
Statement S1 shows the call to the build translation table function to store the information about
how the array y is partitioned. Statements S2 and S3 are calls to the dereference function to find
the addresses of the various references to the y array. Both these dereference calls use the
translation table set up in statement S1. Statements S4 and S5 generate the hash table structures.
The last two statements in the code fragment show the building of the communication schedules.
Figure 15 outlines the executor produced by ARF on processor P. In Figure 15, Fortran 90
notation is used where appropriate to enhance readability. In steps Ia and Ib, two sets of
off-processor elements of yold are gathered using schedules Sn1 and Sn2. In
step II the appropriate elements of yold are accessed either from local memory or from
the appropriate hash table; and in step III yold values are used to calculate fluxes. If the
newly computed fluxes are to be accumulated to a local element of distributed array y, the
appropriate addition or subtraction is carried out at once ( steps IVa and IVc, Figure 15).
When a flux must be accumulated to an off-processor element of y, accumulate the flux to a
copy of y stored in a hash table (steps IVb and IVd, Figure 15). When all fluxes have been
calculated and all local accumulations completed, call the scatter-add and scatter-subtract
exchangers. These exchangers carry out the needed off-processor accumulations.
The current version of the ARF compiler attempts to minimize the number of schedules
to be computed. A single schedule for all off-processor yold data accesses could have been
produced. Computing a single schedule for all references to yold will lead to a more
efficient executor at the cost of a more expensive inspector.
5.4 Memory Utilization
This section gives an overview of some of the memory requirements exacted by the methods
described here and suggests ways in which these requirements can be reduced. Many sparse
and unstructured programs use large integer arrays to determine
reference patterns. In this respect, the kernels depicted here are typical. In Figure 10,
a 9n element integer array cols is used for this purpose, while in Figure 13 an integer array node
of size 2 x Number-edges is employed. The executors depicted in Figure 12 and Figure 15 replace
cols and node with local arrays that store the processor assignments and the local indices for
references to irregularly distributed arrays. In the kernel of Figure 10, the sum of the number of
elements used on all processors to store both processor assignments and local indices is no larger
than 18n. In Figure 13 the parallelized code uses a total of 4 x Number-edges elements.
The amount of additional storage needed for the parallelized code can be reduced in
the following simple manner. The iterations I of a loop are divided into two disjoint sets.
The first set of iterations is I_local, in which all memory references are to locally stored array
elements. The second set is I_off-processor, in which each iteration contains some off-processor
distributed array reference. In this case, processor assignments need to be listed only for the loop
iterations in I_off-processor. Since it is frequently possible to map problems so
that most memory references are local to a processor, a substantial memory savings results.
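The iteration splitting just described is sketched below in C (illustrative only; for simplicity each iteration is assumed to make at most one potentially non-local reference, whose owner would in practice come from the dereference results).

/* Divide the iterations into I_local and I_off-processor, keeping processor
   assignments only for the off-processor set.                                */
void split_iterations(const int *owner, int niter, int me,
                      int *local_iters, int *nlocal,
                      int *off_iters, int *off_owner, int *noff) {
    *nlocal = 0;
    *noff   = 0;
    for (int i = 0; i < niter; i++) {
        if (owner[i] == me) {
            local_iters[(*nlocal)++] = i;    /* no extra bookkeeping needed              */
        } else {
            off_iters[*noff] = i;            /* remember the iteration and ...           */
            off_owner[*noff] = owner[i];     /* ... where its off-processor datum lives  */
            (*noff)++;
        }
    }
}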
The schemes described thus far would use very large quantities of extra memory when
attempting to handle a loop in which a small number of distributed array elements are
accessed many times. For instance, consider the following loop where f is a function
defined so that 1 <= f(i) <= 2 for any i.
Ia.   call gather-exchanger using schedule Sn1 to obtain the first set of off-processor elements of yold;
      gather-exchanger places the data in hash table Hn1
Ib.   call gather-exchanger using schedule Sn2 to obtain the second set of off-processor elements of yold;
      gather-exchanger places the data in hash table Hn2
II.   for edges i assigned to processor P:
      do i = 1, Number of edges assigned to P
         if (PAn1(count) == P) then
            va(1:4) = yold(1:4, LAn1(count))
         else
            get va(1:4) from hash table Hn1
         endif
         if (PAn2(count) == P) then
            vb(1:4) = yold(1:4, LAn2(count))
         else
            get vb(1:4) from hash table Hn2
         endif
III.     calculate fluxes flux(1:4) using va(1:4) and vb(1:4)
IV.      if (PAn1(count) == P) then
IVa.        y(1:4, LAn1(count)) = y(1:4, LAn1(count)) + flux(1:4)
         else
IVb.        accumulate flux(1:4) to hash table Hn1
         endif
         if (PAn2(count) == P) then
IVc.        y(1:4, LAn2(count)) = y(1:4, LAn2(count)) - flux(1:4)
         else
IVd.        accumulate flux(1:4) to hash table Hn2
         endif
      end do
Va.   call scatter-add exchanger using schedule Sn1 and hash table Hn1
Vb.   call scatter-subtract exchanger using schedule Sn2 and hash table Hn2
Figure 15: Executor generated from ARF for Fluxroe Kernel
      distributed irregular partition y
      do i = 1, n
         ... = y(f(i)) ...
      end do
The reference pattern of distributed array y is determined by f. At most two distinct
elements of y are referenced in the loop. Loops of this sort can be handled by using a hash
table to store processor and local index assignments for each distinct memory reference. In
this example, each processor would store processor and local index assignments for no more
than two references to distributed array y. There is a performance penalty for using a hash
table to find processor and local index assignments for distributed array elements. After
examining a variety of sparse and unstructured codes, it was decided not to implement the
method described in this section in the ARF compiler. See the analysis in [30] for the time
and space tradeoffs outlined in this section.
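The bookkeeping for this case can be made concrete with a small sketch. The following C fragment is our own illustration (it is not code from the ARF compiler or the PARTI library, and all names in it are hypothetical); it records, for each distinct global index encountered, the owning processor and the local index on that processor in an open-addressing hash table.

    #include <stdlib.h>

    /* Hypothetical record for one off-processor reference:        */
    /* the owning processor and the local index on that processor. */
    typedef struct { long global; int owner; int local; int used; } entry_t;
    typedef struct { entry_t *slots; size_t cap; } ref_table_t;

    ref_table_t *table_create(size_t cap) {
        ref_table_t *t = malloc(sizeof *t);
        t->slots = calloc(cap, sizeof *t->slots);
        t->cap = cap;
        return t;
    }

    /* Look up (or insert) the entry for global index g.  Linear probing; */
    /* the table is sized so that it never fills in this sketch.          */
    entry_t *table_find(ref_table_t *t, long g, int owner, int local) {
        size_t h = (size_t)g % t->cap;
        while (t->slots[h].used && t->slots[h].global != g)
            h = (h + 1) % t->cap;
        if (!t->slots[h].used)
            t->slots[h] = (entry_t){ g, owner, local, 1 };
        return &t->slots[h];
    }

In the y(f(i)) loop above, each processor would call table_find once per reference but would end up storing at most two distinct entries, no matter how many iterations the loop executes.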
6 Experimental Results
This section presents a range of performance data that summarizes the effects of preprocessing
on measures of overall efficiency. Also discussed are the performance effects of problem
irregularity and partitioning. The computational experiments employed the Fluxroe kernel
and the block sparse matrix vector multiply kernel. Both kernels were coded in ARF;
the parallelized benchmark numbers were obtained from programs generated by the ARF
compiler. Note that the syntax accepted by the ARF compiler differs in minor ways from
what was presented in previous sections.
The experiments described in this paper used either a 32 processor iPSC/860 machine
located at ICASE, NASA Langley Research Center or a 128 processor iPSC/860 machine
located at Oak Ridge National Laboratories. Each processor had 8 megabytes of memory.
The Greenhill 1.8.5 Beta version C compiler was used to generate code for the 80860
processors.
6.1 Unstructured Mesh Data
Input data from a variety of unstructured meshes were used, including actual unstructured
meshes obtained from aerodynamic simulations and synthetically generated meshes.
Unstructured Meshes from Aerodynamics: Two unstructured meshes generated
from aerodynamic simulations were used.
Mesh A: A 21,672 element mesh generated to carry out an aerodynamic simulation
involving a multi-element airfoil in a landing configuration [28]. This
mesh has 11,143 points.
Mesh B: A 37,741 element mesh generated to simulate a 4.2 % circular arc
airfoil in a channel [14]. This mesh has 19,155 points.
Each mesh point is associated with an (x, y) coordinate in a physical domain. Domain
information was used to partition the mesh in three different ways: into strips, with the
orthogonal binary dissection algorithm [5], [13], and with another mesh partitioning
algorithm, jagged partitioning [38]. The meshes are partitioned sequentially, and mapping
arrays are generated for distributing the data structures.
Synthetic Mesh from Templates: A finite difference template links each point to K other
points in a square two dimensional mesh. This connectivity pattern is distorted
incrementally: random edges are introduced subject to the constraint that, in the new
mesh, each point still requires information from K other mesh points.
This mesh generator makes the following assumptions:
1. The problem domain consists of a 2-dimensional square mesh of N points,
2. Each point is initially connected to K neighbors determined by a finite difference
template,
3. With probability q, each mesh link is replaced by a link to a randomly chosen
mesh point.
Note that when q is equal to 0.0, no mesh links are modified and no changes are
introduced. When q is equal to 1.0, a random graph is generated. Two templates
are used. One template connects each point to its four nearest neighbors (K=4); the
other connects each point both to its four nearest neighbors and to each
of its four diagonal neighbors (K=8). The K=4 template is referred to as a five
point template and the K=8 template as a nine point template. In the experiments
described in this section, a 256 by 256 point mesh was employed.
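A minimal sketch of such a generator is given below. This is our own illustration rather than the generator actually used in the experiments; in particular, the flat adjacency array, the rand()-based sampling, and the wrap-around treatment of mesh boundaries are assumptions made to keep the sketch short.

    #include <stdlib.h>

    /* Build a distorted template mesh of side*side points with K links per  */
    /* point (K = 4 for the five point template, K = 8 for the nine point    */
    /* template).  adj[i*K + k] holds the k-th point that point i needs data */
    /* from.  With probability q a template link is replaced by a link to a  */
    /* randomly chosen mesh point, so each point still has exactly K links.  */
    void build_synthetic_mesh(int side, int K, double q, int *adj) {
        int N = side * side;
        int dx[8] = { 1, -1, 0, 0, 1, 1, -1, -1 };
        int dy[8] = { 0, 0, 1, -1, 1, -1, 1, -1 };

        for (int i = 0; i < N; i++) {
            int x = i % side, y = i / side;
            for (int k = 0; k < K; k++) {
                int nx = (x + dx[k] + side) % side;   /* assumed wrap-around */
                int ny = (y + dy[k] + side) % side;
                int target = ny * side + nx;
                if ((double)rand() / RAND_MAX < q)
                    target = rand() % N;              /* distorted link      */
                adj[i * K + k] = target;
            }
        }
    }

Calling build_synthetic_mesh(256, 4, 0.0, adj) reproduces the undistorted five point template, while q = 1.0 degenerates to a random graph, as noted above.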
6.2 Overall Performance
Data is presented to give an overview of the performance obtained on the iPSC/860 from the
ARF compiler output. A block distributed translation table was used. Table 4 presents
a) the inspector time: time required to carry out the inspector preprocessing phase, b)
computation time: the time required to perform computations in the iterative portion of
the program, and c) the communication time: the time required to exchange messages
within the iterative portion of the program. The inspector time includes the time required
to set up the needed distributed translation table as well as the time required to access
the distributed translation table when carrying out preprocessing. Unstructured Meshes A
and B were partitioned using orthogonal binary dissection. In these experiments, the ratio
of the time required to carry out the inspector to the time required for a single iteration
(computation time plus communication time) ranged from a factor of 0.7 to a factor of 3.6.
Most of the preprocessing time represents set up and use of the distributed translation
table. For instance, consider the block matrix vector multiply on 64 processors using
the 21,672 element mesh. The total preprocessing cost was 122 milliseconds, most of which
represents work related to the translation table. Parallel efficiency for a given number of
processors P is defined as the sequential time divided by the product of P and the execution
time on P processors. The sequential time was measured using a separate sequential version
of each kernel run on a single node of the iPSC/860. The algorithm of the sequential code
was the same as that of the parallel code. Table 4, under
the column single sweep efficiency, depicts the parallel efficiencies that would be obtained
if the kernel had to be preprocessed each time the calculations were carried out. In reality,
preprocessing time can be amortized over multiple mesh sweeps. If the time required to
preprocess the problem is neglected when computing parallel efficiencies, a second set of
parallel efficiency measurements, the executor efficiency, is obtained; it is also presented in
Table 4. The executor efficiencies for 64 processors ranged from 0.48 to 0.59,
while the single sweep efficiencies ranged from 0.10 to 0.17.
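In our notation (T_seq is the single node time, and T_insp, T_comp, and T_comm are the per-sweep inspector, computation, and communication times on P processors), the two efficiency measures reported in Table 4 can be written as

    E_sweep(P) = T_seq / (P * (T_insp + T_comp + T_comm)),
    E_exec(P)  = T_seq / (P * (T_comp + T_comm)).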
In the experiments depicted in Table 4, computing time is at least a factor of 2 greater
than the communication time. The executor efficiencies are affected by the fact that the
computations in the parallelized codes are carried out less efficiently than those in the
sequential program. The parallel code spends time accessing the hashed cache. It also
needs to perform more indirections than the sequential program.
Table 4: Performance on different numbers of processors (columns: nprocs, inspector time (ms), computation time (ms), communication time (ms), single sweep efficiency, executor efficiency), reported separately for the sparse block matrix vector multiply on Mesh A and on Mesh B.
Table 5 summarizes the performance of the Fluxroe kernel for meshes with varying
degrees of regularity and for varying mesh mappings. This experiment was conducted using
32 processors. Table 5 depicts synthetic meshes derived from 5 and 9 point stencils with
probability of edge move q equal to either 0.0 or 0.4. These meshes were mapped by 1-D
strips or by 2-D blocks. As one might expect for the synthetic meshes, the communication
costs increase dramatically with increasing q. These dramatic increases are present because
both the volume of communication required and the number of messages sent per node
are much higher for large q. Preprocessing costs also increased with q, but while the
communication costs went up by at least a factor of 16, preprocessing costs went up by
at most a factor of 1.8.
Table 5 also summarizes results from Meshes A and B, which were partitioned in three ways:
strips, the orthogonal binary dissection algorithm, and jagged partitioning. Both binary
dissection and the jagged partitioning algorithm break the domain into two dimensional
rectangular regions; the two methods produce very similar performance results.
Table 5: Performance on 32 processors with different meshes

                                                   inspector  comp      comm      single sweep  executor
                                                   time(ms)   time(ms)  time(ms)  efficiency    efficiency
5 point template synthetic mesh partitioned into strips
  q=0.4                                            310        293       361       0.25          0.37
5 point template synthetic mesh partitioned into 2-D block
  q=0.4                                            463        291       319       0.23          0.40
9 point template synthetic mesh partitioned into strips
  q=0.4                                            385        620       530       0.31          0.42
9 point template synthetic mesh partitioned into 2-D block
  q=0.4                                            595        624       527       0.28          0.42
Mesh A
  binary                                           134        80        22        0.24          0.57
  jagged                                           135        81        22        0.24          0.56
  strips                                           148        83        26        0.22          0.53
Mesh B
  binary                                           191        136       23        0.28          0.61
  jagged                                           186        137       21        0.28          0.62
  strips                                           219        149       31        0.24          0.54
6.3 Breakdown of Inspector Overhead
Table 6 summarizes the cost of dereferencing and scheduling the Fluxroe kernel on different
numbers of processors using a blocked translation table. A five point template was
used and the mesh was partitioned either into 1-D strips or into 2-D blocks. When the
mesh is partitioned into strips, dereference involves mostly local data accesses since the
domain data and the translation table are identically partitioned. When strip partitioning
is used, translation table initialization does not involve communication. The measurements
presented in Table 6 are defined in the following manner:
- Executor time is the computation and communication time required to execute the kernel;
  it does not include the time required for preprocessing,
- Table initialization time is the time needed to initialize the distributed translation table,
- Dereference time is the time taken by the dereference PARTI primitive, and
- Scheduler time is the time required to produce the communications schedule once the
  required processor locations and local indices have been found by dereference.
The majority of the costs incurred by the inspector are due to the translation table
initialization and dereference (see Table 6). For instance, consider the case where 64
processors are used to carry out a sweep over a 2-D block partitioned mesh with a 5 point
template. The translation table initialization and dereference together require 183 % of
the executor time while the generation of the schedule requires only 12 % of the executor
time.
In these problems communication costs comprise a small fraction of the executor time;
consequently, the method used to partition the domain does not have a significant
impact on executor time. In Table 6, the costs of translation table initialization and
of dereference are both strongly dependent on how the domain is partitioned. 2-D block
partitioning leads to higher translation table related costs. This is almost certainly due to
the increased communication requirements needed for translation table initialization and
dereference. Strip partitioning per se does not necessarily lead to low translation table related
costs. In Table 5 it is noted that strip partitioning actually leads to higher inspector
costs for both Mesh A and Mesh B. The translation table is partitioned so that blocks of
contiguously numbered indices are assigned to each processor. However, in Mesh A and
Mesh B, mesh points are not numbered in a regular fashion, so the indices corresponding
to a domain strip are not contiguously numbered.
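To make the distinction concrete, the small C fragment below (our own sketch with hypothetical names, not the PARTI implementation) shows how the home processor and local offset of translation-table entry g would be computed under the two distributions, for N array elements on nprocs processors.

    /* Blocked distribution: contiguous blocks of indices per processor. */
    void blocked_home(long g, long N, int nprocs, int *owner, long *offset) {
        long block = (N + nprocs - 1) / nprocs;    /* ceiling of N/nprocs */
        *owner  = (int)(g / block);
        *offset = g % block;
    }

    /* Striped (cyclic) distribution: index g lives on processor g mod nprocs. */
    void striped_home(long g, int nprocs, int *owner, long *offset) {
        *owner  = (int)(g % nprocs);
        *offset = g / nprocs;
    }

With a blocked table, contiguously numbered indices land on one processor, which is why the regularly numbered synthetic meshes resolve most dereference requests locally; with a striped table, consecutive indices scatter across all processors.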
Table 6: Cost of dereferencing and scheduling on different numbers of processors (columns: nprocs, executor time, table initialization time, dereference time, schedule time), reported for the 5 point template synthetic mesh partitioned into strips and into 2-D blocks.
6.4 Cost of Translation Table
Section 4.3 discussed two straightforward ways to map a distributed translation table
onto processors. Consider the question of how to distribute the translation table so as to
minimize costs associated with translation table access. Table 7 compares the time required
to carry out dereference on blocked and striped translation tables by depicting:
- the time required to carry out a particular call to dereference,
- the average number of non-local accesses to table entries required by dereference, and
- the average number of non-local processors accessed during the call to dereference.
When the results for unstructured Meshes A and B are examined, no consistent performance
difference in the cost required to dereference a blocked or a striped translation
table is seen. Similar numbers of off-processor table entries need to be accessed for either
translation table distribution. Blocked translation tables lead to superior performance
when synthetic meshes are used. For the reasons described in Section 6.3, particularly good
results are obtained when a strip partition of the mesh is combined with a blocked translation table. It
is of interest that the blocked translation table also proved to be superior when synthetic
meshes partitioned in 2-D blocks are used.
Table 7: Cost of dereference on 32 processors

                           Indirect - Blocked               Indirect - Striped
Problem                    Time(ms)  Nonlocal  Nonlocal     Time(ms)  Nonlocal  Nonlocal
                                     Data      Proc                   Data      Proc
Synthetic: 5 point template, strip partition
  q=0.2                    157       1045      17           365       2862      31
  q=0.4                    218       1825      17           368       3350      31
Synthetic: 5 point template, 2-D block partition
  q=0.4
Mesh A
  binary                   97        768       21           96        743       31
  jagged
  strips
Mesh B
  binary
  jagged                   139       1293      24           130       1263      31
  strips                   159       1519      31           172       1513      31
6.5 Scheduler and Data Exchanger Performance
To quantify the communications costs incurred by the PARTI scheduler and data exchange
primitives, the times required to carry out the scheduler, gather-exchanger, and
scatter-exchanger procedure calls were measured and compared to hand-coded
iPSC/860 sends and receives. The sends and receives communicated the same
amount of data as did the PARTI procedures. An experiment was conducted in which two
processors repeatedly exchanged W single precision words of information. The exchange
was carried out using gather-exchangers, scatter-exchangers and the iPSC/860 supplied
send and receive calls. Table 8 summarizes the results of these experiments. Presented are:
the time (in milliseconds) required to carry out the requisite data exchange using send and
receive messages; and the ratios between the times taken by the scheduler and gather-exchanger
PARTI primitive calls and the time taken by the equivalent send and receive calls. The
scatter-exchanger calls were also timed; the results were virtually identical to those
of the corresponding gather-exchanger calls.
The gather-exchanger took no more than 20% longer than the explicitly coded send/receive
pairs moving W words of information between two processors. The additional overhead
required for the scheduler to carry out the data exchange was a factor of 2.1 to 1.0 times the
Table 8: Overheads for PARTI Scheduler and Gather-Exchanger Primitives

Number of Data   Send/Receive   Gather-Exchanger   Scheduler
Elements         Time (ms)      (ratio)            (ratio)
 400             1.0            1.1                1.4
 900             1.8            1.1                1.3
1600             2.9            1.2                1.3
                 4.3            1.2                1.1
cost of using explicitly coded send/receive pairs to move W words.
7 Relation to Other Work
Programs designed to carry out a range of irregular computations, [2, 26, 4, 43, 13], including
sparse direct and iterative methods, require many of the optimizations described
in this paper.
Several researchers have developed programming environments that target particular
classes of irregular or adaptive problems. Williams [43] describes a programming environment
(DIME) for calculations with unstructured triangular meshes using distributed
memory machines. Baden [3] has developed a programming environment targeting particle
computations, which provides facilities that support dynamic load balancing. One
of the key distinctions between the present work and that of Baden and Williams is that
PARTI runtime support is designed to be used by compilers to handle parallel loops with
irregular array references. In addition, it can be used by programmers in a wide range of
applications. By contrast, programming environments such as those described by Baden
and Williams are highly customized for use in specific application areas.
There are a variety of compilers targeting distributed memory multiprocessors [44, 8,
33, 31, 1, 39]. With the exception of the Kali project [22], and the PARTI work described
here and in [36, 29, 37], these compilers do not attempt to deal with loops having irregular
references efficiently.
The work described in this paper is also related to schemes to carry out distributed
memory runtime parallelization [29, 27]. These schemes are more ambitious than those
described in this paper, which include mechanisms to carry out runtime partitioning and
parallelization. Chen [27] suggests an optimization similar to one described here. She proposed
reducing scheduling overheads by identifying distributed array references for which
one can employ identical schedules. At this point, only hand-coded timing experiments
have been carried out to study the proposed schemes [29, 27].
The prototype compiler described here is able to generate code capable of efficiently
handling kernels with parallel loops containing irregular array references. The procedures
that carry out runtime optimizations are coupled to a distributed memory compiler via
a set of compiler transformations. The compiler described and tested in this paper is
qualitatively different from the efforts cited above in a number of important respects.
Mechanisms have been developed and demonstrated that support irregularly distributed
arrays, making it possible to map data and computational work in an arbitrary manner.
Because irregularly distributed arrays can be supported, it was possible to compare the
performance effects of different problem mappings. Support for arbitrary distributions
was proposed [29, 37] but this is the first implementation of a compiler-based distributed
translation table mechanism for irregular scientific problems.
Many unstructured NASA codes must carry out data accumulations to off-processor
memory locations. One of the demonstration kernels addressed this, and the primitives
and the compiler were designed to handle this situation. This compiler effort is unique in
its ability to carry out irregular patterns of off-processor data accumulations efficiently.
These primitives are augmented with a hash table designed to eliminate duplicate data
accesses. In addition, the hash table manages copies of off-processor array elements. Other
researchers have used different data structures for management of off-processor data copies
[22].
8 Conclusion
This paper described and experimentally characterized a compiler and runtime support
procedures which embody methods that are capable of handling an important class of
irregular problems that arise in scientific computing. After examining a number of complete
NASA codes, two kernels were extracted to demonstrate the methods. Both of these kernels
involved computations over unstructured meshes. Both kernels were coded in ARF, a
dialect of Fortran, and the ARF compiler generated code to run on the nodes of the iPSC/860. Detailed
timings were carried out on both kernels using unstructured meshes from aerodynamics,
along with meshes that were generated by using random numbers to incrementally distort
matrices obtained from a fixed finite difference template. This benchmarking suite stressed
the communications capabilities of the iPSC/860 and the PARTI primitives in a variety of
ways.
In the experiments reported in Section 6.2, the ratio of the time required to carry
out all preprocessing to the time required for a single iteration of either kernel ranged
from a factor of 0.7 to a factor of 3.6. Section 6.3 showed that the majority of the preprocessing
costs arose from the need to support irregularly distributed arrays. Section 6.5 quantified the
performance of the scheduler and data exchanger PARTI primitives. The
data-exchangers demonstrated a maximum increase of 20% over the analogous send and
receive calls provided by Intel.
One of the virtues of the layered approach to distributed compiler design is the capture
of a set of critical optimizations in the runtime support primitives. These primitives, and
hence these optimizations, can be migrated to a variety of compilers targeting distributed
memory multiprocessors. It is intended to implement these primitives in the ParaScope
parallel programming environment [17]. In addition, PARTI primitives can, and are, being
used directly by programmers in applications codes [6], [10]. The applications described
in [10] were particularly noteworthy. These applications were explicit and multigrid unstructured
Euler solvers which were employed to compute flows over full aircraft configura-
tions. The explicit unstructured Euler solver achieved a computational rate of 1.5 Gflops
on 512 processors of the Intel Touchstone Delta. The multigrid unstructured Euler solver
achieved a computational rate of 1.2 Gflops on 512 Delta processors. In both cases, the cost
of the inspector's preprocessing was approximately equal to the cost of a single iteration
of the Euler solver, amounting to less than 3 % of the total time.
Most of the complexity in this system is in the PARTI procedures. The PARTI procedures
have been developed so that transformations needed to embed the appropriate
primitives can be implemented with relative ease in distributed memory compilers. The
primitives used to implement the runtime support include communications procedures designed
to support irregular patterns of distributed array access, and procedures to find the
location of irregularly mapped distributed array data using distributed translation tables.
Primitives also support the maintenance of hash tables to store copies of off-processor data.
9 Acknowledgements
We would like to thank Harry Jordan, Bob Voigt and Donna Meisel for their careful editing
of this manuscript. We would also like to thank the Advanced Computing Laboratory at
Oak Ridge National Laboratories and NAS at NASA Ames for providing access to the 128
node Intel iPSC/860 hypercubes.
We wish to thank Dimitri Mavriplis and David Whitaker for supplying unstructured
meshes, and David Whitaker and P. Venkatkrishnan for access to their codes.
--R
PANDORE: A system to manage data distribution
Programming abstractions for dynamically partitioning and coordinating localized scientific calculations running on multiprocessors
An experimental study of methods for parallel preconditioned Krylov methods
A partitioning strategy for pdes across multi- processors
Execution time support for adaptive scientific algorithms on distributed memory architectures
A design methodology for synthesizing parallel algorithms and architec- tures
The Paragon multicomputer environment: A first implementation
CM Fortran reference manual
The design and implementation of a parallel unstructured Euler solver using software primitives
Slicing analysis and indirect access to distributed arrays
Fortran D language specification
Solving Problems on Concurrent Computers
Numerical methods for the computation of inviscid transonic flows with shock waves - a gamm workshop
Updating distributed variables in local computations
High Performance Fortran Forum
Compiler support for machine-independent parallel programming in Fortran D
Compiler optimizations for Fortran D on MIMD distributed-memory machines
Compiling Programs for Nonshared Memory Machines
Compiling global name-space programs for distributed execution
Supporting shared data structures on distributed memory architectures
Generating explicit communication from shared-memory program references
Computational models and task scheduling for parallel sparse Cholesky factorization
Parallelizing loops with indirect array references or pointers
Multigrid solution of the two-dimensional Euler equations on unstructured triangular meshes
Principles of runtime support for parallel processors
A scheme for supporting automatic data migration on multicomputers
Process decomposition through locality of reference
An overview of Dino - a new language for numerical computation on distributed memory multiprocessors
Expressing complex parallel algorithms in Dino
Massive parallelism and process contraction in Dino
The DINO parallel programming language
the crystal runtime system
Performance effects of irregular communications patterns on massively parallel multiprocessors
A Parallelizing Compiler for Distributed Memory Parallel Computers
Parallel preconditioned iterative methods for the compressible navier stokes equations
Solution algorithms for the two-dimensional Euler equations on unstructured meshes
Distributed irregular finite elements
SUPERB: A tool for semi-automatic MIMD/SIMD parallelization
Vienna Fortran - a language specification
| compiler;data parallel language;distributed memory;irregular problems;parallel computing
627083 | Characterizing the Performance of Algorithms for Lock-Free Objects. | AbstractConcurrent access to shared data objects must be regulated by a concurrency control protocol to ensure correctness. Many concurrency control protocols require that a process set a lock on the data it accesses. Recently, there has been considerable interest in lock-free concurrency control algorithms. Lock-free algorithms offer the potential for better system performance because slow or failed processes do not block fast processes. Process slowdowns can occur due to cache line faults, memory and bus contention, page faults, context switching, NUMA architectures, heterogeneous architectures, or differences in operation execution time. Much work has been done to characterize the performance of locking algorithms, but little has been done to characterize the performance of lock-free algorithms. In this paper, we present a performance model for analyzing lock-free algorithms that studies the effects of slowdowns on performance. We find that lock-free algorithms are better than locking algorithms if the slowdowns are transient, but worse if the slowdowns are permanent. One implication of this result is that lock-free concurrent objects are appropriate for UMA architectures, but NUMA architectures require special protocols. | Introduction
Processes (or tasks, threads, etc.) in a concurrent system often access shared objects to coordinate their
activities, whether performing a user computation or maintaining system resources. We regard a shared
object to be a shared data structure and a set of operations on the data structure (in this paper we don't
allow nested calls or inheritance). The processes that access shared data objects must follow a concurrency
control protocol to ensure correct executions. Concurrent access to shared data is often moderated with
locks. A data item is protected by a lock, and a process must acquire the lock before accessing the data item.
The type of lock that a process requests depends on the nature of the shared data access, and different lock
types have different compatibilities and different priorities. For example, read-only access to a data item
can be granted by the acquisition of a shared lock, while read and write access requires an exclusive lock.
Shared locks are compatible with each other, but an exclusive lock is compatible with no other lock.
Locking protocols for concurrent database access are well-known [10]. In addition, locking protocols for
concurrent access to a wide variety of specialized data structures have been proposed. Examples include
binary search trees [33, 37], AVL trees [15], B-trees [8, 53], priority queues [12, 46, 30] and so on. Shasha
and Goodman [54] have developed a framework for proving the correctness of lock-based concurrent search
structure algorithms.
The analytical tools needed to study the performance of lock-based data structure algorithms have been
established [27, 28, 47]. A general analytical model for modeling the performance of lock-based concurrent
data structure algorithms has been developed [29, 28]. The performance of locking protocols also has been
well studied. Tay, Suri, and Goodman [57], and Ryu and Thomasian [52] have developed analytical models
of the performance of Two-phase Locking variants in database systems.
Herlihy has proposed general methods for implementing non-blocking concurrent objects (i.e., concurrent
data structures) [21]. In a non-blocking object, one of the processes that accesses the object is guaranteed
to make progress in its computation within a finite number of steps. A non-blocking algorithm is fault-
tolerant, since a failed process will not make the object unavailable. In addition, fast operations execute
at the expense of slow operations, which (hopefully) improves the performance of the object. A typical
non-blocking algorithm reads the state of the object, computes its modifications, then attempts to commit
its modification. If no conflicting operation has modified the object, the commit is successful, and the
operation is finished. Otherwise, the operation tries again. The operation typically uses the compare-
and-swap [65, 9, 43] atomic read-modify-write instruction to try to commit its modifications (one work
uses the load-locked/store-conditional instruction [22], and several special architecture that support lock-free
algorithms have been developed [23, 56]). While many additional non-blocking and lock-free algorithms have
been proposed, most have this essential form. Herlihy has also proposed methods for wait-free concurrent
objects, in which every operation is guaranteed of completion within a bounded number of steps. We do not
address the performance of wait-free objects in this paper.
Considerable research on lock-free concurrent algorithms has been done lately [25, 22, 58, 2, 23, 56]. The
researchers who work on lock-free algorithms claim that lock-free algorithms can improve the performance of
concurrent systems because fast operations execute at the expense of slow operations. Process "slowdowns"
can occur due to cache line faults, memory and bus contention, page faults, context switching, NUMA
architectures, heterogeneous architectures, or differences in operation execution time. While some work has
been done to measure the performance of lock-free algorithms [22, 23, 45], the performance of lock-free
algorithms relative to that of blocking algorithms has received little study [45]. In this work, we develop a
performance model of lock-free algorithms. Our model studies the effects of both transient and permanent
slowdowns in the speed of operation execution. We find that lock-free algorithms are better than locking
algorithms if the slowdowns are transient, but worse if the slowdowns are permanent. We extend the
explanatory model to a model that accurately predicts the utilization of the shared object.
2 Algorithms
Herlihy [21] introduced the idea of a non-blocking algorithm for implementing concurrent data structures. A
concurrent algorithm is nonblocking if it is guaranteed that some processor makes progress in its computation
in a finite number of steps. If a process sets a lock and then fails, no process can make progress. Hence, non-blocking
algorithms must avoid conventional locks. Herlihy describes a method for transforming a sequential
implementation of an object into a concurrent, non-blocking implementation. An object is represented by a
pointer to its current instantiation. A process performs an operation on an object by taking a snapshot of
the object, computing the new value of the object in a private but shared workspace (using the sequential
implementation), then committing the update by setting the object pointer to the address of the newly
computed object.
If there is no interference, then the operation should succeed in its commit. If an interfering operation
modified the object, the commit should fail. Since the object is updated by changing the object pointer,
a process should set the object pointer to the address of its updated object only if the object pointer has
the value that the process read in the initial snapshot. This action can be performed atomically by using
the compare-and-swap (CNS) instruction. The CNS instruction is available on the IBM/370, the Cedar, the
BBN, the Motorola 68000 family, and on the Intel 80486. The CNS instruction is equivalent to the atomic
execution of the program in Code 1.
CNS(point, old, new)
object *point, *old, *new {
    if (*point == old) {
        *point := new
        return(success)
    } else
        return(failure)
}

Code 1: The compare-and-swap (CNS) instruction.
A typical non-blocking algorithm has the form of Herlihy's small-object protocol, which is shown in
Code 2. In this paper, we are abstracting away the memory management problems that can result in the
A-B-A problem [26].
object_access(point, [parameters])
object *point {
    object *old_object, *new_object
    while (true) {
        old_object := snapshot(point)
        new_object := serial_update(old_object, [parameters])
        if (CNS(point, old_object, new_object))
            break
    }
}

Code 2: The small-object lock-free protocol.
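As a concrete, simplified illustration of this pattern, the C fragment below implements the snapshot/copy/CNS loop for a single heap-allocated counter using C11 atomics. It is our own sketch rather than code from the cited work; in particular, it ignores the memory-reclamation and A-B-A issues mentioned above, and serial_update is just an increment.

    #include <stdatomic.h>
    #include <stdlib.h>

    typedef struct { int value; } object_t;

    /* Pointer to the current instantiation of the shared object. */
    static _Atomic(object_t *) obj_ptr;

    /* Sequential update applied to a private copy of the object. */
    static object_t *serial_update(const object_t *old_obj) {
        object_t *new_obj = malloc(sizeof *new_obj);
        *new_obj = *old_obj;          /* copy the snapshot           */
        new_obj->value += 1;          /* the "operation" on the copy */
        return new_obj;
    }

    void object_access(void) {
        for (;;) {
            object_t *old_obj = atomic_load(&obj_ptr);          /* snapshot */
            object_t *new_obj = serial_update(old_obj);
            /* Commit succeeds only if no conflicting commit changed obj_ptr. */
            if (atomic_compare_exchange_strong(&obj_ptr, &old_obj, new_obj))
                break;
            free(new_obj);            /* lost the race; recompute and retry */
        }
    }

A full implementation would also have to reclaim the replaced versions safely, which is exactly the memory-management problem the text sets aside.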
One problem with the protocol in Code 2 is that the entire object must be copied, wasting time and
memory. Herlihy also proposed a large object protocol that more efficiently updates a serial object. The
large-object protocol is similar to the shadow-page technique used to atomically update a disk-resident index.
Often, only the modified portions of the object must be copied and replaced. The large-object protocol has
the same essential form as the small-object protocol.
Herlihy's algorithms serialized access to the shared object. Other researchers propose algorithms that
permit concurrent access to a non-blocking object. Stone [55] proposes a queue that permits concurrent
enqueues and dequeues. An enqueuer that puts a record into an empty queue can block dequeuers, so
we categorize the algorithm as lock-free instead of non-blocking. Stone's algorithm has the performance
characteristics of a non-blocking algorithm. Prakash, Lee, and Johnson [44, 45] give an algorithm for a
non-blocking queue that permits concurrent enqueues and dequeues. Their solution is based on classifying
every possible queue configuration into one of a finite number of states. The current state is defined by an
atomic snapshot of the value of the head pointer, the tail pointer, and the next-record pointer of the tail
record (the authors provide a protocol for taking the atomic snapshot). When an operation executes, it
might find the queue in a valid state. In this case, the operation tries to commit its updates with a decisive
instruction (via a compare-and-swap). If the queue is in an invalid state, the operation takes the queue to a
valid state, then starts again. The execution of the PLJ queue is shown in the program in Code 3.
object_access(object_instance, [parameters])
object *object_instance {
    boolean done; obj_state object_state
    done := False
    while (not done) {
        object_state := snapshot(object_instance)
        if (object_state is valid) {
            compute action from object_state
            if (applying the action to object_instance with a decisive instruction succeeds)
                done := True
        } else
            cleanup(object_instance)
    }
}

Code 3: The PLJ concurrent lock-free protocol.
Valois [59] has developed similar non-blocking algorithms for queues, linked lists, and binary search trees.
Herlihy and Moss [25] present non-blocking algorithms for garbage collection. Anderson and Woll [3] present
wait-free algorithms for the union-find problem.
Turek, Shasha, and Prakash [58] present techniques for transforming concurrent objects implemented with
locks into concurrent non-blocking objects. Every operation keeps its 'program' in a publicly available
location. Instead of setting a lock on a record, a process attempts to make the 'lock' field of the record point
to its own program. If the attempt fails, the blocked process executes the program of the process that holds
the lock until the lock is removed. The contention for setting the lock is similar to the phenomena modeled
in this work.
Some researchers have investigated hybrid techniques that are primarily locking, but can force processes
to release their locks when the process experiences a context switch [2, 11]. These methods use non-locking
algorithms to ensure correctness.
Several architectures that support lock-free algorithms have been proposed [56, 23]. The cache coherence
mechanism allows a processor to reserve several words in shared memory, and informs the processor if a
conflict occurs.
3 Processor Slowdowns
Since the claimed advantage of lock-free algorithms is superior performance in spite of processor slowdowns,
we must examine the possible causes of variations in the time to execute an operation.
The first type of processor slowdowns are 'small' slowdowns. Small slowdowns can be caused by cache
line faults, contention for the memory module, and contention for the bus or interconnection network [13].
Another source of small slowdowns lies in the dependence of the execution time of an operation on the data
in the data structure. For example, a priority queue might be implemented as a sorted list. An enqueue is
slow when the list is big, but fast when the list is small. Lock-free algorithms can take advantage of small
slowdowns by giving temporarily fast operations priority over temporarily slow operations. For example,
a lock free algorithm would give preference to dequeue operations when the priority queue is large, and to
enqueue operations when the priority queue is small, permitting a greater overall throughput.
The second type of processor slowdowns are 'large' slowdowns. These slowdowns are caused by page
faults or by context switches in multitasking parallel computers. If the process holds a critical lock and
experiences a context switch, all processes that compete for the lock are delayed until the lock holding
process regains control of its processor. Many researchers have worked on avoiding the problems caused by
long slowdowns. One approach is to delay the context switch of a process while the process holds a lock
[5, 38, 64]. These authors report a large improvement in efficiency in multitasking parallel processors by
avoiding large slowdowns. However, this approach has several drawbacks. It requires a more complex kernel,
it requires a more complex user/kernel interaction, and it allows a user to grab control of the multiprocessor
by having the processes lock "dummy" semaphores. Alemany and Felton [2] and Bershad [11] have proposed
hybrid schemes that are primarily locking, but which force processes to release their locks on a context switch
(using a technique similar to non-locking protocols to ensure correctness). While these schemes avoid the
possibility of a user grabbing processors, they still require additional kernel complexity and a more complex
user interface. In contrast, lock-free algorithms solve the large slowdown problem without operating system
support.
The types of slowdowns that have been discussed in the literature are transient slowdowns. The cause
of the slowdown is eventually resolved, and after that the process executes its operation as fast as all
other processes in the system. Another type of slowdown is a permanent slowdown, in which a process
that is executing an operation on a shared object is always slower than other processes in the system that
access the object. A permanent slowdown can occur because a processor, and hence all processes executing
on it, executes at a slower rate than other processors in the system. The multiprocessor might contain
heterogeneous CPUs, perhaps due to incremental upgrades. The multiprocessor architecture might be a
Non-Uniform Memory Access (NUMA) architecture, in which some processors can access a memory module
faster than others. In a typical NUMA architecture, the globally shared memory is co-located with the
processors. In addition, the topology of the multicomputer is such that some processors are closer together
than others (for example, in a hierarchical bus or a mesh topology). In a NUMA architecture, the shared
object can be accessed quickly by processors that are close to it, but slowly by processors that are far
from it. A process might experience a permanent slowdown while executing an operation because of the
operation itself. Different operations on a shared object might require different times to compute. For
example, Herlihy [22] observed that enqueues into a priority queue experienced discrimination because they
take longer to compute.
In an earlier work [45], we ran several simulation studies to compare the performance of our non-blocking
queue to that of a lock-based implementation under different conditions. We expected that the non-blocking
queue would perform better than the equivalent lock-based queue if the execution times of the operations
varied considerably. In the simulation studies, the operations arrived in a Poisson stream and were assigned
a processor to execute the operation's program. In our first set of experiments, we assigned a fast processor
90% of the time and a slow processor 10% of the time. Thus, we simulated permanent slowdowns. We were
surprised to find that the locking queue has substantially better performance than the non-blocking queue
when the processors experience permanent slowdowns.
In a second set of experiments, all operations are assigned identical processors, but the processors occasionally
become slow. Thus, we simulated transient slowdowns. Under transient slowdowns, the non-blocking
algorithm has substantially better performance than the locking algorithm.
The key observation is that the performance of lock-free algorithms relative to blocking algorithms depends
on the nature of the slowdown that the processes experience. Lock-free algorithms work well when
transient slowdowns occur, but poorly when permanent slowdowns occur. The models that we develop in
this work will explore this phenomenon.
4 Previous Work
Considerable work has been done to analyze the performance of synchronization methods. Many analyses of
synchronization methods have examined the relative performance of shared memory locks. Mellor-Crummey
and Scott [39] present performance measurements to show the good performance of their algorithm relative to
that of some test-and-set and ticket-based algorithms. Agrawal and Cherian [1] present simulation results and
a simple analytical model to explore the performance of adaptive backoff synchronization schemes. Anderson
presents measurement results of the performance of several spin locks, and suggests a new ticket-based
spin lock. Woest and Goodman [61] present simulation results to compare queue-on-lock-bit synchronization
techniques against test-and-set spin locks, and the Mellor-Crummey and Scott lock. Graunke and Thakkar
[18] present performance measurements of test-and-set and ticket based locks.
Other authors have examined particular aspects of synchronization performance. Lim and Agrawal [36]
examine the performance tradeoffs between spinning and blocking. They present analytical models to derive
the best point for a blocked process to switch from spinning to blocking. Glenn, Pryor, Conroy, and Johnson
[16] present analytical models which show that a thrashing phenomenon can occur due to contention for a
synchronization variable. Anderson, Lazowska, and Levy [6] present some simple queuing models of critical
section access to study thread management schemes. Zahoran, Lazowska, and Eager [64] present a variety
of analytical and simulation models to study the interaction of synchronization and scheduling policies in a
multitasking parallel processor.
Previous analytic studies of multiprocessor synchronization do not address the effects of slowdowns on the
performance of shared objects (the work of Zahoran, Lazowska, and Eager [64] uses simulation to study the
effect of scheduling policies). Furthermore, most spin lock algorithms are of an essentially different nature
than lock-free algorithms. In many algorithms (e.g., ticket locks, the MCS lock, QOLB locks), competition
occurs when the lock is free, and afterwards blocked processes cooperate to perform the synchronization. The
lock is granted in an atomic step in test-and-set locks. Hence, the analyses have primarily been queuing
models, or have counted the number of accesses required to obtain the lock. Lock-free algorithms have a
different nature, because a process attempting to perform an operation must complete its operation before
another process performs a conflicting operation. Hence, the synchronization is competitive but non-atomic.
Only two synchronization algorithms have a similar form. In Lamport's "Fast Mutual Exclusion" algorithm
[35], processes compete to obtain a lock using only read and write operations. However, the algorithm is
not used in practice and its performance has not been studied by analytical or simulation models. The
test-and-test-and-set lock [50] is similar to lock-free algorithms in that blocked processors receive a signal
that the lock is free (a cache line invalidation), then compete for the lock. The effect of slowdowns on
the test-and-test-and-set lock has never been analyzed, though the methods described in this paper can be
applied. However, the result is not likely to be of great interest because the test-and-test-and-set lock is
not widely used, and the discrimination due to a NUMA architecture is not likely to have a great effect on
system performance.
Considerable work has been done to analyze the performance of concurrent data structure algorithms
[29, 28]. These techniques assume that the algorithm is lock-based, and concentrate on analyzing waiting
times in the lock queues. Since there is no queuing in lock-free algorithms, these techniques do not apply.
Researchers [22] have observed that non-blocking data structure algorithms are similar to optimistic
concurrency control (OCC) in databases [10]. Optimistic concurrency control is so named because it makes
the optimistic assumption that data conflicts are rare. A transaction accesses data without regard to possible
conflicts. If a data conflict does occur, the transaction is aborted and restarted. Given the relationship
between OCC and non-locking algorithms, we can try to apply performance models developed to analyze
OCC to analyze non-locking algorithms.
Menasce and Nakanishi [40] present a Markov chain model of OCC in which aborted transactions leave,
then reenter the transaction processing system as new transactions. Morris and Wong [41, 42] note that
generating new transactions to replace aborted ones biases the transaction processing system towards executing
short fast transactions. These authors provide an alternative solution method that avoids the bias by
requiring that the transaction that replaces the aborted transaction be identical to the aborted transaction.
Ryu and Thomasian [51] extend this model of OCC to permit a wide variety of execution time distributions
and a variety of OCC execution models. Yu et al. [63, 62] develop approximate models of OCC and locking
concurrency control to evaluate their performance in transaction processing systems.
Of these models, the approach of Ryu and Thomasian is the best suited for application to analyzing
non-locking algorithms. Previous models of a similar nature [40, 41, 42] are not as general. Other analyses
[63, 62] focus on issues such as buffering and resource contention, and assume that data conflicts are rare. In
contrast, the Ryu and Thomasian model abstracts away the operating environment and focuses on analyzing the
effects of data conflicts only. Furthermore, the Ryu and Thomasian model produces accurate results when
the rate of data conflict is high.
Our approach is to extend the simple but flexible model of Ryu and Thomasian [51] to analyze lock-free
algorithms. The Ryu-Thomasian model requires that if a transaction is aborted, its execution time
is identical to the first execution. However, we explicitly want to account for variations in the execution
time in our workload model (since lock-free algorithms are intended to be fast in spite of temporarily or
permanently slow processors). Therefore, we start by extending the Ryu-Thomasian performance model to
account for two new workload models. We next apply the performance models to analyze several lock-free
algorithms. We show how the closed-system model of Ryu and Thomasian can be converted into an open
system model. We validate the analytical tools and use them to explore the relative performance of the
algorithms.
5 Model Description
Data access conflicts in OCC are detected by the use of timestamps. Each data granule, g, (the smallest
unit of concurrency control) has an associated timestamp, t(g), which contains the last time that the data
granule was written to. Each transaction, T, keeps track of its read set R(T) and write set W(T). We
assume that R(T) ⊇ W(T). Every time a new data granule is accessed, the time of access is recorded. If
at the commit point a data granule has a last write time greater than the access time, the transaction is
aborted. Otherwise, the transaction is committed and the last write time of each granule in W (T ) is set to
the current time. The procedure used is shown in Code 4.
read(g, T):
    read g into T's local workspace
    access_time(g) := Global_time

commit(T):
    for each g ∈ R(T)
        if access_time(g) < t(g)
            abort(T)
    for each g ∈ W(T)
        t(g) := Global_time

Code 4: OCC validation
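A direct transcription of Code 4 into C might look as follows. This is our own sketch: the granule timestamps are kept in global arrays, the global time is a simple shared counter, and any synchronization of the validation phase itself is ignored, just as it is in the pseudocode.

    #include <stdbool.h>

    #define NGRANULES 1024

    static long t[NGRANULES];            /* last write time of each granule */
    static long global_time;             /* logical clock                   */

    typedef struct {
        long access_time[NGRANULES];     /* when T first read each granule  */
        bool in_read_set[NGRANULES];
        bool in_write_set[NGRANULES];
    } txn_t;

    void occ_read(txn_t *T, int g) {
        /* read granule g into T's local workspace (omitted) */
        T->access_time[g] = global_time;
        T->in_read_set[g] = true;
    }

    /* Returns true if T commits, false if it must abort and restart. */
    bool occ_commit(txn_t *T) {
        for (int g = 0; g < NGRANULES; g++)
            if (T->in_read_set[g] && T->access_time[g] < t[g])
                return false;            /* g was overwritten after T read it */
        global_time++;
        for (int g = 0; g < NGRANULES; g++)
            if (T->in_write_set[g])
                t[g] = global_time;      /* publish T's writes */
        return true;
    }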
As has been noted elsewhere [22], lock-free protocols of the types described in Code 2 and 3 are essentially
similar to the OCC validation described in Code 4. Both types of algorithms read some data values, then
commit if and only if no interfering writes have occurred. Although many of the implementation details
are different (OCC and lock-free algorithms detect conflicts with different mechanisms, and an 'abort' in a
lock-free algorithm simply makes the operation re-execute its while loop), an analysis that counts conflicts
to calculate the probability of 'committing' applies equally well to both types of algorithms.
Because an operation that executes a non-blocking algorithm acts like a transaction that obeys OCC,
we develop the analytical methods in the context of transactions, then apply the methods to analyzing
operations. Following Ryu and Thomasian, we distinguish between static and dynamic concurrency control.
In static concurrency control, all data items that will be accessed are read when the transaction starts. In
dynamic concurrency control, data items are read as they are needed. We also distinguish between silent
and broadcast concurrency control. The pseudo-code in Code 4 is silent optimistic concurrency control: an
operation doesn't advertise its commit, and transactions that will abort continue to execute. Alternatively,
a transaction can broadcast its commit, so that conflicting transactions can restart immediately [48, 20].
We model the transaction processing system as a closed system in which V transactions each execute
one of C transaction types. When a new transaction enters the system, it is a class c transaction with
probability f_c (the f_c sum to one). A class c transaction is assumed to have an execution time of β(V)b_c(x), where
β(V) is the increase in execution time due to resource contention. Factoring out β(V) is an example of a
resource contention decomposition approximation [57, 51, 28], which lets us focus on the concurrency control
mechanism, and which allows the analysis to be applied to different computer models. We will assume that
β(V) = 1 in the analysis (i.e., one processor per operation).
As a transaction T executes, other transactions will commit their executions. If a committing transaction
conflicts with T, then T must be aborted. We denote by Φ(k, c) the probability that a committing class k
transaction conflicts with an executing class c transaction. We model the stochastic process in which
committing transactions conflict with an executing transaction as a Poisson process. Ryu and Thomasian
[51] show that this assumption, which makes the analysis tractable, leads to accurate model predictions
under a wide variety of conditions.
We differentiate between three models depending on the actions that occur when a transaction aborts. In
[51], a transaction samples its execution time when it first enters the system. If the transaction is aborted, it
is executed again with the same execution time as the first execution. We call this transaction model the
fixed time/fixed class model, or the FF model (most of the results that we present for the FF model have been taken from [51]). The FF model avoids a bias for fast transactions, permitting
a fair comparison to lock-based concurrency control when analyzing transaction processing systems.
The variability of the execution time of an operation could be due to resource contention, to decisions
the operation makes as it executes, or to a combination of both. In these cases, the execution time of an
operation changes when the operation is re-executed after an abort. However, some processors might be slower
than others, and some operations might take longer to compute than others. We introduce the variable
time/fixed class, or VF, model to represent the situation in which processors can experience both transient
and permanent slowdowns. In the VF model, an aborted transaction chooses a new execution time for its
next execution. However, the new operation is still of the same class (i.e., on the same processor and the
same type of operation).
We might want to model a situation in which processors experience only temporary slowdowns (i.e., a
UMA multiprocessor in which all operations require about the same amount of computation), so that a
transaction that is slow on one execution may be fast on the next
execution. In the variable time/variable class, or VV, model, a new transaction type is picked to replace an
aborted transaction (possibly a transaction of the same type).
5.1 Model Solution Methods
For a given transaction model, we can solve the system for any of the OCC models in the same way. The
method for solving the system depends on the transaction model: the FF and the VF models use the same
method, but the VV model is solved by using a different method.
5.1.1 Solving the FF and VF Models
The solution method for the FF and VF models involves taking the system utilization U (the portion of
time spent doing useful work) and finding the per-class utilizations U_c. The system utilization U is then
computed from the per-class utilizations. Ryu and Thomasian show that the equations can be solved quickly
through iteration.
The mean useful residence time of a class c transaction is denoted by R_c^a(V). A transaction might
be required to restart several times due to data conflicts. The expected time that a transaction spends
executing aborted attempts is denoted by R_c^d(V), and the total residence time of a class c transaction is
R_c(V) = R_c^a(V) + R_c^d(V). The utilization of a class is the proportion of its expected residence time spent in
an execution that commits: U_c(V) = R_c^a(V) / (R_c^a(V) + R_c^d(V)). The expected residence time of a transaction
(R^a(V), R^d(V), and R(V)) is calculated by taking the expectation of the per-class expected residence times.
The system efficiency, U , is calculated by taking the expectation of the per-class utilizations:
f c R c
a
(1)
In order to calculate the per-class efficiencies, we need to calculate the probability that a transaction
aborts due to a data conflict. We define Φ(k, c) to be the probability that a class k transaction conflicts with a class c transaction. We know the proportions of committing transactions, so we can calculate the probability Φ_c that a committing transaction conflicts with a class c transaction by:

Φ_c = Σ_k f_k Φ(k, c).
We can calculate the rate γ_c at which committing transactions conflict with a class c transaction by multiplying the system commit rate by the proportion of committing transactions that conflict with a class c transaction:

γ_c = (V - 1) U Φ_c / b,

where b is the expected execution time of all transactions.
Given the system utilization, we can calculate the per-class conflict rate. From the per-class conflict rate,
we can calculate the per-class utilizations, and from the per-class utilizations, we can calculate the system
utilization. The output system utilization is a decreasing function of the input system utilization. In the
FF model, the utilization is bounded by 1, so the unique root in [0, 1] can be found using a binary search
iteration. In the VF model, it is possible for the utilization to be greater than 1 (because of the bias towards
fast executions), so the root finder must use one of the standard nonlinear equation solution methods [7].
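For concreteness, the FF iteration can be sketched as follows; this is our own illustration rather than code from [51], and class_utilizations is a hypothetical stand-in for the per-class calculations of Section 6.

def solve_utilization_ff(class_utilizations, f, tol=1e-9):
    """Find U with U = sum_c f[c] * U_c(U) for the FF model.
    class_utilizations(U) returns the per-class utilizations computed from an
    assumed system utilization U; the composite map is decreasing, so the
    unique root in [0, 1] can be found by bisection."""
    def g(U):
        per_class = class_utilizations(U)
        return sum(fc * uc for fc, uc in zip(f, per_class)) - U
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if g(mid) > 0.0:   # output utilization still above the input: root lies higher
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0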
5.1.2 Solving The VV Model
In the VV transaction model, when a transaction aborts, it leaves the system and a new transaction enters.
As a result, the proportion of committing class c transactions is no longer f c , and instead depends on the
probability that a class c transaction commits, p c , and the average execution time of a class c transaction.
The solution method for the VV model is based on iteratively finding a root for the vector ~p.
In order to calculate the conflict rate, we need to know the proportion of transactions S k that are executing
a class k transaction. When a process is executing a class k transaction, it executes for an expected b k seconds.
If one were to observe a very large number of transaction executions, say M, then a class k transaction would be executed about M f_k times. Thus, the observation period would take M Σ_j f_j b_j seconds, during which a class k transaction would be executed for M f_k b_k seconds. By the theory of alternating renewal processes [49], we have

S_k = f_k b_k / Σ_j f_j b_j.
If the process is executing a class k transaction, it will finish at rate 1/b_k. When the transaction completes, it will commit with probability p_k, and if it commits, it will conflict with a class c transaction with probability Φ(k, c). Therefore,

γ_c = (V - 1) Σ_k S_k p_k Φ(k, c) / b_k.
Given the vector ~p of probabilities p_c that a transaction of each class commits, we can calculate the conflict rate γ_c for each transaction class. Given the conflict rate γ_c for a transaction class, we can calculate the probability p_c that the transaction will commit.
Unlike the case with the FF and the VF models, for the VV model, we need to iterate on a vector. We
make use of a property of the system of equations to find a rapidly converging iterative solution: if F is
the transformation F(~p_old) = ~p_new, then ~p ≤ ~q implies F(~p) ≥ F(~q), where the vector relation ≤ refers to component-wise comparison. In other words, the Jacobian of F is strictly nonpositive. The algorithm that we use to find a solution of the VV model calculates the i-th iterate of p_c by averaging the previous iterate with the value produced by F:

p_c^(i) = ( p_c^(i-1) + F(~p^(i-1))_c ) / 2.
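A sketch of the vector iteration follows. The function commit_probs is a hypothetical stand-in for the VV equations that recompute each p_c from the conflict rates implied by a trial vector ~p, and the averaging step is our reading of the damping needed for an antitone map.

def solve_vv(commit_probs, n_classes, tol=1e-9, max_iter=10000):
    """Fixed point of p = F(p) where F is componentwise antitone.
    commit_probs(p) recomputes every class's commit probability from the
    conflict rates implied by p; each iterate is averaged with its
    predecessor to damp the oscillation of an antitone map."""
    p = [1.0] * n_classes          # optimistic starting point
    for _ in range(max_iter):
        q = commit_probs(p)
        new_p = [(pi + qi) / 2.0 for pi, qi in zip(p, q)]
        if max(abs(a - b) for a, b in zip(new_p, p)) < tol:
            return new_p
        p = new_p
    return p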
6 Analysis
In this section, we present the calculations needed to solve the systems discussed in the previous section.
For each of the four types of optimistic concurrency control, we present the calculation for each of the three
transaction models.
6.1 Analysis of Silent/Static OCC
In this section, we examine the simplest OCC scheme. In the silent/static scheme, transactions access their
entire data sets when they start their executions, and detect conflicts when they attempt to commit.
6.1.1 Fixed Time/Fixed Class
In [51], if a transaction executes for t seconds, then aborts, it will execute for t seconds when it restarts. If an operation requires t seconds, the probability that it will commit is e^{-γ_c t}, since we assume that conflicts form a Poisson process. Therefore, the number of times that a class c transaction with running time t must execute has the geometric distribution

P(N = n) = (1 - e^{-γ_c t})^{n-1} e^{-γ_c t}

and has mean e^{γ_c t}. A class c transaction with running time t therefore has a mean residence time of t e^{γ_c t}, and class c transactions have a mean residence time of

R_c(V) = ∫_0^∞ t e^{γ_c t} b_c(t) dt = -B*'_c(-γ_c),

where B*'_c is the first derivative of the Laplace transform of b_c(t) [32]. Finally, the per-class utilization can be calculated for the iteration to be

U_c = R^a_c(V) / R_c(V) = b_c / ∫_0^∞ t e^{γ_c t} b_c(t) dt.

We note that b_c(t) must be o(t^{-1} e^{-γ_c t}) for the integral to converge.
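As a worked example of ours (not from [51]): if b_c(t) is exponential with rate μ and γ_c < μ, the integral above equals μ/(μ - γ_c)^2, so U_c = (1 - γ_c/μ)^2. The short check below compares this closed form with direct numerical integration.

import math

def Rc_exponential(mu, gamma, steps=200000, t_max=200.0):
    """Numerically integrate R_c = integral of t e^{gamma t} mu e^{-mu t} dt (gamma < mu)."""
    dt = t_max / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt
        total += t * math.exp((gamma - mu) * t) * mu * dt
    return total

mu, gamma = 1.0, 0.3
Rc = Rc_exponential(mu, gamma)
print((1.0 / mu) / Rc, (mu - gamma) ** 2 / mu ** 2)   # both close to 0.49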
6.1.2 Variable time / Fixed Class
In the variable time/fixed class model, every time a class c transaction executes its running time is sampled
from b_c(t). Therefore, the unconditional probability that the operation commits is:

p_c = ∫_0^∞ e^{-γ_c t} b_c(t) dt.    (8)

The number of times that the operation executes has a geometric distribution, so an operation will execute 1/p_c times on average. The first 1/p_c - 1 times the operation executes, it will be unsuccessful. Knowing that the operation is unsuccessful tells us that it probably required somewhat longer than average to execute, since slow operations are more likely to be aborted. Similarly, successful operations are likely to be faster. In particular, an operation will be successful only if it reaches its commit point before a conflict occurs, and will be unsuccessful only if a conflict occurs before it reaches its commit point. The distributions of the execution times of the successful and unsuccessful operations are calculated by taking order statistics [14]:

b^s_c(t) = K_s e^{-γ_c t} b_c(t),    b^f_c(t) = K_f (1 - e^{-γ_c t}) b_c(t),

where K_s and K_f are normalizing constants computed by

K_s = ( ∫_0^∞ e^{-γ_c t} b_c(t) dt )^{-1},    K_f = ( ∫_0^∞ (1 - e^{-γ_c t}) b_c(t) dt )^{-1}.

If b^s_c and b^f_c are the expected values of b^s_c(t) and b^f_c(t), respectively, then the expected time to complete a class c operation is

R_c(V) = b^s_c + (1/p_c - 1) b^f_c.    (12)

We observe that we only need to calculate b^s_c, because

b_c = p_c b^s_c + (1 - p_c) b^f_c,

so that by combining (11) and (13) we get:

R_c(V) = b_c / p_c.

Therefore, we find that

U_c = p_c b^s_c / b_c.
We note that in the variable time model, the only restriction on the distributions b c (t) is that they have
finite means.
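Under the reconstruction above, p_c, b^s_c and R_c = b_c/p_c can be evaluated numerically for any density; the density and conflict rate in this sketch of ours are illustrative inputs.

import math

def vf_silent_static(density, gamma, b_c, t_max=200.0, steps=200000):
    """Return (p_c, b_s, R_c) for the VF silent/static model.
    density(t) is the execution-time density b_c(t), gamma the conflict rate,
    b_c the mean of density; p_c = integral of e^{-gamma t} b_c(t) dt,
    b_s is the mean of the success-biased density, and R_c = b_c / p_c."""
    dt = t_max / steps
    p_c = 0.0
    weighted_t = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt
        w = math.exp(-gamma * t) * density(t) * dt
        p_c += w
        weighted_t += t * w
    return p_c, weighted_t / p_c, b_c / p_c

p, bs, Rc = vf_silent_static(lambda t: math.exp(-t), 0.3, 1.0)   # exponential service, mean 1
print(p, bs, Rc)   # roughly 0.77, 0.77, 1.3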
6.1.3 Variable Time / Variable Class
For the silent/static VV model, we calculate the conflict rate from formula (4) and the probability that a
class c transaction commits from formula (8).
6.2 Analysis of Static/Broadcast OCC
In static/broadcast OCC, transactions access their entire data sets when they start execution, and abort
whenever a conflicting transaction commits.
6.2.1 Fixed/Fixed
The probability that a transaction restarts is calculated in the same way as in the silent/static model, given
the same conflict rate. The wasted time per aborted execution now has a truncated exponential distribution: a transaction with running time t that aborts does so τ seconds after it starts with density γ_c e^{-γ_c τ} / (1 - e^{-γ_c t}), 0 ≤ τ ≤ t. As a result,

R_c(V) = ∫_0^∞ ( (e^{γ_c t} - 1) / γ_c ) b_c(t) dt.
6.2.2 Variable/Fixed
The probability that a transaction commits, p c , and the expected execution time of transactions that commit
b^s_c, are calculated in the same way as in the silent/static model. The execution time of the aborted transactions is different, since a transaction will abort after t seconds if some other transaction conflicts with it t seconds after it starts, and it has not yet committed:

b^f_c(t) = K_f γ_c e^{-γ_c t} (1 - B_c(t)),

where B_c(t) is the cumulative distribution of b_c(t) and

K_f = ( ∫_0^∞ γ_c e^{-γ_c t} (1 - B_c(t)) dt )^{-1} = ( 1 - γ_c ∫_0^∞ e^{-γ_c t} ∫_0^t b_c(τ) dτ dt )^{-1}.
Since a conflict aborts a transaction early, we cannot make use of equation (13) to simplify equation (11). Instead, we must actually calculate the expected values b^s_c and b^f_c, which can be expressed in terms of the Laplace transform B*_c(s) of b_c(t) and its derivative, evaluated at s = γ_c. Putting these formulae into equation (11) for R_c(V) gives the expected residence time and, from it, the per-class utilization.
We note that if b_c(t) has an exponential distribution, then U_c = p_c. This relation can be used to directly
solve a system where all execution times are exponentially distributed, or to simplify the calculations when
some execution time distributions are exponentially distributed and some are not.
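A quick simulation of ours confirms the exponential special case noted above: with service rate μ, Poisson conflicts at rate γ_c, and aborted attempts cut short at the conflict time, the fraction of time spent in committing executions comes out to μ/(μ + γ_c) = p_c.

import random

def broadcast_vf_utilization(mu, gamma, n_ops=200000, seed=1):
    """Simulate one process: each attempt races an exp(mu) completion against
    an exp(gamma) conflict; aborted attempts stop at the conflict time."""
    rng = random.Random(seed)
    useful = wasted = 0.0
    for _ in range(n_ops):
        while True:
            service = rng.expovariate(mu)
            conflict = rng.expovariate(gamma)
            if service <= conflict:      # the operation commits before any conflict
                useful += service
                break
            wasted += conflict           # aborted: only the truncated time is spent
    return useful / (useful + wasted)

print(broadcast_vf_utilization(1.0, 0.5))   # close to 1.0 / 1.5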
6.2.3 Variable/Variable
In the silent/static case, a class k transaction executes for an expected b k seconds. In the broadcast/static
case, a transaction terminates early if it is aborted. The average amount of time that a transaction spends
executing a class k transaction, b̄_k, is the weighted average of the execution time depending on whether or not the transaction commits. By using equations (17) and (18), we find that:

b̄_k = p_k b^s_k + (1 - p_k) b^f_k.

Therefore, the proportion of time that a process spends executing a class k transaction is

S_k = f_k b̄_k / Σ_j f_j b̄_j,
and the conflict rate of a class c transaction is

γ_c = (V - 1) Σ_k S_k p_k Φ(k, c) / b̄_k.

Given a conflict rate γ_c, we calculate p_c by using equation (8).
6.3 Analysis of Silent/Dynamic
In dynamic optimistic concurrency control, a transaction accesses data items as they are needed. A class
c transaction that requests n c data items has n c phases. As the transaction accesses more data items,
it acquires a higher conflict rate. We redefine the conflict function \Phi to model the different phases of the
transactions. If a class k transaction commits, it conflicts with a class c transaction in stage i with probability
Φ(k, c, i). The probability that a committing transaction conflicts with a class c transaction in stage i is:

Φ_{c,i} = Σ_k f_k Φ(k, c, i).

The conflict rate γ_{c,i} for a class c transaction in stage i is computed from Φ_{c,i} in the same way that γ_c is computed from Φ_c. The amount of time that a class c transaction spends in stage i has the distribution b_{c,i}(t) with mean b_{c,i}, and the average time to execute the transaction is b_c = Σ_{i=1}^{n_c} b_{c,i}.
6.3.1 Fixed/Fixed
As a transaction moves through different stages, it encounters different conflict rates. The conflict rate for
a class c transaction is a vector:

~γ_c = (γ_{c,1}, γ_{c,2}, ..., γ_{c,n_c}).

Similarly, the execution time of a class c transaction is a vector ~t = (t_1, t_2, ..., t_{n_c}), where t_i is sampled from the distribution with density b_{c,i}(x). The probability that a class c transaction aborts is therefore

1 - e^{-Σ_i γ_{c,i} t_i}.

By taking expectations over the times for the processing stages, Ryu and Thomasian find that the expected number of times the transaction must execute is

Π_i B*_{c,i}(-γ_{c,i}),

where B*_{c,i} is the Laplace transform of b_{c,i}(t).
6.3.2 Variable/Fixed
We use the same transaction model as in the Fixed/Fixed case. A transaction will commit only if it completes every stage without conflict. We define p_{c,i} to be the probability that a class c transaction completes the i-th stage without a conflict. We can calculate p_{c,i} by using formula (8), substituting B_{c,i} for B_c and γ_{c,i} for γ_c. Given the p_{c,i}, we can calculate p_c by

p_c = Π_i p_{c,i}.    (26)

As in the case of silent/static concurrency control, the unconditional expected time spent executing a class c transaction is b_c, so that

R_c(V) = b_c / p_c.
6.3.3 Variable/Variable
For the VV model, we use formula (4), appropriately modified to calculate the conflict rates, and formula (26)
to calculate p c .
6.4 Dynamic/Broadcast
6.4.1 Fixed/Fixed
The analysis of dynamic/broadcast concurrency control under the fixed/fixed model uses a combination of
the previously discussed techniques. Ryu and Thomasian show that R_c(V) can be written in closed form in terms of the Laplace transforms B*_{c,i} of the stage execution time distributions and their first and second derivatives, evaluated at the per-stage conflict rates.
6.4.2 Variable/Fixed
We can use formula (26) to calculate p c . For each processing phase, we can use formulae (17) and (18) to
calculate b^s_{c,i} and b^f_{c,i}. If a transaction commits, then it successfully completed each phase, so that

b^s_c = Σ_i b^s_{c,i}.    (28)
If a transaction fails to commit, then it might have failed at any one of the stages. We define q_c = 1 - p_c to be the probability that a transaction aborts, and q_{c,i} to be the probability that a transaction aborts at stage i, given that it aborts. A transaction that aborts at stage i must have successfully completed the previous stages, and a transaction aborts at exactly one of the stages, so

q_{c,i} = (1 - p_{c,i}) Π_{j<i} p_{c,j} / (1 - p_c).

If a transaction aborts at stage i, then its expected execution time is:

Σ_{j<i} b^s_{c,j} + b^f_{c,i}.

Therefore, b^f_c is the unconditional expected execution time:

b^f_c = Σ_i q_{c,i} ( Σ_{j<i} b^s_{c,j} + b^f_{c,i} ).    (29)
We then use formulae (28) and (29) in formula (11) to find R c (V ).
6.4.3 Variable/Variable
We use formula (26) to calculate p c , and formulae (28) and (29) in formulae (21) and (23) to calculate the
conflict rate.
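The per-stage combination can be written compactly. The sketch below follows our reconstruction of formulas (26), (28) and (29) and treats the per-stage quantities p_{c,i}, b^s_{c,i} and b^f_{c,i} as given inputs.

def combine_stages(p_stage, bs_stage, bf_stage):
    """Combine per-stage commit probabilities and biased means.
    p_stage[i]  : probability that stage i completes without conflict
    bs_stage[i] : expected length of a successful execution of stage i
    bf_stage[i] : expected length of an execution of stage i cut short by a conflict
    Returns (p_c, b_s, b_f) for the whole transaction."""
    p_c = 1.0
    for p in p_stage:
        p_c *= p                          # commit only if every stage survives
    b_s = sum(bs_stage)                   # a committed run pays every stage in full
    b_f = 0.0
    prefix = 1.0                          # probability of reaching stage i
    for i, p in enumerate(p_stage):
        q_i = prefix * (1.0 - p) / (1.0 - p_c)          # abort at stage i, given an abort
        b_f += q_i * (sum(bs_stage[:i]) + bf_stage[i])
        prefix *= p
    return p_c, b_s, b_f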
7 Model Validation and Experiments
We wrote an OCC simulator to validate our analytical models. A parameterized number of transactions
executed concurrently, and committing transactions conflicted with other transactions depending on a sample
from Φ. We ran the simulation for 10,000 transaction executions, then reported statistics on throughput,
execution time, and commit probabilities.
Ryu and Thomasian have already validated the F/F model, so we present a validation only of the V/F
and V/V models (we also simulated the F/F model, and found close agreement between the simulation and
analysis). In our first validation study, we modeled a system with a single transaction type. If there is only
one transaction type, the V/F and the V/V models are the same, so we present results for the V/F model
only (we also ran simulations and analytical calculations for the V/V model, and obtained nearly identical
results). We calculated Φ by assuming that the transactions randomly accessed data items from a database that contained N data items, and that transactions with overlapping data sets conflict. Ryu and Thomasian provide the following formula for the probability that two access sets of size n and m overlap in a database with N data items:

Φ = 1 - C(N - n, m) / C(N, m),

where C(a, b) denotes the binomial coefficient.
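The overlap probability is easy to evaluate directly; the sketch of ours below computes the hypergeometric expression above, and the database size used in the call is only illustrative since the value used in the paper's experiments is not shown here.

from math import comb

def overlap_probability(N, n, m):
    """Probability that two random access sets of sizes n and m, drawn without
    replacement from N data items, have at least one item in common."""
    if n + m > N:
        return 1.0
    return 1.0 - comb(N - n, m) / comb(N, m)

print(overlap_probability(1000, 4, 4))   # about 0.016 for an illustrative database size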
We report the probability that a transaction commits for a variety of access set sizes and degrees of
concurrency in Table 1. The execution times in the static concurrency control experiments and the phase
execution times in the dynamic concurrency control experiments were exponentially distributed. The experiments
show close agreement between analytical and simulation results, though the calculations are least
accurate for the dynamic concurrency control when the level of conflict is high.
We also performed a validation study for a system with two transaction classes. The first transaction
class accesses four data items, and the second accesses eight data items. To save space, we report results for Dynamic/Broadcast OCC only, it being the least accurate of the models. Table 2 reports simulation and analytical results for the V/F and the V/V transaction models for a variety of degrees of concurrency. In these experiments, we found close agreement between the simulation and the analytical predictions.
Static/Silent Static/Broadcast
access set size 4
ana .9444 .6366 .4586 .9414 .5272 .2797
ana
ana .7755 .3481 .2241 .7281 .1567 .0608
Dynamic/Silent Dynamic/Broadcast
ana .9704 .7189 .4879 .9700 .6967 .4325
ana
ana .8554 .3703 .1910 .8479 .3078 .1319
Table 1: Validation study of the V/F model. p_c is reported for a single transaction class and exponentially distributed execution times.
Varying/Fixed Varying/Varying
analytical simulation analytical simulation
class 1 class 2 class 1 class 2 class 1 class 2 class 1 class 2
Table 2: Validation study for Dynamic/Broadcast OCC and two transaction classes. p_c is reported. Execution phase times are exponentially distributed.
8 Analysis of Nonblocking Data Structures
In this section, we apply the analytical framework to model the performance of non-blocking data structures
and explore several performance implications. Our analytical framework can be used to model non-blocking
data structure algorithms that have the basic form described in section 2 in Codes 2 and 3. While some
non-blocking algorithms use a different mechanism [24, 17, 34], most of the recently proposed methods
[45, 58, 19, 55, 59, 60, 56, 23] are similar to these techniques.
8.1 Atomic Snapshot
We first examine the algorithms similar to Code 2 in which taking the snapshot consists of performing one
read (i.e., reading the pointer to the object). This approach is used by Herlihy [21], is a step in Turek's
algorithms [58] and is an approximation to the algorithms proposed by Prakash et al. [45], Valois [59, 60],
and Harathi and Johnson [19].
We want to model both transient and permanent slowdowns. The V/F model accounts for transient and
permanent slowdowns, and the V/V model permits transient slowdowns only. We are modeling algorithms
in which the snapshot is performed atomically, so the operations execute SS transactions.
In Herlihy's algorithms, every operation conflicts with every other, so Φ(k, c) = 1. In our experiments, we use two transaction classes to model the fast and slow processors. The first transaction class models the fast processors. Its execution time is chosen uniformly randomly in [.8, 1.2], and f_1 = .9. The execution time of the second transaction class, which represents the slow processors, is chosen uniformly randomly in [8, 12], and f_2 = .1.
We plot the throughput of the nonblocking queue for the permanent and transient slowdown models
(VF and VV) against increasing V in Figure 1. For comparison, we also plot the throughput of the locking
algorithm, which is a constant 1/1.9 operations per time unit. The nonblocking queue in the permanent slowdown model has a lower throughput than the locking queue, in spite of the preference shown towards fast executions. This phenomenon occurs because of the extremely long times required for the completion of the operations
executed on the slow processors. These running times are shown in Figure 2. The throughput of the transient
slowdown model increases with increasing V , and is considerably greater than that of the locking queue.
These model predictions are in agreement with our simulation results [45].
The Ryu and Thomasian models assume a closed system and calculate the throughput and response time
as a function of the number of competing operations. Access to a shared data structure can be better
modeled as an open system, in which operations arrive, receive service, then depart. We can use the results
from the closed-system model to approximate the performance measures of an open system. The throughput
values for the closed system are used for the state-dependent service rates in a flow-equivalent server [31].
The steps to compute open system response times in the FF and the VF transaction models are:
1. For each level of concurrency V, calculate the per-class and average response times.
2. Model the number of jobs in the system as a finite-buffer queue. Use the average response times (across
all transaction types) as the state-dependent service times. Given the arrival rate -, calculate the state
occupancy probabilities.
3. Use the state occupancy probabilities to weight the per-class response times and compute the average
response time by taking the sum.
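One way to carry out these steps is sketched below; this is our own rendering of the flow-equivalent server approximation [31], the arrays throughputs[v] and resp[v] are assumed to hold the closed-system results for v = 1, ..., V_max, and the weighting used in the last step is our guess at the intended averaging.

def open_system_response(arrival_rate, throughputs, resp):
    """Flow-equivalent approximation of the open system.
    throughputs[v-1] and resp[v-1] are the closed-system throughput and mean
    response time with v operations present (v = 1..V_max); the throughputs
    serve as state-dependent service rates of a birth-death chain."""
    V_max = len(throughputs)
    pi = [1.0]
    for v in range(1, V_max + 1):
        pi.append(pi[-1] * arrival_rate / throughputs[v - 1])
    norm = sum(pi)
    pi = [x / norm for x in pi]
    idle = pi[0]
    # weight the per-population response times by the busy-state probabilities
    mean_resp = sum(pi[v] * resp[v - 1] for v in range(1, V_max + 1)) / (1.0 - idle)
    return mean_resp, idle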
In the VV model, per-class execution times aren't meaningful. Instead, one calculates the average transaction execution time. The expected probability that a VV transaction commits is:

P^VV = Σ_c f_c p_c.

A transaction re-executes until it commits. Thus, the number of executions has a geometric distribution, with expected value 1/P^VV. Therefore, the expected time to execute a transaction is

( Σ_c f_c b_c ) / P^VV.
Using the parameters from the previous experiment, we plot the response time of the single-snapshot
algorithm under the permanent and the transient slowdown processor models against an increasing arrival
rate in Figure 3. We also report the results of a simulation for both of the processor models. The chart
shows that the VV analytical model accurately predicts response times of the transient slowdown model,
but that the VF model is overly optimistic. Figure 4 compares analytical and simulation predictions of the
probability that the system is idle for both processor models. Here we can see again that the VV model
makes accurate predictions, while the VF model is too optimistic. We include in Figure 3 a plot of the
response time of an equivalent locking algorithm (modeled by a M/G/1 queue [32]). The locking algorithm
has a considerably better response time than the non-blocking algorithm under the permanent slowdown
model. The non-blocking algorithm under the transient slowdown model has a similar response time under
a light load, but a lower response time under a heavy load.
In observing the simulations, we noticed that the response time of operations that are alone in the system
when they complete is close to response times when there are two operations in the system. This occurs
because the jobs that complete when they are alone in the system are often slow jobs that had been forced to
restart several times. We therefore make an approximation (which we call VF approx) to the flow-equivalent
by setting the service rate when there is one operation in the system to that when there are two jobs in the
system. The predictions made by this approximation for the VF model are labeled VF approx in Figures 3
and 4. The VF approx makes poor predictions of response times, but accurate predictions of the system
utilization.
To test the robustness of our models in the face of different service time distributions, we ran the
experiments with the permanent slowdown processor model where the service time distributions have an
exponential distribution. The results of these experiments are shown in Figures 5 and 6. These figures also
show that the VF model is too optimistic, and that the VF approx model makes poor predictions of the
response times but good predictions of the system utilization.
8.2 Composite Snapshot
Several non-blocking algorithms take a snapshot of several variables to determine the state of the data
structure [45, 59, 60, 19, 22]. While taking an atomic composite snapshot requires a more complex algorithm,
it reduces the amount of copying needed to perform an operation, which improves performance. In addition,
architectures that support lock-free algorithms have been proposed [23, 56]. These architectures allow a
process to reserve several words of shared memory, and inform the processor if a conflicting write occurs.
Code 5, taken from [45], shows a typical protocol to take an atomic snapshot for an algorithm that
implements a non-blocking queue. The nonblocking queue needs to determine the simultaneous values of the
three variables in order to determine the state of the queue. We call the three variables A, B, and C, and the
protocol reads their simultaneous values into my_A, my_B, and my_C.
repeat
    my_A = A
    repeat
        my_B = B
        my_C = C
    until B = my_B
until A = my_A
Code 5 Composite snapshot.
During the time that an operation is taking a snapshot, a modification to the data structure can cause
the snapshot to fail. Further, as the snapshot is taken, different modifications can cause the snapshot to
fail. Thus, while the snapshot is in progress, the operation uses DB optimistic concurrency control. After
the snapshot is successfully taken, the operation calculates its update, then attempts to commit its update.
The operation will not abort during the time that it calculates its update, so this stage of the operation uses
SS optimistic concurrency control.
Since the optimistic concurrency control used for composite-snapshot non-blocking algorithms is a variation
of the DB concurrency control, we use the methods similar to those discussed in section 6.4 to calculate
the execution times and the probability of success. The last stage in the calculation will not terminate early when a conflicting operation commits. Therefore, the value of b^f_{c,n_c+1} in (29) should be calculated using the method described in section 6.1.2, i.e., using the order-statistics expression b^{f,SS}_{c,n_c+1} for the expected length of an unsuccessful execution of the last stage.
We assume that an operation is equally likely to be an enqueue or a dequeue operation, and that the
queue is usually full. In this case, when an enqueue operation commits, it kills all other enqueue operations,
and the same applies to the dequeue operations. Therefore, one operation kills another upon commit with
probability 1/2. We start counting the operation's execution from the point when it executes the statement
my_A = A. The first stage ends when the first until statement is executed, and requires 4 instructions. The
second stage ends when the second until statement is executed, and requires 1 instruction. The third stage
ends when the operation attempts to commit its update, and requires 8 instructions. Fast processors require a time uniformly randomly chosen in [.8, 1.2] to execute the instructions in a stage, and slow processors require a time uniformly randomly chosen in [8, 12]. That is, the time to execute a stage is the number of instructions in the stage multiplied by a sample uniformly randomly selected from [lo, hi].
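Under this stage model, the commit probability of an operation can be estimated with a small Monte Carlo calculation of ours; it treats the per-stage conflict rate gamma as a given input and ignores the distinction between the broadcast stages and the final silent stage.

import random, math

def composite_commit_probability(gamma, lo, hi, instructions=(4, 1, 8),
                                 trials=100000, seed=2):
    """Monte Carlo estimate of the probability that every stage of the
    composite-snapshot operation finishes before a conflicting commit."""
    rng = random.Random(seed)
    commits = 0
    for _ in range(trials):
        survived = True
        for count in instructions:
            stage_time = count * rng.uniform(lo, hi)
            # Poisson conflicts at rate gamma: the stage survives with prob e^{-gamma*t}
            if rng.random() > math.exp(-gamma * stage_time):
                survived = False
                break
        if survived:
            commits += 1
    return commits / trials

print(composite_commit_probability(0.05, 0.8, 1.2))   # a fast processor under light conflict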
The results of the experiments are shown in Figures 7 and 8. These figures show the response times and
idle probability, respectively. Again we draw the conclusions that the VV model makes accurate predictions,
that the VF model is too optimistic, and that the VF approx model makes poor predictions of response
times but good predictions of the idle probability.
9 Conclusion
In this work we present a model for analyzing the performance of a large class of non-locking algorithms. This
model is an extension of the Ryu and Thomasian model of optimistic concurrency control. Our extensions allow operations to resample their execution time if they abort (VF transaction model), and also to change their operation class (VV transaction model). We validate our models in a closed system under
a variety of concurrency control models.
We next apply the analytical tools to compare the performance of non-locking and locking algorithms
for shared objects. We use two processor models. In the permanent slowdown model, the execution speed
of the processor is fixed, modulo small variations. In the transient slowdown model, the execution speed
of a processor changes between executions. We use the VF transaction model for the permanent slowdown
processor model and the VV transaction model for the transient slowdown processor model. Permanent
slowdowns can occur due to NUMA architectures, heterogeneous architectures, or differences in operation
execution time. Transient slowdowns can occur due to cache line faults, memory and bus contention, page
faults, context switching, or data-dependent operation execution times.
We compared the performance of the non-locking and the locking algorithms in a closed system, and
found that non-locking algorithms in the variable speed model have significantly better throughput than
the locking algorithm, but that non-locking algorithms in the permanent slowdown model have significantly
worse throughput. While the closed system model does not give direct performance results for a real system,
it indicates the relative performance of the algorithms and it provides a bound on the rate at which operations
can execute.
We extend the closed system model to an open system by using a flow-equivalent approximation. The
analytical results of this approximation show the same performance ranking with respect to response times
as exists in the closed system. Further, the VV model is slightly pessimistic, while the VF model is very
optimistic, making us more confident in our performance ranking. We describe a further approximation that
lets us accurately calculate the utilization of the concurrent object in the VF model. The analytical models
are accurate enough to be useful in predicting the impact of a non-locking concurrent object on system
performance.
This work indicates that non-locking algorithms have the potential to provide better performance than
locking algorithms when the processors executing the operations experience transient slowdowns only. Thus,
lock-free algorithms are appropriate on UMA architectures when all operations on the data require about the
same processing time. However, our work shows that lock-free algorithms have poor performance when the
processors can experience permanent slowdowns. Slow processors receive significant discrimination, reducing
overall throughput. Thus, lock-free algorithms are not appropriate on heterogeneous or NUMA architectures,
or when some types of operations require significantly more computation than others. In these cases, non-blocking
algorithms must incorporate a fairness mechanism to provide good performance. Approaches to such
mechanisms are described in [2, 11].
--R
Adaptive backoff synchronization techniques.
Performance issues in non-blocking synchronization on shared memory multiprocessors
The performance of spin lock alternatives for shared memory multiprocessors.
Scheduler activations: Effective kernel support for the user-level management of parallelism
The performance implications of thread management alternatives for shared memory multiprocessors.
An Introduction to Numerical Analysis.
Concurrency of operations on B-trees
Concurrency Control and Recovery in Database Systems.
Practical considerations for non-blocking concurrent objects
Simultaneous update of priority structures.
Models of access delays in multiprocessor memories.
Order Statistics.
Concurrent search and insertion in AVL trees.
A bistability throughput phenomenon in a shared-memory mimd machine
Coordinating large numbers of processors.
Synchronization mechanisms for shared-memory multiprocessors
A priority synchronization algorithm for multiprocessors.
Observations on optimistic concurrency control schemes.
A methodology for implementing highly concurrent data structures.
A methodology for implementing highly concurrent data objects.
Transactional memory: Architectural support for lock-free data structures
Axioms for concurrent objects.
Watson Research Center.
Approximate analysis of reader and writer access to a shared resource.
The Performance of Concurrent Data Structure Algorithms.
The performance of concurrent data structure algorithms.
Concurrent operations on priority queues.
Introduction to Computer System Performance Evaluation.
Queueing Systems
Concurrent manipulation of binary search trees.
Specifying concurrent program modules.
A fast mutual exclusion algorithm.
Waiting algorithms for synchronization in large-scale multiprocessors
Concurrency control in a dynamic search structure.
A dynamic processor allocation policy for multiprogrammed shared-memory multiprocessors
Algorithms for scalable synchronization on shared-memory multiprocessors
Optimistic vs. pessimistic concurrency control mechanisms in database management systems.
Performance of concurrency control algorithms with non-exclusive access
Performance analysis of locking and optimistic concurrency control algorithms.
Concurrent access of priority queues.
Performance analysis of concurrent-read exclusive-write
Experiments with transaction processing on a multiprocessor.
Stochastic Processes.
Dynamic decentralized cache schemes for mimd parallel processors.
Performance analysis of centralized database with optimistic concurrency control.
Analysis of database performance with dynamic locking.
Concurrent operations on B
Concurrent search structure algorithms.
A simple and correct shared-queue algorithm using compare-and-swap
Multiple reservations and the oklahoma update.
Locking performance in centralized databases.
Locking without blocking: Making lock based concurrent data structure algorithms nonblocking.
Analysis of a lock-free queue
Concurrent dictionaries without locks.
An analysis of synchronization mechanisms in shared-memory mul- tiprocessors
On modeling database concurrency control.
Modeling and analysis of a time-stamp history based certification protocol for concurrency control
The effect of scheduling discipline on spin overhead in shared memory parallel systems.
performance modeling;parallel processing;nonblocking;lock-free;synchronization
627095 | Zero-Aliasing for Modeled Faults. | AbstractWhen using built-in self-test (BIST) for testing VLSI circuits the circuit response to an input test sequence, which may consist of thousands to millions of bits, is compacted into a signature which consists of only tens of bits. Usually a linear feedback shift register (LFSR) is used for response compaction via polynomial division. The compacting function is a many-to-one function and as a result some erroneous responses may be mapped to the same signature as the good response. This is known as aliasing.In this paper we deal with the selection of a feedback polynomial for the compacting LFSR, such that an erroneous response resulting from any modeled fault is mapped to a signature that is different from that for the good response. Such LFSRs are called zero-aliasing LFSRs. Only zero-aliasing LFSRs with primitive or irreducible feedback polynomials are considered due to their suitability for BIST test pattern generation.Upper bounds are derived for the least degree irreducible and primitive zero-aliasing LFSR polynomials. These bounds show that in all practical test applications such a polynomial will be of degree less than 53. Expected bounds are derived and show that when the number of faults is less than 106, then this degree is at most 21.Procedures to find irreducible and primitive zero-aliasing LFSR polynomials of: 1) the smallest degree and 2) a pre-specified degree; are presented. A low-complexity procedure to find a zero-aliasing LFSR polynomial is also presented. The worst case as well as expected time complexities of all these procedures are derived. Experimental results are presented for practical problem sizes to demonstrate the applicability of the proposed procedures. | Introduction
Built-In Self-Test (BIST) is the capability of a circuit to test itself. The idea behind BIST is to
create pattern generators (PGs) to generate test patterns for the circuit and response analyzers
(RAs) to compact the circuit response to the inputs that are applied. The circuit response,
which may consist of thousands to millions of bits, is compacted into a signature which consists
of only tens of bits. The compacting function is a many-to-one function and as a result some
erroneous responses might be mapped to the same signature as the good response. This is known
as aliasing.
When all erroneous responses are mapped to a different signature than the good response,
we have zero-aliasing. There are two previous schemes to achieve zero-aliasing, that take into
account all possible error sequences. The first is by Gupta et al. [7] [14]. In this scheme the RA
is a linear feedback shift register (LFSR) and the compacting function is polynomial division of
the good response by the feedback polynomial. The scheme requires the quotient of the good
response to be periodic. This is achieved by proper selection of the LFSR feedback polynomial
once the good response is known. They give a bound of n=2 on the length of the required
register, for a test sequence of length n. The second scheme, due to Chakrabarty and Hayes [5],
uses non-linear logic to detect any error in the response. The number of memory cells in their
RA is dlog ne but they have no bound on the extra logic required to implement their scheme.
The major difference between our scheme and the aforementioned zero-aliasing schemes is
that we target a specific set of possible faults and try to achieve zero-aliasing for the error
sequences resulting only from these modeled faults. We do not try to recognize all possible error
sequences, mainly because most of them will never occur. The fault model lets us focus on the
probable error sequences. As a result, we use less hardware than the aforementioned schemes.
A previous method for finding zero-aliasing feedback polynomials for modeled faults was
presented by Pomeranz et al. [13]. Different heuristics for finding a zero-aliasing polynomial are
suggested. These heuristics do not necessarily find a minimum degree zero-aliasing polynomial,
nor do they necessarily find an irreducible or primitive polynomial, which is very important if
the register is also to function as a PG. In this work we present upper bounds on the minimum
degree irreducible and primitive zero-aliasing polynomials and provide algorithms to find such
minimum degree polynomials.
The PGs and RAs are usually implemented by reconfiguring existing registers. Some registers
are configured as PGs to generate tests for some blocks of logic and reconfigured as RAs to test
other blocks of logic. When the same LFSR feedback polynomial serves both purposes, the
overhead of a reconfigurable design is saved. In such a scheme a LFSR is used as a PG and a
multiple input shift register (MISR) is used as a RA. An example of a MISR-based RA is shown
in
Figure
1. The register is configured as a shift register where the input to each cell is an XOR
function of the previous cell, an output bit of the circuit under test (CUT) and, depending on the
linear feedback function, a feedback bit. Number the cells of a k stage MISR D
with the feedback coming out of cell D k\Gamma1 . The feedback function is represented as a polynomial
and the feedback feeds cell D i iff f 1. The feedback polynomial of
the MISR in Figure 1 is 1. The difference between a LFSR and a MISR are
the extra inputs connected to the outputs of the CUT. If both the PG (LFSR) and the RA
(MISR) use the same feedback polynomial, then the overhead of reconfigurable polynomials is
saved. In a previous paper [11] we showed how to select the feedback polynomial for a PG; in
this paper we deal with selecting the feedback polynomial for a RA. Since a k-stage PG with
a primitive feedback polynomial generates all non-zero k-tuples as opposed to a PG with an
irreducible feedback polynomial, we prefer primitive zero-aliasing polynomials, even though it
takes more effort to find them.
The compacting function of a MISR is polynomial division over GF [2]. The effective output
polynomial is divided by the feedback polynomial. The signature is the remainder of the division.
If the CUT has k outputs, it has k output sequences. Denote these sequences by O_0, O_1, ..., O_{k-1}. If the input sequence is of length n, then each can be viewed as a polynomial O_i(x) = Σ_{j=0}^{n-1} O_{i,j} x^j, where O_{i,j} is the output value of the i-th output at time j. The effective polynomial is then Σ_{l=0}^{k-1} O_l(x) x^l.
Our objective is to select a feedback polynomial for the compacting MISR, given a set of
modeled faults, such that an erroneous response resulting from any modeled fault is mapped to
a different signature than the signature of the good response.
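To make the compaction concrete, the sketch of ours below computes a signature by dividing an effective response polynomial by a feedback polynomial over GF[2]; polynomials are encoded as Python integers with bit i holding the coefficient of x^i, and the particular feedback polynomial in the example is only an illustration.

def poly_mod(poly, feedback):
    """Remainder of poly divided by feedback over GF(2); bit i is the coefficient of x^i."""
    fb_deg = feedback.bit_length() - 1
    while poly and poly.bit_length() - 1 >= fb_deg:
        poly ^= feedback << (poly.bit_length() - 1 - fb_deg)
    return poly

def signature(response_bits, feedback):
    """Signature left in a MISR whose feedback polynomial is `feedback`."""
    poly = 0
    for i, bit in enumerate(response_bits):
        poly |= (bit & 1) << i
    return poly_mod(poly, feedback)

good   = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
faulty = [1, 0, 1, 0, 0, 0, 1, 1, 1, 1]
fb = 0b11001   # x^4 + x^3 + 1, an illustrative primitive polynomial
print(signature(good, fb) != signature(faulty, fb))   # True: this error is not aliased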
For a CUT with few outputs, the available register might be too short to achieve zero-aliasing.
In this case we need to lengthen the register by adding flip-flops. To keep the hardware overhead
at a minimum, we want to add as few flip-flops as possible, hence we are interested in a feedback
polynomial of smallest degree that achieves our objective. When a register is to serve both as
a PG and a RA, it is advantageous to have the feedback polynomial of the same degree as the
available register, hence we are interested in a feedback polynomial of a pre-specified degree. At
times, we might want to find a feedback polynomial fast, even if the resulting MISR requires
extra flip-flops over the optimum.
We assume the following test scenario. The input sequence to the CUT has been designed
so that the effective output polynomial due to any target fault is different from the effective
polynomial of the good response, i.e. all the error polynomials are non-zero. Let r be the
effective polynomial of the good response, then the effective polynomial due to fault i can be
represented as r + h_i. By the linearity of the remaindering operation, we get a different remainder
for this erroneous polynomial iff h i is not divisible by the feedback polynomial. We assume we
are given the error polynomials for each of the target faults.
The problem we deal with in this paper is the following: given a set of polynomials
find a polynomial that is relatively prime to all the polynomials of H. Such
a polynomial will be referred to as a non-factor of H. If a non-factor is used as the feedback
polynomial for the compacting MISR, zero-aliasing is achieved for the set of target faults. In
particular, for irreducible and primitive feedback polynomials we present (1) upper bounds on the
smallest degree zero-aliasing MISR; (2) procedures for selecting a zero-aliasing MISR with the
smallest degree; (3) procedures for determining whether a zero-aliasing LFSR of a pre-specified
degree exists, and if so, finding one; and (4) procedures for fast selection of a zero-aliasing MISR.
We analyze the worst case as well as expected time complexity of the proposed procedures.
A note on notation. When using logarithmic notation, ln x will denote the natural logarithm
of x and log x will denote the base 2 logarithm of x. The polynomials fh i g represent the error
polynomials. The degree of h i is represented by d i . The product of the polynomials in H is
denoted by h, and the degree of h is d h . For each h i , the product of the distinct, degree j,
irreducible factors of h i is denoted by g i;j , with d i;j being the degree of g i;j . The product, over
all i, of the polynomials g_{i,j} is denoted by g_j. The non-factor we seek will be referred to as a(x), with d_a representing the degree of a.
The rest of this paper is organized as follows. In Section 2 we establish upper bounds on
the degree of a non-factor. In Section 3 we review polynomial operations over GF [2] and their
complexities. Section 4 presents procedures for finding a non-factor of smallest degree for the
set H. Section 5 presents procedures for finding a non-factor of a pre-specified degree and for
finding a non-factor fast. We also discuss the effectiveness of conducting an exhaustive search for
a least degree non-factor. Section 6 presents some experimental data. We conclude in Section
7.
2 Bounds on the least degree non-factor of a set of polynomial
Consider the following problem.
Problem 1: Let H be a set of |H| polynomials h_1(x), h_2(x), ..., h_{|H|}(x), and let d_h denote the degree of their product h(x) = Π_i h_i(x). Give an upper
bound s(d h ) on the degree of an irreducible polynomial and an upper bound p(d h ) on the degree
of a primitive polynomial that does not divide h, i.e. there exists an irreducible (primitive)
polynomial of degree at most s(d h ) (p(d h )) that does not divide h.
Similarly, let es(H) (ep(H)) be the expected degree of an irreducible (primitive) polynomial
that is a non-factor of H.
The bounds s(d) and p(d) will be referred to as the worst case bounds while the bounds
es(H) and ep(H) will be referred to as the expected bounds. We first establish the worst case
bounds and then proceed with the expected bounds.
2.1 The worst case bounds
For the bound on s(d) we follow [10]. Let I 2 (j) denote the number of irreducible polynomials
of degree j over GF[2]. The degree of the product of all irreducible polynomials of degree j is j I_2(j). Let s(d) denote the least integer such that Σ_{j=1}^{s(d)} j I_2(j) > d, and let Q_{s(d)} be the product of all the irreducible polynomials of degree less than or equal to s(d). The degree of Q_{s(d)} is greater
than d. Replacing d with d h , Q s(d h ) has at least one root that is not a root of h, hence Q s(d h )
has at least one irreducible factor that is not a factor of h. Thus, s(d h ) is an upper bound on
the degree of an irreducible polynomial that is relatively prime to all the polynomials in the set
H. The following lemma provides a bound on s(d_h).
Lemma 1: s(d) ≤ ⌈log(d + 1)⌉.
We turn to find the bound on p(d). The number of primitive polynomials of degree m over GF[2] is φ(2^m - 1)/m, where φ(q) is the Euler function denoting the number of integers less than and relatively prime to q and ([12, p. 37])

φ(q) = q Π_{i=1}^{l} (1 - 1/p_i),

where the p_i's are all the distinct prime factors of q.
Lemma 2: [16, p. 173]

φ(q) > q / ( e^γ ln ln q + 5/(2 ln ln q) )

for all q ≥ 3, with the only exception being q = 2·3·5·7·11·13·17·19·23 (the product of the first nine primes), for which 5/2 is replaced by 2.50637.
Lemma 3: For q ?
2:08 log log q
Proof: We first prove the case for q ? 65. By Lemma 2
2:08 log log q
For Equation (1) can be verified directly.
To help us derive the bound on p(d) we introduce the value - (t). Let - (t) denote the least
integer such that the ratio between - (t) times the number of primitive polynomials of degree
- (t) and t times the number of irreducible polynomials of degree t is greater than 1, i.e.
Lemma 4: [10, Lemma 3, p. 293] For t - 3
2:
Lemma 5: For log log 2te:
Proof: By the definition of - (t), it can be verified that the expression in the Lemma is not
valid for but is valid for 2. We now prove the case for t ? 2.
For q - 4, the function q
2:08 log log q is an increasing function. Hence
2:08 log log(q) ?
2:08 log log
Also,
2:08 log log(q
2:08 log log q
thus, since 1 ? 1
2:08 log log q ,
2:08 log log(q \Gamma 1)
2:08 log log q
be the least integer such that 2 - 0
2:08 log log 2 - 0 (t)
. By (2)
2:08 log log(2 - 0
By Equation (1), for - (t) ? 5, we get
and due to Lemma 4
Thus, by the definition of - (t), we have that - 0 (t) - (t). To bound - (t) from above, we solve
By definition, - 0 (t) must satisfy
2:08 log - 0 (t)
By setting - 0 log log 2te, we have
log log(2t)e
and for t ? 2
Thus, for t ? 2, - 0 log log 2te satisfies (3), hence
log log 2te:
Lemma p(d) denote the least integer such that
d, then for d ?
Proof: By the definition of - (t) and Lemma 5
log(2dlog(d+1)e)e
By Lemma 1 and the definition of s(d)
Example 1: In Table 1 the values of φ(2^m - 1) (the degree of the product of all the primitive polynomials of degree m) and Σ_{i=2}^{m} φ(2^i - 1) (the degree of the product of all the primitive polynomials of degree 2 ≤ i ≤ m) are tabulated. As long as d is less than the
maximum value in the table, p(d) can be obtained from the table, instead of using Lemma 6.
For example, if the number of modeled faults in the CUT is and the length of the
test sequence is . The degree of the product of all primitive polynomials
with degree less than or equal to 33 is the first which is greater than
Thus, a zero-aliasing LFSR with a primitive feedback polynomial, of degree at most 33, exists
for the CUT. On the other hand, using the bound of Lemma 6 we get p(d h
A closer look at table 1 shows that the product of all the primitive polynomials of degree less
than or equal to 53 has degree D greater than 1:4 . Thus, as long as the product of the
number of faults and the test sequence length is less than D (which is the case for all practical
test applications) a zero-aliasing MISR of degree less than or equal to 53 exists.
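The quantities behind Table 1 and the bounds above are easy to compute for moderate degrees. The sketch of ours below evaluates I_2(m) by Moebius inversion and φ(2^m - 1) by trial-division factoring, and then reads off the smallest degree whose cumulative product degree exceeds a given d; trial division keeps it practical only up to roughly m = 40.

def mobius(n):
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def num_irreducible(m):        # I_2(m) by Moebius inversion
    return sum(mobius(m // d) * 2 ** d for d in range(1, m + 1) if m % d == 0) // m

def euler_phi(q):              # trial-division totient, adequate for q = 2^m - 1, m <= 40
    result, p = q, 2
    while p * p <= q:
        if q % p == 0:
            while q % p == 0:
                q //= p
            result -= result // p
        p += 1
    if q > 1:
        result -= result // q
    return result

def least_degree_bound(d, primitive=False, max_m=40):
    """Smallest m whose cumulative product degree exceeds d:
    sum of j*I_2(j) for the irreducible case, sum of phi(2^j - 1) for the primitive case."""
    total = 0
    for m in range(2 if primitive else 1, max_m + 1):
        total += euler_phi(2 ** m - 1) if primitive else m * num_irreducible(m)
        if total > d:
            return m
    return None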
2.2 The expected bounds
In deriving the expected bounds we assume that the polynomials fh i g are random polynomials.
Denote the product of the distinct irreducible factors of degree j of h i by g i;j . Denote the
number of distinct irreducible factors of h i , of degree j, by v. The value of v can range from 0
to minfbd i =jc; I 2 (j)g.
Lemma 7: For j - 2, the expected value of v (the number of irreducible, degree-j, factors of
than or equal to 1
.
Proof: Let IR 2
be the set of irreducible polynomials of degree j over GF [2]. For
a given polynomial q, of degree greater or equal to j, define the indicator function d(p i ; q) to be
one if p i divides q and zero otherwise. The probability that a polynomial of degree j divides a
random polynomial of degree greater or equal to j is 2 \Gammaj , hence the probability that
is equal to 2 \Gammaj . Thus
The same type of analysis can be used to bound V ar[v], the variance of v, and oe v , the
standard deviation of v.
Lemma 8: For j - 2, the variance of the number of irreducible factors of g i;j is less than 1
.
The standard deviation is less than
.
Proof: The variance of v is given by V
I 2 (j)
i!k
I 2 (j)
i!k
I 2 (j)
Hence
I 2 (j)
Having computed the mean and variance of the number of irreducible factors of degree j per
polynomial, we can compute a confidence measure for these results.
Lemma 9: For j - 4, the expected number of polynomials g i;j with more than 5 (50) factors
is less than jHj=100 (jHj=10; 000).
Proof: Using the Chebyshev inequality [8, p. 376]
the probability that v is greater than 5 is less than 0:01. Using this result we can
define a second random process in which the random variable x is 1 iff v is greater than 5 and
otherwise. This process is a Bernoulli experiment [6, Sec. 6.4]. The expected number of
i;j 's with more than 5 factors is upper bounded by jHj=100, as is the variance. Similarly, the
probability that v is greater than 50 is less than 0:0001 and the expected number of g i;j 's with
more than 50 factors is bounded by jHj=10; 000.
Lemma 10: The expected degree of the smallest irreducible non-factor of the set of polynomials
H is bounded from above by dlog jHje + 1.
Proof: Denote the product of the polynomials g i;j , 1 - i - jHj, by g j . By Lemma 7, the
expected number of (not necessarily distinct) factors of g j is less than jHj=j. The smallest j for
which I 2 (j) exceeds this value is an upper bound on the expected degree d a of a non-factor of
hence d a - ffi.
By applying Lemma 5 on the result of Lemma 10, we have
Corollary 11: The expected degree of the smallest primitive non-factor of the set of polynomials
H is bounded from above by 2
Example 2: Using the numbers of Example 1, let . The first j for which
exceeds jHj=j is hence we expect to find a zero-aliasing MISR
with a primitive feedback polynomial of degree less than or equal to 14, as opposed to the worst
case of 33. Corollary 11 would give us an upper bound of 19.
As the expected bound is a only a function the number of faults and not the length of the test
sequence, the expected degree of a zero-aliasing MISR will never exceed 53. In fact, as long as
the number of faults is less than 1 million, we expect to find a zero-aliasing MISR of degree less
than or equal to 21.
3 Polynomial operations in GF [2]
In search for a (least degree) non-factor of H we use procedures that sift the factors of the same
degree from a given polynomial. These procedures are based on the following lemma.
Lemma 12: [12, Lemma 2.13, p.48] Over GF[2], x^{2^m} - x is the product of all irreducible polynomials of degree l, where l is a divisor of m.
Thus, a basic step in finding the distinct irreducible factors of a polynomial b(x) is the computation of gcd(x^{2^m} - x, b(x)). The result of this operation is the product of all the irreducible factors of degree l, where l|m, of b(x). For most polynomials b(x) of interest to us, 2^m >> deg(b(x)). Therefore, we first compute r(x) = x^{2^m} mod b(x) and then gcd(r(x) - x, b(x)).
In analyzing the complexity of our proposed procedures, we rely on the following results which
are stated in greater detail in the appendix.
The complexity of a polynomial gcd operation is O(M(s) log s) [1, pp. 300-308], where
s is the degree of the larger polynomial operand and M(s) is the complexity of polynomial
multiplication, where the product has degree s. The complexity of polynomial division is also
We considered two multiplication algorithms. The first algorithm is due to Sch-onhage [17].
Its complexity is O(s log s log log s). The second algorithm is suggested by Cormen et al. [6,
p. 799]. Its complexity is O(s log s). In the sequel we shall use the notation O(M(s)) for
the complexity of polynomial multiplication. Whenever possible it will mean s log s, otherwise
it should be taken as s log s log log s. Similarly the notation L(s) will denote either log s or
log s log log s, as appropriate.
The cost of finding the remainder of x^{2^m} when divided by b(x), without actually carrying out the division, is [3] [15] O(mM(s)), where s = deg(b(x)). Thus, gcd(x^{2^m} - x mod b(x), b(x)) can be computed in O(mM(s) + M(s) log s).
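A direct rendering of this step is sketched below (ours, not the paper's Figure code); polynomials over GF[2] are encoded as integers with bit i holding the coefficient of x^i, and the helper routines are written out rather than taken from any library.

def gf2_mul(a, b):
    """Carry-less multiplication of two GF(2) polynomials (bit i = coefficient of x^i)."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result

def gf2_mod(a, b):
    """Remainder of a divided by b over GF(2)."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

def gf2_gcd(a, b):
    while b:
        a, b = b, gf2_mod(a, b)
    return a

def x_pow_2m_mod(m, b):
    """x^(2^m) mod b(x), computed with m repeated squarings."""
    r = gf2_mod(0b10, b)                  # x mod b(x)
    for _ in range(m):
        r = gf2_mod(gf2_mul(r, r), b)
    return r

def factors_of_degree_dividing_m(b, m):
    """gcd(x^(2^m) - x, b): the product of b's irreducible factors whose degree divides m."""
    return gf2_gcd(x_pow_2m_mod(m, b) ^ 0b10, b)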
4 Finding a non-factor of smallest degree for a given set
of polynomials
After establishing the bounds on the least degree non-factor of H in Section 2, this section
addresses the question of finding a least degree non-factor for H.
Problem 2: Given a set of polynomials H = {h_1(x), h_2(x), ..., h_{|H|}(x)}, with deg(h_i) = d_i, let h(x) = Π_{i=1}^{|H|} h_i(x) and d_h = deg(h) = Σ_i d_i. Find an irreducible (primitive) polynomial a(x), with deg(a) = d_a, such that
1. For all i, h_i ≢ 0 mod a (equivalently, h ≢ 0 mod a).
2. For all irreducible (primitive) polynomials b(x), with deg(b) < d_a, h ≡ 0 mod b (or equivalently, there exists an i for which h_i ≡ 0 mod b).
One way of solving the problem is by factoring the polynomials of H. This would require
too much work, since we do not need to know all the factors in order to find a non-factor. We
only need to know the "small" factors.
In this section we present algorithms for solving Problem 2 and analyze their complexity.
The complexity is given in two forms. The first is with worst case complexity bounds, referred
to as the worst case complexity. The second is with expected complexity bounds, referred to as
the expected complexity. The expected complexity is a refinement of the worst case complexity
based on the expected size of the results from our procedures.
By Lemmas 1 and 6 (Section 2), we have an upper bound
depending on whether we are looking for an irreducible or a primitive non-factor. Using this
bound, we begin our search process, which is made up of three phases.
1. For all h i 2 H, find g i;j (x), the product of all distinct irreducible (primitive) factors of h i ,
of degree j.
2. Having found the polynomials g i;j , determine whether all irreducible (primitive) polynomials
of degree j are factors of H.
3. If not all irreducible (primitive) polynomials of degree j are factors of H, find one that is
not.
The worst case complexities of the three phases for the irreducible case are O(jHju 2 M(n)),
log n) and O(jHj 2 n 2 u 2 M(u)). The dominant term is O(jHj 2 n 2 u 2 M(u)). The
worst case complexities of the three phases for the primitive case are O(jHju 3 M(n)), O(jHj 2 \Delta
log n) and O(jHj 2 n 2 u 3 M(u) log log u). The dominant term is O(jHj 2 n 2 u 3 M(u) log log u).
The expected complexity of the first two phases are O(jHju 2 M(n)) and O(jHj log jHju 2 \Delta
log n). The expected complexity for the third phase is O(jHj log jHjd a M(d a )) to find an
irreducible non-factor and O(jHj log jHjd 2
a log log d a M(d a )) to find a primitive non-factor. The
dominant term is O(jHju 2 M(n)).
The worst case complexity is a function of jHj 2 n 2 multiplied by terms that are logarithmic
in jHj and n whereas the expected complexity is a function of jHjn multiplied by terms that
are logarithmic in jHj and n.
4.1 The product of all distinct factors of the same degree for a
given polynomial
Given the polynomial h i (x) and the upper bound u, we wish to compute g i;j , the product of all
distinct factors of h_i of degree j, for 1 ≤ j ≤ u. The procedure for computing the polynomials g_{i,j} is given in Figure 2. The polynomials g_{i,j} are computed in three steps. First, for u/2 < j ≤ u we compute g_{i,j} := gcd(h_i(x), x^{2^j} - x). Each g_{i,j} is a product of all the distinct irreducible factors of h_i(x) of degree j and of degree l, where l|j.
When j is less than or equal to u=2, we have 2j - u. By Theorem 12, g i;2j contains the
product of all irreducible factors of degree l, where ljj, of h i . Since the degree of g i;2j is (much)
less than the degree of h_i, it is more efficient to compute g_{i,j} from g_{i,2j} than from h_i. Thus, in Step 2, for j = u/2, u/2 - 1, ..., 1 we compute g_{i,j} := gcd(g_{i,2j}, x^{2^j} - x).
At the end of Step 2, each g i;j contains all the factors of degree ljj of h i . To sift out the
factors of degree less than j from g i;j , we need to divide g i;j by g i;l , where l ranges over the set
of divisors of j. This is carried out in Step 3.
Procedure distinct factors() is not enough when we are looking for a primitive non-factor.
At the end of the procedure, each g i;j is the product of all distinct irreducible polynomials of
degree j, that are factors of h i . From g i;j we need to sift out the non-primitive factors. Before
describing this aspect, we introduce the notion of maximal divisors.
Let q be a positive integer with p_1, p_2, ..., p_l being the distinct prime factors of q. The set of maximal divisors of q is the set md(q) = {q/p_1, q/p_2, ..., q/p_l}. For example, md(12) = {6, 4}; when q has only one prime factor p, md(q) contains the single element q/p.
A polynomial over GF[q] of degree m is irreducible iff it divides x^{q^m} - x and does not divide x^{q^k} - x for all maximal divisors k of m. It is primitive of degree m iff it is irreducible and does not divide x^l - 1 for all l in md(q^m - 1) [12, Ch. 3]. Procedure distinct_primitives(), shown in Figure 3, sifts out the non-primitive factors of g_{i,j}.
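The maximal-divisor test can be written on top of the GF[2] helpers sketched after Section 3. The code of ours below removes from a product g of distinct degree-j irreducible factors every factor that divides some x^l - 1 with l a maximal divisor of 2^j - 1, which is the sifting performed by Procedure distinct_primitives().

def gf2_div(a, b):
    """Quotient of a divided by b over GF(2)."""
    q, db = 0, b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        shift = a.bit_length() - 1 - db
        q |= 1 << shift
        a ^= b << shift
    return q

def maximal_divisors(q):
    divs, p, n = [], 2, q
    while p * p <= n:
        if n % p == 0:
            divs.append(q // p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        divs.append(q // n)
    return divs

def x_pow_mod(e, g):
    """x^e mod g(x) over GF(2), by square-and-multiply."""
    result, base = 0b1, 0b10
    while e:
        if e & 1:
            result = gf2_mod(gf2_mul(result, base), g)
        base = gf2_mod(gf2_mul(base, base), g)
        e >>= 1
    return result

def sift_primitive(g, j):
    """Divide out of g every degree-j factor that also divides x^l - 1 for some
    maximal divisor l of 2^j - 1, i.e. every non-primitive factor."""
    for l in maximal_divisors(2 ** j - 1):
        common = gf2_gcd(x_pow_mod(l, g) ^ 0b1, g)   # gcd(x^l - 1, g)
        if common > 1:
            g = gf2_div(g, common)                   # g is squarefree, one division suffices
    return g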
Lemma 13:
1. The complexity of Procedure distinct factors() is O(u 2 M(n)).
2. The complexity of Procedure distinct primitive() is O(u 3 M(n)).
3. The complexity of the first phase is O(jHju 2 M(n)) for the irreducible case and O(jHju 3 \Delta
M(n)) for the primitive case.
In the above expressions for the irreducible case and for the primitive case.
Proof:
1. The worst case complexity of Procedure distinct factors() is as follows. In Step 1, the
procedure performs u=2 gcd computations involving h i . The complexity of each gcd computation
is O(jM(d i Thus the total work for the first stage is
' u
In Step 2 the procedure carries out u=2 gcd operations. The work required for this step is
In Step 3, for every element of the sets of divisors, the procedure performs a division
operation. The cost expression is
O(M(d
O(jM(d i;j
2. The complexity of Procedure distinct primitive() is as follows. Each iteration of Procedure
distinct primitives() reduces i;j and performs one gcd and one
division operation. The cost of each iteration is O(jM(d i;j There
are u, and we run the procedure u times.
Therefore, the additional work for the primitive case is bounded by O(u 3 M(d i )).
In most cases, the values d i;j will be (much) less than n, hence the actual work will
be much less then O(u 2 M(n)) and the dominant factor will be Step 1 of Procedure
distinct factors().
3. Over the set H, based on 1 and 2, the complexity of the first phase is (jHju 2 M(n)) for the
irreducible case and (jHju 3 M(n)) for the primitive case. The value of u is either s(d h ) or
corresponding to either the irreducible or primitive case.
Lemma 14: The expected complexity of the first phase is O(jHju 2 M(n))) with u equal to
either es(H) or ep(H).
Proof: The expected complexity of Procedure distinct factors() is dominated by the complexity
of Step 1, which is O(u 2 M(n)). The difference in the complexity of the other steps,
over the worst case, comes from using the expected size of the d i;j s, instead of their worst
case size, which is equal to n. The expected complexity of the procedure (including Procedure
distinct primitive()), over the set H is, thus, O(jHju 2 M(n)), with u equal to either es(H) or
ep(H).
4.2 The number of all distinct factors, of the same degree, for a
set of polynomials
After the first phase, for all degrees 1 - j - u, we have jHj polynomials g i;j , each a product of
the distinct irreducible (primitive) factors of degree j of h i . Some of the g i;j 's might equal
some pairs might have factors in common. Our goal is to find a least degree non-factor of H.
First we must determine whether all irreducible polynomials of degree j appear in
This is the second of our three phases (page 13). A simple test is to compare deg(g j )
with I 2 (j). If deg(g j )
then there is a non-factor of degree j. For the primitive case we
compare with OE(2 j \Gamma1)
.
(j), the only way to determine whether all irreducible (primitive) polynomials
of degree j are factors of g j is to find those factors that appear in more than one of the g i;j 's
and to eliminate all their appearances except for one.
We considered two methods for removing repeated factors. The first is referred to as the lcm
method and the second is referred to as the gcd method. The lcm method will be shown to be
faster, but it also requires more space, which might not be available.
In the lcm method we first sort the g i;j s according to their degrees and then place them in
the sets s k , where g i;j 2 s k iff 2 . The sets fs k g are ordered according to
their index, in increasing order. We then begin computing lcms of two polynomials taken from
the first set. If this set has only one polynomial we take the second polynomial from the next
set. The resulting lcm polynomial is placed in the set corresponding to its degree. This process
ends when we are left with one polynomial, representing the lcm of all the polynomials g i;j .
In the gcd method the polynomials g i;j are sorted by their degrees. In each iteration the
polynomial with the highest degree is taken out of the set and and all pairwise gcds between
itself and the other polynomials are taken. If the gcd is greater than 1, the other polynomial is
divided by this gcd. At the end of the iteration none of the remaining polynomials in the set
has a factor in common with the polynomial that was taken out. Thus, when the procedure
ends, no factor appears in more than one of the g i;j s.
Lemma 15:
1. The complexity of the second phase is O(jHj 2 M(n) log n).
2. The expected complexity of the second phase is O(jHj log 3 jHjL(n) log(jHjn)).
Proof:
1. We can bound the work required for the lcm method as follows. First assume jHj and d i;j
are powers of 2 (if they are not, for bounding purposes increase them to the nearest power
of 2). Also, assume the polynomials are leaves of a binary tree. All the polynomials in the
same level have the same degree (each level corresponds to a different set s k ). Assume that
in every lcm step, the degree of the lcm is the sum of the degrees of its two operands (i.e.
the operands are relatively prime). The maximum degree the final lcm can have is jHjn and
computing this lcm costs O(M(jHjn) log(jHjn)). Computing the two lcm's of the next to
last level costs at most O(2 \Delta M(jHjn=2) \Delta log(jHjn=2)). In each lower level there are at most
twice as many lcm's being computed but each costs less than half the cost of the level above
it, hence the total cost is bounded by O(log(jHjn)M(jHjn) log(jHjn)) - O(u 2 M(jHjn)).
To use the lcm method we need enough memory to store the final lcm. If we do not have
the required memory, we use the gcd method. The work required is O(jHj 2 M(n) log n).
2. When taking into account the expected size of the polynomials g i;j , factorization becomes
practical. The factoring algorithm used is that of Cantor and Zassenhaus [4]. The complexity
for factoring a product of r distinct irreducible polynomials of the degree j is given
by O(rM(rj)(j log(rj)). By Lemma 9, the expected number of polynomials g i;j that
have more than 5 factors is less than jHj=10 2k+2 . If we take the number of polynomials
with factors to be 99jH j
polynomials with at most 5 factors are assumed
to have 5, all polynomials with are assumed to have 50, etc.), then the
expected work required to factor all the polynomials is bounded by
OB
99jHj
By using the fact that 5j
bound the sum by
OB
When the factorization is completed, all the irreducible factors can be sorted in time
O(jHj \Delta log jHj) and the unique factors can be counted.
Summing over log n)). Since u -
log jHj, the expression becomes O(jHj log 3 jHjL(n) log(jHjn)).
4.3 Finding a non-factor
We are now at the third phase, where we know the smallest degree d a for which there exists
a non-factor for h. We also have, m - jHj polynomials g i;d a that are products of distinct
irreducible (primitive) factors of h, all g i;d a 's are pairwise relatively prime and every irreducible
(primitive) factor of degree d a of h is a factor of one of these polynomials. We want to find an
irreducible (primitive) polynomial of degree d a that is a non-factor of H.
One approach is to divide the product of all irreducible (primitive) polynomials of degree d a
by the product of all m polynomials and find a factor of the result. This might pose a problem
if we do not have the product at hand, i.e. only the polynomials g i;d a , or if the product is too
large to handle as one polynomial.
Another way is to randomly select irreducible (primitive) polynomials and check whether
they are factors or non-factors. The only way to check is by doing the actual division. This
division, however, will be regular long division, and not FFT division, whenever the divisor
has very small degree compared to the degree of the dividend. If an irreducible (primitive)
polynomial is relatively prime to all of the g i;da 's, it is a non-factor. If it divides at least one of
the polynomials, we can keep the result of the division and reduce our work in upcoming trials.
This reduction requires that polynomials do not repeat in the selection process.
Lemma
1. The complexity of finding a non-factor once d a is known is O(jHj
a M(d a )) for the
irreducible case and O(jHj 2 n 2 d 3
a M(d a ) log log d a ) for the primitive case.
2. The expected complexity is O(jHj log jHjd a M(d a )) for the irreducible case and O(jHj \Delta
log jHj \Delta d 2
a log log d a M(d a )) for the primitive case.
Proof:
1. The procedure generates random polynomials, checks them for irreducibility (primitivity)
and whether they are factors or not. The expected number of random polynomials that
are tested for irreducibility (primitivity) before an irreducible (primitive) polynomial of
degree d a is found is d a =2 ( dalog log d a ) [15]. The work required to test each polynomial
for irreducibility is O(d a M(d a
a M(d a ))) [15]. The sum of the d i;j 's cannot exceed
jHjn, therefore after at most jH jn
da irreducible polynomials are tried, a non-factor is found.
The work involved with each try is jHjn \Delta d a (long division). Thus, the expected work
required to find a non-factor is O(jHj
a M(d a )). For the primitive case the work is
a M(d a ) log log d a ).
2. If the polynomials g i;j were factored (see proof of Lemma 15,(2)), once d a is known, we draw
irreducible (primitive) polynomials until a non-factor is found. We expect no more than
jHj=d a factors. When an irreducible (primitive) polynomial is drawn, it takes O(log jHj)
to check whether it is a factor or not. Hence, the expected work required to find a non-
once d a is known, is bounded by O(jHj log jHjd a M(d a )) for the irreducible case and
O(jHj log jHjd 2
a M(d a ) \Delta log log d a ) for the primitive case.
5 Practical scenarios
In this section we discuss some practical scenarios for finding zero-aliasing polynomials. First,
when we want a non-factor of a pre-specified degree. Second, when we want to find a non-factor
fast. Third, we compare our algorithm for finding a least degree non-factor with an exhaustive
search over all irreducible (primitive) polynomials in ascending degrees. In some cases, this type
of search will be faster.
5.1 Finding a non-factor of a pre-specified degree
In cases where the register is required to function as both a RA and a PG, a non-factor of a
pre-specified degree is needed. Thus
Problem 3: Given a set of polynomials an
irreducible (primitive) non-factor of degree t for H.
This problem is exactly the same as finding the least degree non-factor, except that we only
need to consider the case of t, instead of iterating over all 1 - j - u. We first compute the
polynomials g i;t , then determine whether a non-factor of degree t exists, and if so find one.
Lemma 17:
1. The complexity of finding a non-factor of degree t is O(jHj for the irreducible
case and O(jHj 2 n 2 t 3 M(t) log log t) for the primitive case.
2. The expected complexity is O(jHjM(n)(t log n)).
Proof:
1. Computing the polynomials g i;t involves computing g
for each
l 2 md(t) computing f
. The cost of the first gcd
computation is O(tM(d The cost of the jmd(t)j subsequent gcd and
divisions is bounded by O(log t(tM(d i;t ) +M(d i;t ) log d i;t )). Substituting n for d i and d i;t
we get O(log t log n)).
Once we have the polynomials g i;t , we need to sift out multiple instances of the same irreducible
polynomial. When using the gcd method, this has a worst case cost of O(jHj 2 M(n) \Delta
log n).
At this stage, we know whether a non-factor of degree t exists or not. If one exists, we
carry out phase 3. This has a worst case complexity of O(jHj 2 This is the
dominant term for the whole process. The analysis is the same for the primitive case,
hence the worst case complexity of finding an irreducible (primitive) non-factor of a given
degree t for a set of polynomials H is O(jHj log log t)).
2. We turn to analyze the expected complexity. For each h i , we compute g
x). This costs O(jHjM(n)(t log n)). The cost of sifting out the factors of degree less
than t from the g i;t 's, based on the expected number of factors for each degree, will be
insignificant. Factoring and sorting the polynomials in the second phase has expected cost
of O(jHj log jHjt log n)) (Eq. (5)). The expected number of distinct irreducible
factors of degree t of H is bounded by jHj=t. Thus, the cost of finding a non-factor at this
stage which consists of drawing at most jHj
irreducible (primitive) polynomials, each at
an expected cost of O( ttM(t)) (O( tlog log t \Delta t 2 M(t))), and checking it against the list of
factors, is bounded by O( jHj
ttM(t) log(jHj=t)) for the irreducible case and O(jHj(log jHj \Gamma
log t)t 2 M(t) log log t) for the primitive case. Hence, the expected complexity of finding a
non-factor of degree t for H is bounded by O(jHjM(n)(t log n)).
5.2 Finding a non-factor fast
Problem Given a set of polynomials
find an irreducible (primitive) non-factor of H in less than 2 c tries.
The sum of the degrees of all irreducible (primitive) polynomials of degree less than or equal
to s(d h ) (p(d h )) is greater than d h . If we look at
then
'P u
and if we draw uniformly from all irreducible (prim-
itive) polynomials of degree u, after 2 c drawings we expect to find a non-factor. The expected
work cost for this case is O(2 c \Delta which is the cost of 2 c iterations
of drawing a polynomial and testing for irreducibility, and once one is found dividing all
jHj polynomials by this candidate non-factor, using long division. For the primitive case this
becomes O(2 c \Delta (u 3 M(u) log log u
Example 3: Using the numbers in Example 1 again, say we want to find a non-factor in no
more than 8 tries. We compute the bound p
and draw from all the primitive polynomials
up to the computed bound. If we use table 1, we see that instead of looking at the polynomials
of degree less than or equal to 33, we need to consider all primitive polynomials of degree up to
34. In general, 2 c
- 2, hence by Lemma 6, we only have to consider polynomials of degree
greater by at most 2 than for the case when we want the minimum degree non-factor.
We can also use the expected bounds es(d) and ep(d) to lower the degrees of the candidate
non-factors.
5.3 Exhaustive search
In this subsection, we compare our algorithms with an exhaustive search for a least degree
non-factor. We will look at the irreducible case.
Assume the least degree irreducible non-factor has degree d a . Also, assume we have a list of
all irreducible polynomials in ascending order. The number of irreducible roots of degree j is
less than 2 j . We can bound the work required to find the non-factor, by an exhaustive search,
by O(jHjn2 da+1 ). Using the expected bound on d a (d a = O(log jHj)), we can bound the work by
n). The expected work required to find the least degree non-factor, by our algorithms,
is O(jHju 2 M(n)), which becomes O(jHj log 2 jHj \Delta n log n) when we substitute in the value of u.
Not taking into account any of the constants involved with these two results, the ratio of the
work required for an exhaustive search, relative to the work required for our algorithm is
log 2 jHj log n
Assuming this ratio is less than 1 for n ? 1210. Assuming the ratio is
less than 1 for n ? 124; 500. For the ratio is less than 1 for n ? 365; 284; 284. This
suggests that when the number of faults of interest is "small" (less than 1024) an exhaustive
search might be more efficient than our algorithms. However, as the number of faults increases,
our algorithms are more efficient for test sequences of realistic length. Finally, when the number
of faults is greater than 4096, then for all practical test lengths our algorithms will be more
efficient than the simple exhaustive search.
6 Experimental results
The following experiments were conducted to verify our results. The experiments were conducted
on a HP-700 workstation.
6.1 Random selections based on the absolute bounds
An experiment was set up as follows. We generated a set of 1000 random polynomials of degree
at most 200; 000. This corresponds to a CUT with 1000 faults, i.e. 1000, and a test length
of 200; 000, i.e. The degree of the product of these polynomials (d h ) was less than
or equal to 200; 000; 000. We wanted a probability greater than 1=2 of finding a non-factor
with just one drawing of a primitive polynomial. By looking at table 1, we can achieve this
by selecting from the set of all primitive polynomials of degree less than or equal to 29. The
polynomials were drawn in a 2 step process. The first step selected the degree of the primitive
candidate, the second selected the candidate. In the first step we selected a number and
took its value modulo the number of primitive roots in the fields GF [2] through GF [2 29 ]. The
result was used to determine the degree of the primitive candidate, by looking at the first field
GF [2 d ] such that the number of primitive roots in the fields GF [2] through GF [2 d ] is greater
than the result. The selection of the actual polynomial was done by setting the coefficients
of by a LFSR with a primitive feedback polynomial of degree d \Gamma 1 that was
initialized to a random state. This guarantees that no candidate will be selected twice and all
candidates will have a chance at being considered. The candidates were tested for primitivity
and if they were primitive, they were tested for being non-factors. If at some point they were
found to be factors, the search continued from the current state of the degree
We ran 200 such experiments. In all 200 experiments the first primitive candidate turned
out to be a non-factor. Of the non-factors that were found, 1 was of degree 21, 2 were of degree
22, 3 of degree 23, 2 of degree 24, 7 of degree 25, 13 of degree 26, 32 of degree 27, 35 of degree
28 and 105 were of degree 29.
The number of polynomials that were tested for primitivity before one was found ranged
from 1 to 160. The average number was 16. The time it took to find a primitive polynomial
ranged from 0.01 seconds to 0.79 seconds. The average time was 0.104 seconds. It took between
153.25 and 166.68 seconds to find a non-factor, with the average being 160.50 seconds.
These experiments show that given the error sequences for each of the faults of interest, it
is very easy to find a zero-aliasing polynomial for a circuit.
6.2 Random selections based on the expected bounds
Based on our expected bounds, Corollary 11, we should be able to find a non-factor of degree at
most 14. We ran 100 experiment as above, only this time, we selected only primitive polynomials
of degree 11 (the expected bound based on table 1). The first primitive candidate that was
selected was a non-factor in 66 of the 100 experiments. 19 experiments found the non-factor
with the second candidate, 11 with the third, 2 with the fourth, 1 with the fifth and 1 with the
sixth. We ran 100 experiments selecting only primitive candidates of degree 9. The number of
primitive candidates that were tried before a non-factor was found ranged from 1 to 28. The
average number of candidates was 7:5.
To test the tightness of our expected bound, we ran 126 experiments. In which 1024 random
polynomials of degree at most 200,000 were generated and an exhaustive search, in increasing
order of degrees, was conducted to find the least degree non-factor. By our expected bound,
this least degree should be less than 14. In one experiment, the least degree was 7. In 35 it was
8 and in the remaining 90 experiments, the least degree was 9.
From these experiments we conclude that when the error polynomials are in fact random
polynomials, the expected bounds, based on the analysis of the expected number of factors of a
certain degree for a random polynomial, are in fact upper bounds on the least degree non-factor
for a set of polynomials. As expected, the bounds obtained from table 1 are tighter than those
from Corollary 11.
6.3 Experiments on benchmark circuits
We tried our worst case and expected bounds on error sequences of two circuits from the Berkeley
synthesis benchmarks [2]. The first circuit was in5, the second was in7. We used a fault simulator
that did not take into account any fault collapsing, hence the number of faults was twice the
number of lines in the circuit (for stuck-at-0 and stuck-at-1 faults on each line).
For circuit in5 there were 1092 faults, six of which were redundant, hence there were 1086
detectable faults. The circuit had 14 primary outputs. We used a test sequence of length 6530
that detects all the non-redundant faults and computed the effective output polynomials of all the
faults. All were non-zero, hence there were no cancellation of errors from one output by errors of
another output. Thus we had 1086 error polynomials of degree at most 6543. From table 1, the
worst case bound on the degree of a primitive non-factor is 23. To draw a primitive non-factor
with probability greater than 1we need to consider all primitive polynomials of degree 24 or
less. We conducted 20 experiments of drawing zero-aliasing primitive polynomials, based on our
worst case bounds. In all experiments, the first candidate was a non-factor. We then conducted
another 20 experiments, this time drawing primitive polynomials of degree 14, the size of the
register available at the circuit outputs. In all experiments the first candidate was a non-factor.
Based on our expected bounds (table 1), we should find a non-factor of degree 11 or less. We
tried finding non-factor of degree 11, 9 and 7. For the degree 11 experiments, in 17 of 20 cases,
the first primitive candidate was a non-factor. Two experiments found the non-factor with the
second try, one with the third. We conducted 15 degree 9 experiments before considering all 48
primitive polynomials of degree 9. Of the 48 primitive polynomials of degree 9, 33 were factors,
and 15 were non-factors. The average number of candidates tried before a non-factor was found
was 3 1. All primitive polynomials of degree 7 were factors.
For circuit in7 there were 568 faults, 567 of which were non-redundant. The circuit has 10
primary outputs and we used a test sequence of length 9280. Using the worst case bounds,
to ensure selection of a primitive non-factor with probability greater than 1, we considered all
primitive polynomials of degree 24 or less. All 20 experiments found a non-factor with the first
candidate. The expected bound (table 1) for the degree of a primitive non-factor was 10. We
tried to find non-factors of degree 11 and 10 (the size of the register available at the outputs). All
degree 11 experiments found a non-factor with the first try. Of the
13 found a non-factor with the first try, 6 with the second and one with the third.
For both circuits we tried to find the least degree non-factor using an exhaustive search.
Since the fault extractor we used did not do any fault collapsing, some of the error polynomials
were identical. By summing the values of all non-zero erroneous output words for each simulated
fault, we found at least 292 different error polynomials in in7 and at least 566 different error
polynomials in in5. This would make our expected bounds (table 1) to be 9 for in7 and 10 for
in5. For both circuits the least degree non-factor had degree 8. It took 11 CPU minutes to find
each of these polynomials.
The experiments on the two benchmark circuits show that the assumption that the error
polynomials behave as random polynomials does not invalidate our analysis and results. The
expected bounds, as was the case for the random experiments, were upper bounds on the least
degree non-factor.
Conclusions
In this paper we presented procedures for selecting zero-aliasing feedback polynomials for MISR-
based RAs. When both PGs and RAs are designed as LFSRs/MISRs, our scheme, combined
with algorithms for selecting efficient feedback polynomials for pattern generation [11], enables
the selection of one feedback polynomial that serves both tasks, thus reducing the overhead of
reconfigurable registers.
We presented upper bounds on the least degree irreducible and primitive zero-aliasing polynomial
for a set of modeled faults. We showed that in all practical test applications such a
polynomial will always be of degree less than 53. In fact, by our expected bounds, when the
number of faults is less than 10 6 , this degree will be at most 21. In the experiments that were
conducted, a zero-aliasing polynomial of degree less than the expected bound was always found.
We also presented procedures for finding a zero-aliasing polynomial, when the objective is to
minimize the degree, to have a specific degree or speed. We analyzed the computational effort
that is required both under worst case conditions and expected conditions. A (partial) summary
of the results is presented in table 2. For both the worst case analysis and expected analysis,
table
2 shows the upper bounds on the smallest non-factor, the computational complexity of
finding a smallest non-factor and the complexity of finding a factor of a given degree. When
speed is a requirement, we showed we can find a zero aliasing polynomial with, on average, two
tries, by increasing the degree of the polynomials we consider by at most two over the upper
bound on the size of the minimum degree.
Based on our analysis and on our experiments, it is our conclusion that when the error
polynomials of the modeled target faults are available, zero-aliasing is an easily achievable goal.
Thus, to ensure high quality tests, a premium should be put on fault modeling, automated test
pattern generator design and fault simulation. With these tools available, zero-aliasing is not a
problem.
Acknowledgment
We wish to thank Professors L. A. Adleman, M. A. Breuer, D. J. Ierardi
and L. R. Welch for many helpful discussions. We also wish to thank the anonymous referees
for some very helpful comments.
--R
The Design and Analysis of Computer Algo- rithms
Logic Minimization Algorithms for VLSI Synthesis
Factoring Polynomials Over Large Finite Fields
A New Algorithm for Factoring Polynomials over Finite Fields
Introduction to Algorithms
Concrete Mathematics
Complexity of multiplication in finite fields
Test Embedding with Discrete Logarithms
Introduction to finite fields and their applications
On Achieving Zero Aliasing for Modeled Faults
A. New Framework for Designing and Analyzing BIST Techniques and
Probabilistic Algorithms in Finite Fields
The Book of Prime Number Records
Schnelle Multiplikation von Polynomen - uber K-orpern der Charakteristik 2
--TR
--CTR
Krishnendu Chakrabarty , Brian T. Murray , John P. Hayes, Optimal Zero-Aliasing Space Compaction of Test Responses, IEEE Transactions on Computers, v.47 n.11, p.1171-1187, November 1998
O. Novk , Z. Plva , J. Nosek , A. Hlawiczka , T. Garbolino , K. Gucwa, Test-Per-Clock Logic BIST with Semi-Deterministic Test Patterns and Zero-Aliasing Compactor, Journal of Electronic Testing: Theory and Applications, v.20 n.1, p.109-122, February 2004 | built-in self-test;linear feedback shift registers;response compaction;zero-aliasing;signature analysis |
627097 | An Algorithm for Exact Bounds on the Time Separation of Events in Concurrent Systems. | AbstractDetermining the time separation of events is a fundamental problem in the analysis, synthesis, and optimization of concurrent systems. Applications range from logic optimization of asynchronous digital circuits to evaluation of execution times of programs for real-time systems. We present an efficient algorithm to find exact (tight) bounds on the separation time of events in an arbitrary process graph without conditional behavior. This result is more general than the methods presented in several previously published papers as it handles cyclic graphs and yields the tightest possible bounds on event separations. The algorithm is based on a functional decomposition technique that permits the implicit evaluation of an infinitely unfolded process graph. Examples are presented that demonstrate the utility and efficiency of the solution. The algorithm will form a basis for exploration of timing-constrained synthesis techniques. | Introduction
In this paper, we derive an exact algorithm that
determines tight upper and lower bounds on the separation
in time of an arbitrary pair of system events.
Depending on the level of abstraction in the specifica-
tion, events may represent low-level signal transitions
at a circuit interface or control flow in a more abstract
behavioral view. If we are able to determine
the bounds on the separation in time of two events
then we can use this information to: simplify combinational
and sequential logic by extracting temporal
don't care information, verify that a logic implementation
meets specified timing constraints, identify and
remove hazards from asynchronous circuits, and focus
optimization efforts in data-path synthesis by generating
useful scheduling constraints. Thus, determining
the time separation of events is a fundamental problem
in the analysis, synthesis, and optimization of concurrent
systems.
We develop an efficient solution for determining
time separation bounds and also take into account the
effects of starting the system from a specific reset or
start state. We model a concurrent system as a cyclic
connected graph. The nodes of the graph represent
events and the arcs are annotated with lower and upper
bounds on delays between events. Currently, our
solution is limited to graphs without conditional be-
havior. However, that still leaves a large and useful
class of concurrent specifications to which our analysis
applies.
Other approaches to the problem of finding bounds
on the separation in time of two events have either
been inexact or based on a more restrictive graph
topology. Loose bounds that may not enable all possible
optimizations were obtained by [8]. Both [7] and
[10] can only handle acyclic graphs, and [2] only supports
a limited form of synchronization and concurrency
This paper is composed of five sections. We follow
this introduction with a formalization of the problem,
a review of the foundation provided by the solution
for finite acyclic graphs, and some examples. Section
3 provides the details of our algorithm, which is based
on a structural decomposition of the unfolded process
graph. Some practical examples are presented in Section
4, and finally, Section 5 summarizes the contributions
of this paper.
Problem Formalization
Consider a simple concurrent system consisting of
three processes that synchronize over two channels a
and b, and do some internal computation (delay ranges
specified in brackets):
repeat f
Synchronize a;
Compute
repeat f
Synchronize a;
Compute
Synchronize b;
Compute
repeat f
Synchronize b;
Compute
We represent the system as a directed graph, called
the process graph, where the vertices represent events
(synchronizations) and the edges are annotated with
delay information. The process graph for the system
is shown in Figure 1. The initial state of the processes
is specified by marking the edges that can execute initially
To formalize the problem we use a simple modification
of the event-rule system developed in [3] 1 . Let
process graph composed of
a finite set of (repeatable) events E 0 , the vertices
of the graph.
a finite set of rule templates R 0 , the edges of the
graph.
Each edge is labelled with two objects, the delay range
[d; D] (integers with 0 - d - D), and the occurrence
index offset 2 ". For our example we have
and R
7\Gamma! a; a [1;2];0
7\Gamma! bg.
We restrict our analysis to well-formed graphs, that is,
graphs that are strongly-connected and have "(c) ? 0
for all cycles c in the graph, where "(c) is the sum of
the " values for all edges in the cycle c.
a
Figure
1: Three processes synchronizing at points a and
b. The number of lines drawn through an edge indicates
the value of the occurrence index offset.
2.1 Execution Model
We denote the k th occurrence of event
and refer to k as the occurrence index of v k . Let
E be the set of all event occurrences (infinite in one
direction, i.e., k - 0). To model the initial startup
behavior of a process, we also include in E a single
event occurrence named root. Thus,
The set R consists of the rules generated by instantiating
each rule template of R 0 at each occurrence
introduced a similarly modified system. The model can
also be viewed as an extension of [7] and [10], where we consider
cyclic max-only or type-2 graphs.
2 The occurrence index offset is used to specify how much the
occurrence index is incremented when the edge is executed-see
Section 2.1.
ae
oe
Special startup rules are included in the set R 0 , a non-empty
finite subset of froot [d;D]
We call the infinite directed graph constructed from
the vertex set E and the edge set R the unfolded process
graph. Figure 2 shows the unfolded process graph
for the example in Figure 1.
root
a3
[4, 10] [4, 10] [4, 10] [4, 10]
[5, 20] [5, 20] [5, 20] [5, 20]
[1,6]
[1,6]
[1,6] [1,6]
Figure
2: A portion of the unfolded process graph for
the process graph in Figure 1. Two startup edges have
been added, specifying that both a 0 and b 0 must occur
after root .
An execution of a process graph is the consistent assignment
of time values to event occurrences. A timing
assignment, - , maps event occurrences to global
time, thus - (v k ) is the time of the k th occurrence of
event v. The delay information in R restricts the set of
possible timing assignments. Formally, we define constraints
on the time values introduced by each event
occurrence:
ae
oe
ae
oe
The constraints on - (v k ) embody the underlying semantics
of a process graph's execution, i.e., events
correspond to synchronizations and an event can occur
only when all of its incident events have occurred.
Each incident event is delayed by some number in a
bounded interval ([d; D]). Thus, the earliest time at
which v k can occur is constrained by d values, the latest
by D values.
2.2 Problem Definition
The problem we address in this paper is: given two
events s and t in E 0 and a separation in occurrence
index fi, what are the strongest bounds ffi and \Delta such
that
for all ff - max(0; fi)? For example, to determine the
bounds on the time separation between two consecutive
a events, we would set
consider the bounds on - (a ff We address
only the problem of finding the maximum separation,
since ffi can be obtained from - (s ff
2.3 Algorithm for a Finite Unfolded Process
Graph
We build our solution to this problem on a variation
of a graph algorithm developed by McMillan and
Dill [7] that applies only to finite unfolded graphs. In
Section 3 we will generalize this algorithm to infinite
unfolded graphs.
Let \Delta ff be the strongest bound for the separation
problem given an ff, i.e.,
We can determine \Delta ff by analyzing a finite acyclic
graph created by only including the vertices in our
unfolded process graph for which there is a path to
either t ff or s ff\Gammafi . Name the resulting graph hE ; R i.
The algorithm consists of two simple steps. First,
we compute m-values backwards from s ff\Gammafi for all
event occurrences,
where d(h) is sum of the d values of the edges on the
path h. We can compute these values in linear time in
the size of R by a reverse topological traversal from
s ff\Gammafi . If there is no path from v k to s ff\Gammafi , denoted by
, we can assign an arbitrary constant value
to m(v k )-we use m(v k
We then compute \Delta assigning
and then for all other occurrences in
(normal) topological order
(1)
If v k 6; s ff\Gammafi , the minimization with 0 is omitted.
root
root411 104 0110100
6
(a) (b)
Figure
3: Finite acyclic graph, hE ; R i for obtaining
for the process graph in Figure 1 given the parameters
In (a), the edges are labeled with the d values, and the
vertices are labelled with the m-values obtained in the
first step of the algorithm. In (b), the edges are labeled
with the D values, and the vertices are labelled with
the M-values obtained in the second step. We obtain
Applying the algorithm to the example in Figure 1
(see
Figure
3 for the computation of yields the
following maximum separations:
To compute \Delta, the maximum separation in time
over all occurrences of s and t, separated in occurrence
by fi, we maximize \Delta ff over all values of ff:
The problem, of course, is that this requires an infinite
number of applications of the algorithm. Before we
present an algebraic solution that allows us to analyze
the infinite unfolded graph, we illustrate the difficulties
of this analysis with a few examples.
2.4 Examples
Our first example, in Figure 4, is a process graph
that represents two coupled pipelines. If the pipelines
were not coupled at c, the maximum separation between
a and e would be unbounded. This is because
the first pipeline (choosing the delay between consecutive
a's as being 2) could be arbitrarily slower than
the second pipeline (choosing the delay between consecutive
e's as being 1). The coupling of the pipelines
forces one pipeline to wait for the other if it gets too
far ahead.
a
e d
c
Figure
4: A process graph that represents two coupled
pipelines. All unspecified delay ranges are [0; 0].
We start the pipeline by rooting all of the initial
occurrences at zero, e.g., root [0;0]
7\Gamma! a 0 . For all ff - 0,
it can be shown that - (a ff
Our second example, in Figure 5, exhibits interesting
behavior. We root all of the initial occurrences at
zero. If
f
e [-]
d
c
b a
[3; 3]
[3; 3]
[3; 3]
[3; 3]
Figure
5: A process graph with unusual timing behavior.
All unspecified delay ranges are [1; 1].
If we change
a
d
e
c
[10; 10] [10; 10]
Figure
Two processes synchronizing at c.
Our final example, in Figure 6, corresponds to
two simple processes that synchronize at the event
c. Clearly, the startup rules can affect the initial
timing behavior of the processes. However, this example
demonstrates that the initial startup rules also
can determine the maximum separation at every point
in the infinite execution. We have two startup rules:
root
7\Gamma! d 0 and they determine
every \Delta ff for - (e ff
As the process graph is a repetitive system, presumably
the \Delta ff values will eventually reach a steady
state, for example, large ff. Unfortu-
nately, as our examples illustrate, the behavior of the
ff values can be non-monotonic and periodic, and
might even start out periodic and then later stabilize
to a constant value. Thus, no simple criteria for determining
when steady state has been reached can be
derived based on the behavior of the \Delta ff values.
3 Functional Solution
Our solution to the problem is based on a structural
decomposition of the unfolded process graph that exploits
its repetitive nature. By dividing the unfolded
process graph up into segments and representing the
computation of the finite graph algorithm in a symbolic
manner we can reuse the computations for each
segment.
3.1 Introducing Functions
We introduce a symbolic execution of the acyclic algorithm
presented in Section 2.3. Instead of computing
the numeric M-values in (1), we compute functions
that relate M-values with one another. We present an
algebra for representing and manipulating these functions
Functions are represented as sets of pairs. A singleton
set, fhl; wig, represents the function
In general, the set
wn ig (2)
corresponds to the function
We associate two operators with functions: function
maximization, f max g, and function composi-
g. It follows from (3) that function maximization
is defined as set union: f g. The
following observation leads to an important efficiency
optimization:
Pruning Rule In (2), if l i - l j and w , we can
prune the pair hl since for all x, min(x+ l
Thus, a function (2) can always be represented such
that
Function composition, h, is defined as
Notice that we use left-to-right
function composition [4]. For
(g
If g or h contain more than one pair then
where g i and h i are singleton sets. Function composition
is performed using distributivity, i.e.,
We can now express the M-values using functions.
We associate a function, f , to each edge u k\Gamma"
in the unfolded process
f
ae
The function f incorporates the min-part of (1), and
the max-part of (1) corresponds to function maximization
of the functions for the incoming edges. Using
function composition and function maximization,
we can create a function F v k that relates M (root) to
F
f
root
Figure
7: Fragment of unfolded process graph annotated
with functions corresponding to each edge (the m-values
are given in Figure 3 (a)).
For the example in Figure 3 (see Figure 7), we
relate M (root) to M (b 0 ) with the function F
0ig. Evaluating the function at
yields \Gamma1, which is exactly the value
obtained for M (b 0 ) in Figure 3 (b). The functions F b0
and F a0 are then used to relate M (root) to M (a 1 ),
etc., until a function that relates M (root) to M (t ff ) is
created. In our example, t ff = a 2 and the construction
produces F
We can find the separation between s ff\Gammafi and t ff as
F t ff (0). For our example, we get
where F a2 is evaluated using (3).
3.2 Decomposition
Instead of forming a single function relating
M (root) to M (t ff ), we can perform this construction
in segments, that is, determine the functional relationship
between M (root) and the M-values at some
interior nodes, and compose those functions with the
functions relating the M-values at the interior nodes
with M (t ff ). We will see that this process is akin to
matrix multiplication.
Consider an unfolded process graph used to determine
We decompose the graph into three seg-
ments: an initial segment, R, containing the root
event, a terminal segment, T, containing s ff\Gammafi and
t ff , and an interior segment, S (see Figure 8 (a)).
root
(a)
root
(b)
s
ff\Gammafi+\Omega
ff+\Omega vk
vk
R
Figure
8: Decomposing an unfolded process graph into
segments.
A cutset is a set of event occurrences such that
every path from the root to t ff goes through an element
of the cutset. Let X and Y be two cutsets such that
We say that Y is X shifted to the right by \Omega\Gamma We can
construct a square matrix S that maps the M-values of
the events in X to the M-values of the events in Y , i.e.,
to the same
events\Omega occurrences
later(\Omega ? 0). Simi-
larly, we can construct a matrix R that maps M (root)
to the M-values of the events in X, and a matrix T
that maps the M-values of the events in Y to M (t ff ).
We can now restate the maximum separation problem
in matrix form. Using (
tion, that is, function maximization for scalar addi-
tion, and function composition for scalar multiplica-
tion, we can form RST, a 1 \Theta 1 matrix containing
a single function relating M (root) to M (t ff ), which is
used to obtain \Delta ff .
For the graph in Figure 7, a possible decomposition
is yielding
f 9
Now consider finding
ff+\Omega . We add another S
segment to the graph, defined by the cutsets Y and
Z, where Z is Y shifted to the right
by\Omega (see Figure
8 (b)). We get the matrix product R 0 S 0 ST where
S and T are the same as above, but R 0 and S 0 may differ
from R and S since the m-values are now computed
from s
ff\Gammafi+\Omega instead of s ff\Gammafi . This decomposition is
only useful if we can arrange the symbolic computation
such that R i.e., such that
adding an S segment will not change the functional
representation. The next section characterizes the behavior
of the m-values that allows us to utilize this
decomposition effectively.
3.3 Repetition of the m-values
Since the m-values are constructed from a repetitive
system (the process graph) the values eventually
are determined by the maximum ratio cycles in the
process graph (see [5]). A maximum ratio cycle c is a
cycle with ratio d(c)="(c) equal to that of the maximum
c a simple cycle in G 0
d(c)
"(c)
The m-values repeat precisely when the values for all
events are determined repetitively using maximum ratio
cycles. Formally, there exists integers k ? and " ?
such that for all
is the number of unfoldings of the process
graph (relative to s ff\Gammafi ) before all of the m-values repeat
is the occurrence period of this repetition.
If there are multiple cycles with maximum ratio then
the m-values computed for different events may use
different maximum ratio cycles. Thus, a simple upper
bound on " ? is the least common multiple of "(c) for
each maximum ratio cycle c.
Figure
9 illustrates the behavior of the m-values for
the process graph in Figure 1. Both k ? and " ? are values
specific to a particular process graph. For exam-
ple, changing the delays [4; 10] and [5; 20] to [999,1000]
and [1000,1000], respectively, changes k ? from 3 to
998. Note that only the lower delay bounds affect k ? .
b522
Figure
9: A portion of the unfoldedprocess graph for the
process graph in Figure 1 labeled with m-values (s
which occurs after three unfoldings relative to a 10 , thus
3. The occurrence period of the repetition is one,
making
3.4 Matrices
unfoldings of the process graph, the m-values
are repeating. Let T be the matrix obtained
from the cutsets
the property that the m-values for the vertices topologically
left of X k ? repeat (with an occurrence period
implies that for all edges
topologically left of
This makes the M-values independent of k. There-
fore, after k ? unfolding of the process graph (relative
to s ff\Gammafi ), the functional representations of R and S
remain the same independently of the number of unfoldings
Let the matrix product RT solve \Delta ff ? . We can
find by adding an S segment, i.e., RST. By
repeatedly adding S segments to the graph, we can
compute
T. The
maximum over all n - 0 can be found from
which by matrix algebra can be rewritten as
R (I
where I is the identity matrix. The elements of I,
and 1, are the identity elements for function maximization
and composition, respectively. We have
(note that 0 is
an annihilator for function composition).
A matrix closure algorithm [1] can be used to compute
S , the middle part of (6), because in this con-
text, function maximization and composition form a
closed semi-ring. This is the key observation that allows
us to implicitly compute an infinite number of
values.
To compute S we need to be able to compute
the closure of the diagonal elements of S. For
wn ig, the scalar closure operation
can be
efficiently computed by:
ae
where the pairs are ordered as in (4) and w q corresponds
to the first positive l, i.e., l q ? 0 and if q ? 1
then l We can form the closure of an n \Theta n matrix
in O(n 3 ) scalar semi-ring operations
RS T is used to compute the maximum of the \Delta ff
values for only a subset of the integers ff - max(0; fi).
need only compute the maximum of
a finite number of additional \Delta ff , precisely for those
since RT is used to compute \Delta ff ? . This is
done by applying the finite graph algorithm for each
ff such that max(0; fi) - ff
we need to also compute those ff such that
does not divide ff\Gammaff ? . This can be accomplished by
choosing " ? different initial matrices, named R 0 , R 1 ,
corresponding to 0;
unfoldings of the process graph. Thus we can compute
the maximum of \Delta ff for all ff - ff ? by creating the
function
(R 0
3.5 Example
We now apply the details of the decomposition
method to the example in Figure 1. We decompose
the unfolded process graph into matrices R, S, and T
as shown in Figure 10. The size of the T segment is
determined as k unfoldings relative to the s ff\Gammafi
node, and the size of the S segment is "
ings. The functions in S relate M (a 0 ) and M (b 0 ) to
(b 1 ). For this example
root
R
Figure
10: A decomposed unfolded process graph corresponding
to the process graph in Figure 1.
The closure of S is:
yielding the final product
The maximum separation between a ff\Gamma1 and a ff for
is computed from the function
i.e., yielding
25.
3.6 Efficiency Considerations
There are two potential inefficiencies associated
with this algorithm.
1. depend on the delay ranges and
are not polynomial in the size of the process
graph.
2. The size of the representation of a particular function
may be as large as the number of paths between
the two events related by the function.
Point 1 is potentially serious, however in most process
graphs derived from circuits, "
is more of a concern because it can be large if there exists
a cycle c such that d(c)="(c) is almost equal to r ? .
Although of theoretical interest, point 2 is not likely
to be of practical concern. In practice the functions
can be efficiently pruned and the size of the functions
seems to be linear with respect to the size of the process
graph.
Applications
This section describes two applications demonstrating
the practicality of the algorithm for realistic examples
4.1 Memory Management Unit
Consider an edge u k\Gamma"
in an arbitrary process
graph. If the minimum time separation between
u k\Gamma" and v k is larger than D, event u k\Gamma" will never
constrain the time of event v k , i.e., v k must always
wait for some other event to occur, and the edge from
u k\Gamma" can be removed from the process graph without
changing the behavior of the system.
This idea can be used to remove redundant circuitry
in asynchronous circuits given (conservative) bounds
on the actual delays of a speed-independent design.
Superfluous edges can be removed by analyzing the
process graph corresponding to the circuit. This approach
has been taken by Myers and Meng [8] who use
an inexact timing analysis algorithm, i.e., the algorithm
doesn't necessarily give tight bounds on separation
times. Clearly, being able to obtain tight bounds
potentially enables the removal of more edges.
One of the examples in [8] is a memory management
unit (MMU) designed to interface to the Caltech
Asynchronous Microprocessor [6]. The process
graph (for one of the possible execution modes of the
MMU) consists of 16 events and 23 edges. For the
chosen delay intervals, k Analyzing
the 23 edges using our exact algorithm takes on average
CPU seconds on a SPARC 2 for each edge.
The analysis results in the removal of six edges from
the process graph or equivalently, the removal of six
transistors from the circuit. This is the same result as
in [8].
4.2 Asynchronous Microprocessor
A subset of the Caltech Asynchronous Microprocessor
[6] has been modelled and analyzed using the
techniques described in this paper. The process graph
for this simplified model consists of 60 events and 127
edges, and has " Using our implementation
of the techniques described in the paper,
computations of the instruction fetch cycle period and
the pipeline latency can be performed in under 2 CPU
seconds on a SPARC 2.
These and similar computations can be used to determine
the real-time properties of the asynchronous
microprocessor. For example, to bound the execution
time of a code fragment, we can use the minimum and
maximum separation in cycle period of each instruction
type [9]. Furthermore, this information is useful
when interfacing the microprocessor to an external
synchronous component, especially in cases where the
synchronous component is clocked using a signal produced
by the microprocessor.
5 Conclusion
We have presented an efficient exact solution to
a fundamental problem in circuit synthesis and opti-
mization, namely, the determination of bounds on the
separation in time of events in concurrent systems.
The major contribution of this paper is the structural
decomposition of the infinitely unfolded process graph
that enables it to be implicitly analyzed to obtain the
tightest possible bounds. This aspect of our algorithm
and its algebraic formulation enables it to be efficient
enough for practical use. Furthermore, our algorithm
handles a wide range of process graphs and is thus
useful in a variety of domains.
We are looking into adaptations of this technique to
graphs that include conditional behavior and thus process
an ever-larger class of graphs. This may require
the exploration of tradeoffs between the tightness of
the bounds and computation time that has not been a
concern up to now because of the high efficiency of the
algorithm in practice. In concert with this effort, we
are also investigating other problem domains such as
high-level synthesis and hardware/software co-design
as potential application areas.
Acknowledgments
This work was supported by an NSF PYI Award
(MIP-8858782), an NSF YI Award (MIP-9257987), by
the DARPA/CSTO Microsystems Program under an
ONR monitored contract (N00014-91-J-4041), by an
IBM Graduate Fellowship, and by the Technical University
of Denmark. The authors wish to thank Chris
Myers for several stimulating technical discussions.
--R
The Design and Analysis of Computer Algorithms.
An approach to symbolic timing verification.
Performance Analysis and Optimization of Asynchronous Circuits.
Topics in Algebra.
Combinatorial Optimization: Networks and Matroids.
The design of an asynchronous mi- croprocessor
Algorithms for interface timing verification.
Synthesis of timed asynchronous circuits.
Experiments with a program timing tool based on source-level timing schema
Specification and analysis of timing constraints in signal transition graphs.
--TR
--CTR
Dinesh Ramanathan , Ravindra Jejurikar , Rajesh K. Gupta, Timing driven co-design of networked embedded systems, Proceedings of the 2000 conference on Asia South Pacific design automation, p.117-122, January 2000, Yokohama, Japan
Nicholas H. Zamora , Xiaoping Hu , Radu Marculescu, System-level performance/power analysis for platform-based design of multimedia applications, ACM Transactions on Design Automation of Electronic Systems (TODAES), v.12 n.1, p.2-es, January 2007
Steve Haynal , Forrest Brewer, A model for scheduling protocol-constrained components and environments, Proceedings of the 36th ACM/IEEE conference on Design automation, p.292-295, June 21-25, 1999, New Orleans, Louisiana, United States
Sangyun Kim , Sunan Tugsinavisut , Peter Beerel, Reducing probabilistic timed petri nets for asynchronous architectural analysis, Proceedings of the 8th ACM/IEEE international workshop on Timing issues in the specification and synthesis of digital systems, December 02-03, 2002, Monterey, California, USA
Ali Dasdan , Anmol Mathur , Rajesh K. Gupta, RATAN: A Tool for Rate Analysis and Rate Constraint Debugging for Embedded Systems, Proceedings of the 1997 European conference on Design and Test, p.2, March 17-20, 1997
Anmol Mathur , Ali Dasdan , Rajesh K. Gupta, Rate analysis for embedded systems, Readings in hardware/software co-design, Kluwer Academic Publishers, Norwell, MA, 2001
Anmol Mathur , Ali Dasdan , Rajesh K. Gupta, Rate analysis for embedded systems, ACM Transactions on Design Automation of Electronic Systems (TODAES), v.3 n.3, p.408-436, July 1998
Tod Amon , Gaetano Borriello , Taokuan Hu , Jiwen Liu, Symbolic timing verification of timing diagrams using Presburger formulas, Proceedings of the 34th annual conference on Design automation, p.226-231, June 09-13, 1997, Anaheim, California, United States
Vijay K. Madisetti , Lan Shen, Interface Design for Core-Based Systems, IEEE Design & Test, v.14 n.4, p.42-51, October 1997
Abhijit Davare , Kelvin Lwin , Alex Kondratyev , Alberto Sangiovanni-Vincentelli, The best of both worlds: the efficient asynchronous implementation of synchronous specifications, Proceedings of the 41st annual conference on Design automation, June 07-11, 2004, San Diego, CA, USA
Peggy B. McGee , Steven M. Nowick , E. G. Coffman, Jr., Efficient performance analysis of asynchronous systems based on periodicity, Proceedings of the 3rd IEEE/ACM/IFIP international conference on Hardware/software codesign and system synthesis, September 19-21, 2005, Jersey City, NJ, USA
Michael Kishinevsky , Jordi Cortadella , Alex Kondratyev, Asynchronous interface specification, analysis and synthesis, Proceedings of the 35th annual conference on Design automation, p.2-7, June 15-19, 1998, San Francisco, California, United States
Ali Dasdan , Sandy S. Irani , Rajesh K. Gupta, Efficient algorithms for optimum cycle mean and optimum cost to time ratio problems, Proceedings of the 36th ACM/IEEE conference on Design automation, p.37-42, June 21-25, 1999, New Orleans, Louisiana, United States
Yiping Cheng , Da-Zhong Zheng, Min-Max Inequalities and the Timing Verification Problem with Max and Linear Constraints, Discrete Event Dynamic Systems, v.15 n.2, p.119-143, June 2005
R. Marculescu , A. Nandi, Probabilistic application modeling for system-level perfromance analysis, Proceedings of the conference on Design, automation and test in Europe, p.572-579, March 2001, Munich, Germany
Ali Dasdan, Experimental analysis of the fastest optimum cycle ratio and mean algorithms, ACM Transactions on Design Automation of Electronic Systems (TODAES), v.9 n.4, p.385-418, October 2004
Henrik Hulgaard , Steven M. Burns, Bounded Delay Timing Analysis of a Class of CSP Programs, Formal Methods in System Design, v.11 n.3, p.265-294, Oct. 1997
Jeremy Gunawardena, From max-plus algebra to nonexpansive mappings: a nonlinear theory for discrete event systems, Theoretical Computer Science, v.293 n.1, p.141-167, 3 February | discrete event systems;asynchronous systems;concurrent systems;time separation of events;abstract algebra;timing verification |
627126 | Edge-Disjoint Spanning Trees on the Star Network with Applications to Fault Tolerance. | AbstractData communication and fault tolerance are important issues in parallel computers in which the processors are interconnected according to a specific topology. One way to achieve fault tolerant interprocessor communication is by exploiting and effectively utilizing the disjoint paths that exist between pairs of source and destination nodes. In this paper, we construct n - 1 directed edge-disjoint spanning trees, on the star network. These spanning trees are used to derive a near optimal single-node broadcasting algorithm, and fault tolerant algorithms for the single-node and multinode broadcasting, and for the single-node and multinode scattering problems. Broadcasting is the distribution of the same group of messages from one processor to all the other processors. Scattering is the distribution of distinct groups of messages from one processor to all the other processors. We consider broadcasting and scattering from a single processor of the network and simultaneously from all processors of the network. The single-node broadcasting algorithm offers a speed up of n - 1 for a large number of messages, over the straightforward algorithm that uses a single shortest path spanning tree. Fault tolerance is achieved by transmitting the same messages through a number of edge-disjoint spanning trees. The fault tolerant algorithms operate successfully in the presence of up to n - 2 faulty nodes or edges in the network. The degree of fault tolerance can be adjusted depending on the network reliability. The importance of this method lies in the fact that no prior knowledge of the faulty nodes or edges is required. All of the algorithms operate under the store-and-forward, all-port communication model. | Introduction
The star network was proposed in [1] as "an attractive alternative to the n-cube" topology for interconnecting
processors in parallel computers. Since its introduction, the network received considerable attention. Let
us denote by Vn the set of n! permutations of the symbols {1, 2, ..., n}. A star interconnection network on n
symbols, denoted by Sn, is an undirected graph with n! nodes, each labeled by a distinct permutation i = i1 i2 ... in of Vn. Each node
i is connected to the n - 1 nodes that are obtained by transposing the first with the k-th symbol of i, 2 <= k <= n; the edge used for such a transposition is said to be of
dimension k. Thus each node is an endpoint of n - 1 edges, one for each of the dimensions 2, 3, ..., n. Sn enjoys a number
of properties desirable in interconnection networks. These include node and edge symmetry, maximal fault
tolerance, and strong resilience. Because of its symmetry, the network is easily extensible, can be decomposed
in various ways and allows for simple routing algorithms. In addition Sn is superior to Cn (the n-cube) with
respect to two key properties: degree (number of edges at each node), and diameter (maximum distance
between any two nodes) [1]. The degree of Sn is n - 1, which is sublogarithmic in the number of its nodes, while a
hypercube with Θ(n!) nodes has degree Θ(n log n), i.e. logarithmic in the number of its nodes.
The same can be said for the diameter of Sn, which is ⌊3(n-1)/2⌋. The network was shown to be Hamiltonian
[24], and efficient algorithms for sorting [23] and Fourier transform computation [10, 11], were developed on
it.
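As a purely illustrative sketch (in Python, not from the paper), the adjacency rule just described can be written down directly: the neighbors of a node are obtained by transposing its first symbol with the symbol in each position k = 2, ..., n, so every node has degree n - 1.

    from itertools import permutations

    def star_neighbors(node):
        # Neighbors in Sn: transpose the first symbol with the k-th symbol,
        # one neighbor for each dimension k = 2, ..., n.
        node = list(node)
        nbrs = []
        for k in range(1, len(node)):
            nb = node[:]
            nb[0], nb[k] = nb[k], nb[0]
            nbrs.append(tuple(nb))
        return nbrs

    # Every node of S4 has degree n - 1 = 3.
    assert all(len(star_neighbors(v)) == 3 for v in permutations((1, 2, 3, 4)))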
Data communication and fault tolerance are important issues in multiprocessor systems, in which processors
are connected to each other according to a specific topology. In order for a network of processors to
be candidate for parallel processing, it must lend itself to the derivation of optimal communication and fault
tolerant algorithms. Working towards this direction in this paper, we construct the multiple edge-disjoint
spanning trees structure on the star interconnection network. We say that a node h of Sn is the root of
multiple edge-disjoint spanning trees, denoted by EDT h , if each of the nodes adjacent to h is the root of a
tree that spans all nodes of Sn except h and all of these trees are edge-disjoint. This structure is useful for
the construction of optimal communication and fault tolerant communication algorithms and has been used
before for other popular interconnection networks such as the hypercube [16, 18] and the cube connected
cycles [15] networks.
Using the multiple edge-disjoint spanning trees structure we derive an optimal algorithm for the single
node broadcasting problem and optimal fault tolerant algorithms for the single node broadcasting, multinode
broadcasting, single node scattering and total exchange problems under the all-port communication assumption
on Sn . Single node broadcasting is the problem where a node wishes to transmit the same message to
all other nodes. Multinode broadcasting is the problem of simultaneous single node broadcasting of the same
message from every node to all other nodes. Single node scattering is the problem of a single node sending
distinct messages to each one of the other nodes. Finally, total exchange is the problem of each node sending
distinct messages to every other node. The optimal single node broadcasting algorithm derived offers a speed
up of n - 1 over the straightforward algorithm that uses a single breadth first spanning tree. The basic idea
is to split the original message into n - 1 packets of equal size, each of which is broadcast independently
through a different edge-disjoint spanning tree. Each node receives part of the message through a different
disjoint path from the source node and as a consequence the network resources are fully utilized. To
achieve fault tolerant communication multiple copies of the same message are sent through the edge-disjoint
spanning trees. As a consequence each node receives a copy of the message through a number of disjoint
paths from the source node and the reliability of the algorithm is increased. The algorithms presented can
operate successfully in the presence of up to n - 2 faulty nodes or edges in the system. They also offer the
flexibility of controlling the degree of fault tolerance depending on the required reliability, by forcing the
same message through a specific number of edge-disjoint subtrees. As pointed out in [28], the importance
of these algorithms lies in the fact that no knowledge of the faulty nodes or edges is required in advance.
In all of the algorithms the assumption that each node can exchange messages of fixed length with all of
its neighbors at each time step, i.e. the all-port communication assumption, is adopted. Communication is
assumed to be bidirectional. Other data communication algorithms and properties on Sn can be found in
[1, 4, 5, 13, 14, 22, 25, 26, 27]. Fault tolerant algorithms and properties on Sn using different approaches
can be found in [2, 8, 9, 17, 19, 29].
This paper is organized as follows: Following the introduction to the subject in section 1, notations and
definitions that are used throughout the paper are introduced in section 2. Section 3 presents the multiple
edge-disjoint spanning trees structure on the star network. In section 4 we demonstrate several applications
of this structure in the areas of data communication and fault tolerance. More specifically, lower bounds for all
the algorithms presented are derived in subsection 4.1. The optimal single node broadcasting algorithm of M
messages under the all-port assumption is presented in subsection 4.2. Finally, the fault tolerant algorithms
for the single node broadcasting, multinode broadcasting, single node scattering and total exchange problems,
under the all-port assumption again, are presented in subsections 4.3 to 4.6 respectively. We conclude in
section 5, along with a summary of the results and some suggestions for further research.
2 Notations and definitions
In what follows, node i is labeled by the permutation i1 i2 ... in. By In we denote the sorted permutation on the n
symbols {1, 2, ..., n}. Calligraphic letters are used for sets. We denote by N the set of symbols {1, 2, ..., n}.
Symbols i, j and h are used for nodes of Sn. By dim(i, j) we denote the dimension of edge (i, j). Two paths
between a pair of nodes are parallel if they are node (and as an extension edge) disjoint. A misplaced symbol
of a node is a symbol that does not occupy its correct position.
By S_n^k, 2 <= k <= n, we denote the subgraph of Sn induced by all nodes of Sn with symbol 1 in the k-th position of their
label. It is well known that S_n^k is an S_{n-1} defined on the symbols {2, ..., n} [1]. For notation
purposes, in what follows, we also use S_n^1 to denote the set of (n - 1)! nodes of Sn with symbol 1
in the first position of their label. It is known that S_n^1
is a collection of (n - 1)! isolated nodes.
Definition 1: In the cycle notation of a node, each symbol's position is that occupied by the next symbol
(cyclically) in the cycle (the position of a symbol is defined with respect to the sorted permutation I n ) [20].
Cycles with only one symbol are excluded from the cycle notation of a node. For example node 341526 has
cycle notation (13)(245).
In what follows, for node i, we denote by c_i and s_i the number of cycles and the number of symbols that
belong to those cycles, respectively, in the cycle notation of i. The minimum distance of a node i from node
In has been shown to be [1]:
    d_In(i) = c_i + s_i,       if i_1 = 1,
    d_In(i) = c_i + s_i - 2,   if i_1 != 1.
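The cycle notation and the distance formula can be checked with a small illustrative Python sketch (the function names are ours); the asserts check node 341526, whose cycle notation (13)(245) is given above, and node 13254, which reappears in lemma 3.

    def cycles(node):
        # Nontrivial cycles of definition 1: follow position -> occupying
        # symbol, and drop cycles with a single symbol.
        seen, result = set(), []
        for start in range(1, len(node) + 1):
            if start in seen:
                continue
            cyc, pos = [], start
            while pos not in seen:
                seen.add(pos)
                cyc.append(pos)
                pos = node[pos - 1]
            if len(cyc) > 1:
                result.append(cyc)
        return result

    def d_In(node):
        # Minimum distance to the sorted node In, using the formula above.
        cs = cycles(node)
        c, s = len(cs), sum(len(cyc) for cyc in cs)
        return c + s if node[0] == 1 else c + s - 2

    assert cycles((3, 4, 1, 5, 2, 6)) == [[1, 3], [2, 4, 5]]   # node 341526
    assert d_In((1, 3, 2, 5, 4)) == 6                          # node 13254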
We now define two operations on nodes of the star network, namely the translation and the rotation
operations, that will be of primary importance for the construction of the multiple edge-disjoint spanning
trees on Sn and the description of the fault tolerant communication algorithms.
Definition 2: Consider a node h of the star network. We define T_h, the translation with respect to h, of
a node i (this operation is often referenced as permutation composition). By translation of a network with respect to h
we mean that each node of the network is translated with respect to h. The inverse translation with respect
to h, denoted by T_h^{-1}, of a node i, undoes T_h, i.e. T_h^{-1}(T_h(i)) = i.
Lemma 1: Let i, j and h represent nodes of Sn. Then (i, j) and (T_h(i), T_h(j)) are edges of the same
dimension.
Proof: This becomes obvious if we analytically express (i, j) and (T_h(i), T_h(j)): if (i, j)
is an edge of dimension k then (T h (i); T h (j)) is also an edge of dimension k. 2
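Definition 2 leaves the explicit formula for T_h implicit here, so the following Python sketch adopts one concrete reading as an assumption: T_h relabels the symbols of i through h, i.e. the k-th symbol of T_h(i) is h_{i_k}. This reading is consistent with Lemma 1 (an edge keeps its dimension) and with the later use of T_h to carry EDT_In over to EDT_h, since it maps In to h.

    def translate(h, i):
        # Assumed reading of T_h: relabel each symbol of i through h.
        return tuple(h[s - 1] for s in i)

    def inverse_translate(h, i):
        # T_h^{-1}: relabel each symbol of i through the inverse of h.
        hinv = [0] * len(h)
        for pos, s in enumerate(h, start=1):
            hinv[s - 1] = pos
        return tuple(hinv[s - 1] for s in i)

    h, i = (2, 3, 1, 4), (3, 1, 2, 4)
    assert inverse_translate(h, translate(h, i)) == i
    assert translate(h, (1, 2, 3, 4)) == h     # T_h maps In to h

    # Lemma 1: i and j below differ in positions 1 and 3 (an edge of
    # dimension 3), and so do their translations.
    j = (2, 1, 3, 4)
    ti, tj = translate(h, i), translate(h, j)
    assert [a != b for a, b in zip(ti, tj)] == [True, False, True, False]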
Definition 3: Let us define the function r from N to N as:
    r(1) = 1,    r(k) = k + 1 for 2 <= k <= n - 1,    r(n) = 2
(notice that r maps {1, 2, 3, ..., n} to {1, 3, 4, ..., n, 2}). The rotation of a node i ∈ Sn, denoted by R,
is defined as the node i' = R(i) in which symbol r(i_k) occupies position r(k) for every k,
or equivalently i'_{r(k)} = r(i_k). By R^m we denote m consecutive applications of rotation. By
rotation of a network we mean that rotation is applied to each node of the network.
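A short illustrative sketch of r and of the rotation R follows (Python, names ours); the asserts reproduce the necklace examples that appear in lemma 3 below.

    def r(k, n):
        # r(1) = 1, r(n) = 2, and r(k) = k + 1 for 2 <= k <= n - 1.
        if k == 1:
            return 1
        return 2 if k == n else k + 1

    def rotate(i):
        # Rotation R: symbol r(i_k) is placed at position r(k).
        n = len(i)
        out = [0] * n
        for k, sym in enumerate(i, start=1):
            out[r(k, n) - 1] = r(sym, n)
        return tuple(out)

    assert rotate((4, 1, 2, 3)) == (2, 4, 1, 3)        # 4123 -> 2413
    assert rotate((2, 4, 1, 3)) == (3, 4, 2, 1)        # 2413 -> 3421
    assert rotate((1, 3, 2, 5, 4)) == (1, 5, 4, 3, 2)  # 13254 -> 15432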
Lemma 2: Let i and j be nodes of Sn and i', j' be the nodes obtained from i and j,
respectively, by application of a rotation:
1. If (i, j) is an edge of dimension k, then (i', j') is an edge of dimension r(k). As an
extension to this, the edge obtained after m applications of rotation on (i, j) is of dimension r^m(k).
With this observation we conclude that the n - 1 edges, each
obtained as a rotation of its previous one, are all of different dimensions.
2. If symbol 1 occupies position k in i, then symbol 1 occupies position r(k) in i'.
3. The rotation operation preserves the distance between nodes of Sn, or equivalently, d_In(i) = d_In(i').
Proof: We'll prove each part separately:
1. If we analytically express (i;
we notice that if (i; j) is an edge of dimension k, then (i is an edge of dimension r(k). This is true
because from the definition of rotation the position of symbol r(i k ) in i 0 is r(k).
2. If i
1. From the definition of r, if i
As a result
.
3. We must prove the following: (a) if
(c) c From part 2 of this lemma (a) is easily derived. We know from the definition of rotation
that
This means that if symbol i k occupies position k in i then symbol r(i k ) occupies
position r(k) in i 0 . As a consequence if cycle (i k1
belongs to the cycle notation of i then
cycle (r(i k1 ); r(i k2 ); :::; r(i k l
belongs to the cycle notation of i 0 and we conclude that s
To summarize, the translation and the rotation operations preserve the distance between nodes of Sn .
The rotation operation maps every edge in dimension d to an edge in dimension
Application of rotation k times, or R k , maps every edge in dimension d to an edge in dimension r k
2. The translation operation preserves the dimension of every edge. Finally the
topology of Sn , or a subgraph of Sn , remains unchanged under translation or rotation.
Definition 4: A group of nodes for which each one is derived from its previous one by application of a
rotation is called a necklace.
Lemma 3: Necklaces have the following properties:
1. Each node i ∈ S_n^k, 2 <= k <= n, belongs to a necklace that includes exactly n - 1 nodes.
2. Each node i ∈ S_n^1 belongs to a necklace that includes at most n - 1 distinct nodes.
3. All nodes of a necklace have the same minimum distance from I n .
Proof: We prove each part separately.
1.
1. From the definition of r, the nodes derived from i by
consecutive rotations have first symbols 1. So the nodes that belong to
a necklace of this type start with different symbols and as a consequence are different. Also a necklace
of this type contains exactly nodes. From the definition of r, it is true that r
produced by i after rotations then i 0
that
For example node 4123 of S_4 belongs to necklace (4123, 2413, 3421).
2. Node i
1. From the definition of r, all nodes derived from i by consecutive rotations
start with symbol 1. If i 0 is produced by i after rotations then i 0
we conclude that However it is possible that
For example node 13254 of S_5 is mapped to itself after only two and not n - 1 = 4 applications of
rotation, i.e. R^2(13254) = 13254. Node 13254 belongs to a necklace that contains only two nodes (13254, 15432),
while node 12435 belongs to a necklace that contains n - 1 = 4 distinct nodes.
3. From part 3 of lemma 2 this is easily derived. 2
From part 3 of lemma 3 we conclude that the nodes of Sn at each distance from I n are grouped into
necklaces. For example, the necklaces of S 4 at each distance from I 4 are given below enclosed in parentheses:
The size of a necklace of Sn is always a divisor of n - 1.
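Reusing the rotate helper sketched after definition 3, necklaces can be enumerated directly (illustrative only); the sizes below match the examples of lemma 3, namely 3 nodes for 4123 in S4 and 2 and 4 nodes for 13254 and 12435 in S5, and each size divides n - 1.

    def necklace(i):
        # Apply the rotation repeatedly until the starting node reappears.
        nodes, cur = [i], rotate(i)
        while cur != i:
            nodes.append(cur)
            cur = rotate(cur)
        return nodes

    sizes = [len(necklace(p))
             for p in [(4, 1, 2, 3), (1, 3, 2, 5, 4), (1, 2, 4, 3, 5)]]
    assert sizes == [3, 2, 4]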
Definition 5: An unfolded necklace is a group of exactly n - 1 nodes, each obtained as a rotation
of its previous one. Unfolded necklaces can contain the same node more than once. For example, repeating the
two-node necklace of 13254 twice in sequence gives an unfolded necklace of S_5.
The definitions of the rotation operation and the necklace will be of primary importance for the construction
of the multiple edge-disjoint spanning trees and for the description of the fault tolerant algorithms
on Sn . Both of these definitions have been developed in analogy to definitions with similar properties that
exist for the hypercube interconnection network. The application of rotation on a node of Sn is analogous
to the application of a right cyclic shift operation on a node of the hypercube. The definition of necklace for
nodes of Sn is analogous to similar groups defined for nodes of the hypercube in [18]. The term necklace was
initially used in [21] for similar groups of nodes in the shuffle-exchange graph. An interesting observation is
that although the definitions in [18] were motivated by specific properties of the hypercube topology, similar
definitions, with the same properties, can be derived for other networks, like the star network, which has a
structure that is fundamentally different from that of the hypercube.
Figure 2: (a) A schematic representation of SPT_In (shortest path trees on S_2, S_3, ..., S_n). (b) The SPT_I4.
3 Construction of the multiple edge-disjoint spanning trees
We say that node h of Sn is the root of multiple edge-disjoint spanning trees, denoted by EDT h , if each of
the nodes adjacent to h is the root of a tree that spans all nodes of Sn except h and all of these trees are
edge-disjoint. In this section we construct EDT In rooted at node I n of Sn . The EDT h , rooted at any other
node h of Sn , will be obtained by applying the operation of translation with respect to h on EDT In
Before we proceed to the construction of the EDT In , we construct a balanced shortest path tree, rooted
at node I n , that includes all nodes of S k
In . For the definition of the SPT In
we need the following: Denote by C k , 1 - k - n, the set of dimensions f2; 3; :::; ng \Gamma fkg (C 1 is the set of
dimensions f2; 3; :::; ng). Assume node i 2 S k
n. If we move from i along any of the dimensions
in C k , the resulting node belongs to the same substar S k
that i belongs. We split C k into two subsets
is the first misplaced symbol cyclically
to the right of symbol 1 in i (excluding
;; otherwise
and C 2
k;i . Also let be such that i p i
is the first misplaced symbol
cyclically to the right of symbol 1 in i (excluding
In what follows the k th subtree of a spanning tree ST h rooted at node h, is defined to be the subtree
rooted at the neighbor of h over dimension k, and is denoted by T STh
k .
Definition 6: The shortest path tree SPT In rooted at node I n of Sn is defined through the following
parent and children functions:
parent SPT (i; I n
ae
children SPT (i; I n
It can be easily seen that the parent SPT and children SPT functions are consistent. A schematical representation
of SPT In along with SPT I4 can be seen in Fig. 2.
Lemma 4: The SPT In has the following characteristics:
1. All nodes of S k
k .
2. It is a shortest path tree.
Proof: We prove each part separately.
1. We'll prove that if i
also belongs to S k
node adjacent to I n in which case its parent is I n . To show this we must prove that p i 6= k for all nodes
that are not adjacent to I n , which is true from the definition of p i . If
is such that i p i
is the first misplaced symbol cyclically to the right of symbol 1 (position
in i, excluding symbol i 1 . In this case p if the only misplaced symbols in i are symbols
and which concludes that node i is adjacent to I n .
2. We'll prove that if i
In (i p i
In (i) \Gamma 1, or that the parent of each node
in SPT In is closer to I n than the node itself. This can be verified from a close look to the definition of
and the first symbol of i is moved to its correct position. If
such that i p i
is the first misplaced symbol (excluding cyclically to the right of symbol 1 in i, which
has the effect of merging cycle (1k) with the cycle that includes symbol p i in the cycle notation of i.
From the definition of d In the above follows. 2.
We now extend the definition of SPT In , to include nodes of S 1
. SPT In is extended so that each one
of its nodes has a child that belongs to S 1
(except nodes that are adjacent to I n ). The resulting structure
is no more a spanning tree but a directed graph denoted by SPG In .
Definition 7: The shortest path graph SPG In , rooted at node I n of Sn is defined through the following
parent and children functions. By parent SPG (i; l; I n ) and children SPG (i; l; I n ), we denote the parent and
children nodes, respectively, of node i in subtree T SPG I n
l .
parent SPG (i; l; I n
ae
parent SPT (i; I n ); if
children SPG (i; l; I n
children SPT (i; I n ); if or i is adjacent to I n ;
children SPT (i; I
but i is not adjacent to I n ;
It can be easily seen that the parent SPG and children SPG functions are consistent. The SPG I4 can be
seen in Fig. 3.
Lemma 5: The SPG In has the following characteristics:
1. Each node of S 1
times in SPG In , once in each of the subtrees T SPG I n
l ,
2. For each i
parallel paths that lead to node I n through SPG In , and these
paths have minimum lengths [8].
Proof: We prove each part separately.
1. According to the definition of parent SPG , each node i
I n is connected to T SPG I n
l
through dimension l.
Figure 3: The SPG_I4.
2. Node i
is connected to subtree T SPG I n
l l, to node i
. From lemma 4, i 0 is connected with a shortest path to I n through subtree
l that includes only nodes of S l
. As a consequence, these paths are parallel since the path
through the l th subtree includes only nodes of S l
. Using this type of reasoning it has been proven
in [8] that these paths have minimum lengths. 2
Up to this point only nodes of S 1
belong to all subtrees T SPG I n
l , 2 - l - n. However nodes of any
other S l
only to subtree T SPG I n
l . Now we further extend SPG In so that each subtree
includes all nodes i 2 S l
n. The resulting structure will be the multiple edge-disjoint spanning
trees, denoted by EDT In . In order for the subtrees to be edge-disjoint, each node should be connected to
each subtree through a different neighboring node and as an extension through a different one of its incident
edges. Let us remind that node i 2 S l
connected to its parent in the l th subtree through
neighbor
Definition 8: The EDT In rooted at node I n of Sn is now defined through the following parent and
children functions. By parent EDT (i; l; I n ) and by children EDT (i; l; I n ), we denote the parent and children
nodes, respectively, of node i in subtree T EDT I n
l . For clarity of definition we distinguish among different
kinds of nodes:
1.
(the parent of i in SPG In starts with symbol k). Node i
is connected to subtree T EDT I n
through neighbor 1i 2 :::i n , and to any other subtree T EDT I n
l , l
li 2 :::i n . For example node 3124 of Sn is connected to subtree T EDT I 4through neighbor 1324, and to subtree T EDT I 4
4 through neighbor 4123.
parent EDT (i; l; I n
Figure 4: The EDT_I4.
2. Node i
starts with symbol k). Node i is connected to subtree
through neighbor 1i 2 :::i n , and to any other subtree T EDT I n
l , l
neighbor li 2 :::i n . For example node 2143 of S 4 is connected to subtree T EDT I 4
4 through neighbor 1243,
and to subtree T EDT I 4
3 through neighbor 3142.
parent EDT (i; l; I n
li
children EDT (i; l; I n
is adjacent to I n ,
not adjacent to I n ,
3. Node i
its parent in SPG In start with
connected to subtree T EDT I n
through neighbor 1i 2 :::i n , and to subtree T EDT I n
through neighbor ki 2 :::i n . To any other subtree T EDT I n
l , l
, l 6= k, it is connected through
neighbor li 2 :::i n . For example node 4123 of S 4 is connected to subtree T EDT I 4
4 through neighbor 1423,
and to subtree T EDT I 4
3 through neighbor 2143.
parent EDT (i; l; I n
4. Node i
(nodes that start with symbol 1).
parent EDT (i; l; I n ng (11)
children EDT (i; l; I n ng [
;; otherwise
5. Finally, the parent and children nodes of I n are:
parent EDT
children EDT
The parent EDT and children EDT functions that define EDT In are consistent. The EDT I4 can be seen in
Fig. 4. Notice that each edge belongs twice in EDT In , once in each direction, since communication is
bidirectional.
Lemma 6: The EDT_In has the following characteristics:
1. Subtrees T EDT I n
l , 2 - l - n, are all edge-disjoint.
2. Subtree T EDT I n
r(l) is a rotation of subtree T EDT I n
l (its previous subtree cyclically).
3. For each node i 2 S k
parallel paths of almost minimum lengths that lead
to node I n through EDT In .
4. The depth of EDT_In is at most ⌊3(n-1)/2⌋ + 4.
Proof: See Appendix. 2
The multiple edge-disjoint spanning trees, EDT h , rooted at any other node h of Sn can be obtained from
In , using the operation of translation with respect to h (see definition 1). Node i of Sn is connected to
its parent, children nodes in subtree T EDTh
l along the same dimensions that node T \Gamma1
h (i) is connected to its
parent, children nodes in subtree T EDT I n
l . This is easily derived because connectivity and the dimension of
each edge are preserved under translation in Sn (lemma 1).
We need to pose an ordering to the children of each node in each of the subtrees T EDT I n
This will be useful in the construction of the algorithms described in the following section. We define the k th
ordering of numbers f2; 3; :::; ng, denoted by OE k to be such
k. Each node arranges its children in each subtree T EDT I n
k according to the k th ordering of the
dimensions of the edges it is connected to them. This guarantees that if node i is connected to its children
in subtree T EDT I n
k through dimensions c l in order, then node R(i) is connected to its children in
subtree T EDT I n
dimensions r(c 1 ); r(c 2 ); :::; r(c l ) again in order. This ordering in combination with
the fact that subtrees T EDT I n
are rotations of each other guarantees that corresponding nodes
of the subtrees form unfolded necklaces. For example the nodes enclosed in rectangles of the same kind in
Fig. 4 form unfolded necklaces. Also corresponding edges of the subtrees are rotations of each other and as
consequence all of different dimensions (lemma 6). For example the dotted edges in Fig. 4 are rotations of
each other and of different types. The ordering is carried by translation to EDT h rooted at any other node
h of Sn .
4 Applications
The multiple edge-disjoint spanning trees structure, defined in the previous section is used to derive optimal
communication and fault tolerant communication algorithms on the star network. More specifically we derive
an optimal single node broadcasting algorithm. We also derive optimal fault tolerant algorithms for four
basic communication problems in interconnection networks, namely the single node broadcasting, multinode
broadcasting, single node scattering and total exchange problems. All of the algorithms operate under the
all-port communication assumption. Before we proceed to the description of the algorithms, we derive lower
bound for the time and the number of message transmissions required for each of them.
4.1 Lower Bounds
Broadcasting on an interconnection network is the problem where a node wishes to send the same message to
all other nodes in the network. To broadcast M messages from a node of Sn , by pipelining the communication
from the root towards the leaves along any b 3(n\Gamma1)
first spanning tree, under the allport
communication assumption, the number of time steps required is which is not
optimal. Since Sn is a regular network with degree the lower bound for the single node broadcasting
algorithm of M messages assuming all ports of a node can be used simultaneously for message transmission
is d M
To achieve this lower bound the M messages are grouped into packets of
equal size, each of which is communicated over a different edge of the source node and is pipelined down
a different edge-disjoint subtree rooted at a node adjacent to the source node. Since each node receives
each of the M messages once, the minimum number of message transmissions required for an optimal single
node broadcasting algorithm is M (n! \Gamma 1). In the fault tolerant single node broadcasting algorithm the M
messages are pipelined down each one of the of the spanning trees rooted at the nodes
adjacent to the source node. The time required for this algorithm is M
each node receives
each of the M messages through n - 1 parallel paths, the minimum number of message transmissions required
is M (n! - 1)(n - 1).
Multinode broadcasting on an interconnection network is the problem where each node of the network
wishes to send a message to all other nodes. If each node wishes to broadcast M messages, then each node
must receive a total of M (n! \Gamma 1) messages. As a consequence the minimum number of message transmissions
required is Mn!(n! \Gamma 1). Under the all-port assumption all edges of the network can be used for
message transmissions at each time step. Thus the minimum time required for the algorithm to complete
is d M(n!\Gamma1)
e. The lower bounds for the fault tolerant multinode broadcasting algorithm are easily derived
from the lower bounds for the multinode broadcasting with a multiplication by factor n \Gamma 1.
Single node scattering on an interconnection network is the problem where a node wishes to send a
different message to each one of the other nodes. If the source node wishes to send M messages to each one
of the other nodes, M (n! \Gamma 1) different messages must be transmitted by the source node. Under the all-port
assumption all the incident to the source node can be used for message transmissions at each
time step and as a consequence the minimum time required for the algorithm to complete is d M(n!\Gamma1)
e. The
number of message transmissions required can be found as follows: A message destined to a specific node
must travel as many edges as the shortest distance from the source to this node. If we sum the shortest
distances from the source to each node, this will be the minimum number of message transmissions required
for this problem:
is the number of nodes at a distance k from the source and d has been shown to be [3]:
Hn is the n-th harmonic number: Hn = 1 + 1/2 + ... + 1/n. Thus the minimum number of message transmissions
required for a single node scattering algorithm on Sn is:
In the fault tolerant single node scattering algorithm the source node transmits the M (n! \Gamma 1) messages to all
of its neighbors simultaneously. Each of the spanning trees rooted at the nodes adjacent
to the source node are used for a single node scattering algorithm. The number of message transmissions
required is
1). Since the source node must transmit
messages the time required for this algorithm is M (n! \Gamma 1).
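Since the closed form for the sum of the shortest distances is only sketched above, a brute-force check is easy for small n; the Python sketch below (illustrative, reusing the distance formula of section 2) prints the minimum number of message transmissions for single node scattering with M = 1.

    from itertools import permutations

    def d_In(node):
        # c_i + s_i, minus 2 when the first symbol is not 1 (section 2).
        seen, c, s = set(), 0, 0
        for start in range(1, len(node) + 1):
            if start in seen:
                continue
            length, pos = 0, start
            while pos not in seen:
                seen.add(pos)
                length += 1
                pos = node[pos - 1]
            if length > 1:
                c, s = c + 1, s + length
        return c + s if node[0] == 1 else c + s - 2

    def scattering_transmissions(n):
        # Sum of the shortest distances from the source to every node.
        return sum(d_In(p) for p in permutations(range(1, n + 1)))

    print([scattering_transmissions(n) for n in (3, 4, 5)])   # starts 9, ...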
Total exchange on an interconnection network is the problem where each node wishes to send a distinct
message to every other node, in other words, every possible pair of nodes exchange distinct messages. The
fault tolerant total exchange algorithm is equivalent to n! different fault tolerant single node scattering
algorithms, one from each node of Sn . Thus the minimum number of message transmissions required is
1). Under the all-port assumption n!(n \Gamma 1) edges can be
used for message transmission at each time step simultaneously. Thus the minimum time required for the
algorithm to complete is M
All the lower bounds were derived for degree of fault tolerance 2. This means that each node receives
each message through paths. The lower bounds for the algorithms with controlled degree of
fault tolerance will be derived in the following sections along with the description of the algorithms.
Table
below summarizes the lower bounds for all of the above problems, with degree of fault tolerance
messages transmitted to each node. By t n we denote the quantity n!(n
problem time number of transmissions
single node broadcasting d M
fault tolerant single node broadcasting M
fault tolerant multinode broadcasting M
fault tolerant single node scattering M
Table
1: Lower bounds on the star network.
The algorithms derived here for all of the above problems are optimal in terms of time and number of
message transmissions. Some of the methods used in this sections to derive lower bounds for the communications
problems under consideration are similar to the methods used in [7] to derive lower bounds for
similar problems on the hypercube network.
4.2 Optimal single node broadcasting
In a single node broadcasting algorithm one node wishes to transmit a single message or a group of messages to
each other node. To broadcast M messages from a node of Sn , by pipelining the communication from the root
towards the leaves along any ⌊3(n-1)/2⌋ depth, breadth first spanning tree, under the all-port communication
assumption, the number of time steps required is proportional to the full number M of messages,
which is not optimal. Since Sn is a
regular network with degree n - 1, the lower bound for the single node broadcasting algorithm of M messages
assuming all ports of a node can be used simultaneously for message transmission is ⌈M/(n-1)⌉ + ⌊3(n-1)/2⌋. This
lower bound can be achieved if the M messages are grouped into n - 1 packets, each of size M/(n - 1). Each of
the packets is communicated over a different edge of the source node h and is pipelined down a different
edge-disjoint subtree of the EDT h rooted at the source node. As soon as a node receives a message from
its parent node in subtree T EDTh
copy, and forwards the message to its children
nodes in the same subtree. The result is that each node receives each of the packets of the message
through a different parallel path from the source node. The time required for this algorithm to complete is
at most ⌈M/(n-1)⌉ + ⌊3(n-1)/2⌋ + 4,
which is almost optimal, since the depth of the multiple edge-disjoint spanning
trees structure is at most ⌊3(n-1)/2⌋ + 4. The number of message transmissions required for the algorithm is M (n! - 1),
since each node receives each of the M messages once, which is the minimum possible. Using
this algorithm the resources of the network are fully utilized since all communication edges contribute to the
distribution of the information.
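The speedup of roughly n - 1 for a large number of messages can be seen numerically with the following illustrative sketch; the additive constants in the two time expressions are assumptions (only the packet size ceil(M/(n-1)) and the depth bound floor(3(n-1)/2) + 4 come from the text), so the sketch shows the asymptotic ratio rather than exact step counts.

    import math

    def single_tree_time(M, n):
        # Pipelining M messages down one breadth-first spanning tree of
        # depth floor(3(n-1)/2): about M plus the depth.
        return M + 3 * (n - 1) // 2

    def edt_time(M, n):
        # Section 4.2: n - 1 packets of size ceil(M/(n-1)), each pipelined
        # down an edge-disjoint subtree of depth at most floor(3(n-1)/2) + 4.
        return math.ceil(M / (n - 1)) + 3 * (n - 1) // 2 + 4

    n, M = 10, 10**6
    print(single_tree_time(M, n) / edt_time(M, n))   # close to n - 1 = 9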
4.3 Fault tolerant single node broadcasting
The multiple edge-disjoint spanning trees structure can be used to derive a fault tolerant single node broadcasting
algorithm under the all-port communication assumption. Assume that the source node h, wishes to
broadcast M messages to all the other nodes. Node h sends the messages it wishes to broadcast through
all its incident edges simultaneously and these are pipelined down each of the
rooted at the nodes adjacent to h. As soon as a node receives a message from its parent node in subtree
and forwards the message to its children nodes in the same subtree. Using
this algorithm each of the nodes of Sn receives the same message through n - 1 parallel paths. If up to n - 2
node or edge faults occur in the system that block the message from passing we are still guaranteed that each
node receives a copy of the message and as a consequence the algorithm is (n - 2)-fault tolerant. If we assume
that the system has faults that alter the contents of the messages instead of just blocking or destroying it,
the fault tolerance degree of the algorithm decreases since an election algorithm is required at each node in
order to select the intact message. A brief discussion on the election algorithms can be found in [28]. The
time required for this algorithm to complete using the multiple edge-disjoint spanning trees structure is at
most
which is almost optimal, since the depth of the multiple edge-disjoint spanning trees
is at most b 3(n\Gamma1)
4. The number of message transmissions required is since each node
receives each of the M messages which is the minimum possible.
Using a similar technique we can control the degree of fault tolerance of the single node broadcasting
algorithm. Assume that the required degree of fault tolerance is 2. This means that each node
must receive each message through x parallel paths, or in other words that each message must be pipelined
down at least x edge-disjoint subtrees rooted at the nodes adjacent to the source node. However the number
of available edge-disjoint subtrees is n \Gamma 1. In order to achieve maximum utilization of the network resources
the M messages are grouped into
x packets, each of size M
(n\Gamma1)=x
must divide this to
work properly). Each of the
x packets is pipelined down x edge-disjoint subtrees. As a consequence, all
of the
subtrees are used for message transmission. The result is that
each node receives each of the
x packets through x of its incident edges, and as an extension through
x parallel paths from the source node, and as a consequence the fault tolerance degree of the algorithm is
1. The time required for the algorithm is at most Mx
which is almost optimal, since
the depth of the multiple edge-disjoint spanning trees is at most b 3(n\Gamma1)
4, and in addition we have
the flexibility of controlling the degree of fault tolerance based on how reliable the system is. The number
of message transmissions required is M (n! \Gamma 1)x which is again optimal, since each node receives each of
the M messages x times. To illustrate the algorithm assume that node 12345 of S 5 , which is the root of
EDT_12345, wishes to broadcast M messages with degree of fault tolerance one (x = 2, which means that up to one faulty node or edge should be tolerated by the
algorithm). The message of size M is split into (n - 1)/x = 2 packets,
each of size Mx/(n - 1) = M/2. Each
of the packets is pipelined down x = 2 of the n - 1 = 4 edge-disjoint subtrees of
S_5. As a consequence, each node receives each packet through two parallel paths and the fault
tolerance degree of the algorithm is one.
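The packeting rule with a controlled degree of fault tolerance is easy to tabulate; the sketch below (illustrative, with an assumed additive constant in the time bound) reproduces the S5 example: n = 5 and x = 2 give 2 packets of size M/2, each pipelined down 2 of the 4 edge-disjoint subtrees.

    import math

    def ft_broadcast_plan(M, n, x):
        # Degree of fault tolerance x - 1: (n-1)/x packets of size Mx/(n-1),
        # each pipelined down x of the n - 1 edge-disjoint subtrees.
        assert (n - 1) % x == 0, "x must divide n - 1"
        packets = (n - 1) // x
        size = math.ceil(M * x / (n - 1))
        time_bound = size + 3 * (n - 1) // 2 + 4      # packet size + depth
        transmissions = M * (math.factorial(n) - 1) * x
        return packets, size, time_bound, transmissions

    print(ft_broadcast_plan(1000, 5, 2))   # (2, 500, 510, 238000)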
4.4 Fault tolerant multinode broadcasting
In a multinode broadcasting algorithm, each node wishes to transmit a single message, or a group of messages
to each one of the other nodes. As a consequence each of the nodes should be the root of multiple edge-disjoint
spanning trees. The EDT In can be replicated at any other node h of Sn using the operation of
translation with respect to h, as it was explained at the end of section 3 (see definition 1). Fault tolerance
can be achieved if each node receives each message through paths. However in this case we
have to guarantee that no conflicts arise during the execution of the algorithm, since all nodes are sources of
messages. Under the all-port assumption are available on Sn at each time step for message
transmission. This means that the messages originating from a specific node should be transmitted through
at most n \Gamma 1 edges, at each time step. Let us denote by L k (h) the set of edges on which messages originating
at node h are transmitted at time step k of the algorithm. For each k, L k (h) is obtained from L k using
the operation of translation with respect to h (if (i; definition
1). The following lemma is enough to guarantee that no conflicts arise during the execution of the algorithm.
Lemma 7: If for each k, the edges in L k are all of different dimensions, then for each k, the sets
ranges over all nodes of Sn , are disjoint.
Proof: Assume two different edges (i; and take the edges (T h (i); T h (j)) 2
are obtained by (i; respectively, under translation
with respect to two different nodes of Sn , h and h 0 . Also assume that (T h (i); T h
Since the dimension of each edge is preserved under translation (lemma 1), this means that dim(i;
our assumption that (i; j) and
are two different edges of L k (I n ). 2
The fault tolerant multinode broadcasting algorithm on Sn , assuming each node wishes to broadcast M
messages, proceeds as follows:
1. Each source node sends the M messages it wishes to broadcast to all of its neighbors simultaneously.
2. As soon as a node receives a group of M messages from its parent in subtree T EDTh
k , it saves a copy,
and forwards the messages to its leftmost child in the same subtree. However, if the node is a leaf of
subtree T EDTh
k , it sends an acknowledgement to its parent node in the subtree.
3. When a node receives an acknowledgement from one of its children nodes in subtree T EDTh
k , it forwards
the M messages it received from its parent in this subtree to its next child node in the subtree. However,
if the node has no more children in this subtree, it sends an acknowledgement to its parent node in the
subtree.
The algorithm terminates when each source node receives acknowledgements from all its neighbors. This
algorithm corresponds to a depth first traversal of the edges in each of the edge-disjoint subtrees. This means
that at each time step of the algorithm corresponding edges of the subtrees, T EDTh
rooted at
the nodes adjacent to h, are used simultaneously for message transmission. Since corresponding edges of the
subtrees of EDT_In are all rotations of each other, they are all of different dimensions (lemma 6), and thus
the requirement of lemma 7 for conflict avoidance is satisfied by the algorithm.
The time required for this algorithm to complete is M (n! \Gamma 1) which is optimal. The number of message
transmissions required is which is the minimum possible, since each node receives each of
the times. The way the algorithm was constructed, the degree of fault tolerance
is which means that each message is transmitted through all of the edge-disjoint subtree rooted at
the nodes adjacent to each source node. Controlling the degree of fault tolerance is possible by a technique
similar to the one described in subsection 4.3.
4.5 Fault tolerant single node scattering
In a single node scattering algorithm one node wishes to transmit distinct messages to each one of the other
nodes. The single node scattering algorithm on Sn , under the all-port assumption, can become fault tolerant
using the multiple edge-disjoint spanning trees. A message destined to a specific node is transmitted through
each of the edge-disjoint subtrees rooted at the nodes adjacent to the source node. In each subtree, messages
destined to nodes that are the furthest from the source are transmitted first.
If each node is the destination of M messages, the time required for this algorithm to complete is M (n!\Gamma1),
which is optimal, since each edge incident to the source node constitutes a bottleneck for M (n! \Gamma 1) messages.
The number of message transmissions required is which is asymptotically
optimal, because the lengths of the n - 1 parallel paths between two nodes of Sn are not all equal to the length
of a shortest path between the two nodes [8]. Controlling the degree of fault tolerance is possible using a
technique similar to the one described in subsection 4.3.
4.6 Fault tolerant total exchange
In a total exchange algorithm each node wishes to transmit distinct messages to each other node. As a
consequence, each of the nodes should be the root of multiple edge-disjoint spanning trees. The EDT In can
be replicated at any other node h of Sn using the operation of translation with respect to h (see definition
1). Fault tolerance can be achieved if each node receives each message through paths. As in
the fault tolerant multinode broadcasting algorithm, we have to guarantee that no conflicts arise during the
execution of the algorithm, since all nodes are sources of messages, or in other words we have to guarantee
that the requirement of lemma 7 is satisfied.
The way node I n transmits the messages through the edge-disjoint subtrees rooted at its neighbors is the
following: For each node i of Sn , I n sends the messages destined to nodes R k\Gamma2 (i), 2 - k - n, respectively
through subtrees T EDT I n
As soon as a group of messages reaches its destination
another group is sent from I_n. Nodes R^{k-2}(i), 2 <= k <= n, form an unfolded necklace of nodes (see definition
5) at a specific level of EDT In , since subtrees T EDT I n
rotations of each other (lemma
6). As a consequence the n - 1 paths that lead from I_n to nodes R^{k-2}(i), 2 <= k <= n, respectively through
subtrees T EDT I n
rotations of each other. This means that the edges at each level
of the paths are of different dimensions and the requirement of lemma 7 for conflict avoidance is satisfied. If
at a specific instance of the algorithm node I n transmits messages to nodes R k\Gamma2 (i), 2 - k - n, respectively
through subtrees T EDT I n
simultaneously, then any other node h of Sn transmits messages to
nodes T h (R k\Gamma2 (i)), 2 - k - n, respectively through subtrees T EDTh
simultaneously. This is a
simple application of the operation of translation with respect to h.
If M messages must be transmitted to each node from each other node the time required for the algorithm
to complete is M (n! - 1) + O(M t_n), which is asymptotically optimal. The number of message transmissions
required is which is again asymptotically optimal. This algorithm
is only asymptotically optimal because the lengths of the n - 1 parallel paths between two nodes of Sn
are not all equal to the length of a shortest path between the two nodes [8]. The way the algorithm was
described the degree of fault tolerance is which means that each message is transmitted through each
different edge-disjoint subtree rooted at the nodes adjacent to each source node. Controlling the degree of
fault tolerance is possible by a technique similar to the one described in subsection 4.3.
5 Conclusions
We presented several algorithms on the star interconnection network, in the areas of data communication
and fault tolerance. New definitions like that of the rotation operation and the necklace for nodes of Sn were
introduced to facilitate the construction of multiple edge-disjoint spanning trees on Sn . As a result a multiple
edge-disjoint spanning trees structure of optimal depth was constructed on the star interconnection network.
Using this structure an optimal single node broadcasting algorithm and optimal fault tolerant algorithms for
the single node broadcasting, multinode broadcasting, single node scattering and total exchange problems
on the star network were presented. All of the algorithms operate under the all-port assumption and are
optimal in terms of time and number of message transmissions. Constructing multiple edge-disjoint spanning
trees on the star network that would offer optimal solutions to the above problems under the assumption
that each node can exchange a message of fixed length with only one of its neighbors at each time step, i.e.
the one-port communication assumption, is a problem that remains open.
We now provide a comparison of the algorithms presented in this paper for the four communication
problems under consideration on the star network, with algorithms for the same problems, under exactly the
same assumptions, on the popular hypercube network. Tables 2 and 3 below give the number of message
transmissions and the communication time required for each of the problems on the Sn and the hypercube
network of dimension k, denoted by C k , respectively. For the fault tolerant communication algorithms the
degree of fault tolerance is assumed to be x.
problem time number of transmissions
single node broadcasting d M
fault tolerant single node broadcasting d Mx
fault tolerant multinode broadcasting d Mx(n!\Gamma1)
fault tolerant single node scattering d Mx(n!\Gamma1)
fault tolerant total exchange d Mx(n!\Gamma1)
Table
2: Lower bounds on the star network of dimension n.
problem time number of transmissions
single node broadcasting d M
fault tolerant single node broadcasting d Mx
fault tolerant multinode broadcasting d Mx(2 k \Gamma1)
fault tolerant single node scattering d Mx(2 k \Gamma1)
fault tolerant total exchange d Mx(2 k \Gamma1)
Table
3: Lower bounds on the hypercube network of dimension k.
In table 4 below the performances of the two networks are compared. Since the star network is defined
for numbers of nodes which are factorials, while the hypercube is defined for powers of two, the comparison
cannot be exact. In the comparison below a hypercube network with O(2^k) = O(n!) nodes and degree k = O(n log n) is assumed.
From table 4 we notice that whenever the performance of an algorithm depends on the degree of the
network, as for example the communication times of the fault tolerant multinode broadcasting, single node
scattering and total exchange algorithms, the hypercube network performs better than the star network by
a factor of log n. On the other hand, whenever the performance of an algorithm depends on the diameter
of the network, or the lengths of the shortest paths between nodes, as for example the number of message
transmissions of the fault tolerant single node scattering and total exchange algorithms, the star network
performs better by a factor of log n. The communication times of the single node broadcasting and the fault
tolerant single node broadcasting algorithms depend on both the degree and the diameter of the networks
and this is reflected at the comparison of their performances. In any other case the performance of the two
networks is the same. However we should not forget that the star network has smaller degree resulting in
processors with a smaller number of ports and as a consequence smaller cost.
problem net time number of transmissions
single node broadcasting Sn O( M
fault tolerant single node broadcasting Sn O( Mx
Cn O( Mx
fault tolerant multinode broadcasting Sn O( Mxn!
Cn O( Mxn!
fault tolerant single node scattering Sn O( Mxn!
Cn O( Mxn!
fault tolerant total exchange Sn O( Mxn!
Cn O( Mxn!
Table
4: Comparison of star and hypercube performances.
--R
"The Star Graph: An Attractive Alternative to the Hypercube"
"The Fault Tolerance of Star Graphs"
"A Group Theoretic Model for Symmetric Interconnection Net- works"
"A Novel Routing Scheme on the Star and Pancake Networks and its Applica- tions"
Parallel and Distributed Computation: Numerical Methods
"Optimal Communication Algorithms for Hypercubes"
"A Comparative Study of Topological Properties of Hypercubes and Star Graphs,"
"Three Disjoint Path Paradigms in Star Networks"
"Parallel Algorithms for the Fourier and Other Mathematical Transforms,"
"A Parallel Algorithm for Computing Fourier Transforms on the Star Graph,"
"Optimal Communication Algorithms on the Star Interconnection Net- work"
"Optimal Communication Algorithms on Star Graphs Using Spanning Tree Constructions"
"Methods and Models of Communication in Usual Networks"
"Arc Disjoint Spanning Trees on Cube Connected Cycles Network"
"Fault-tolerant Gossiping on Hypercube Multicomputers"
"Fault Tolerant Routing in the Star and Pancake Interconnection Network"
"Optimum Broadcasting and Personalized Communication in Hypercubes"
"Characterization of Node Disjoint (parallel) Paths in Star Graphs"
The Art of Computer Programming
Complexity Issues in VLSI: Optimal Layouts for the Shuffle-Exchange and Other Net- works
"Optimal Broadcasting on the Star Graph"
"An Efficient Sorting Algorithm for the Star Graph Interconnection Net- work"
"Embedding Hamiltonians and Hypercubes in Star Inter-connection Graphs"
"Data Communication and Computational Geometry on the Star and Pancake Networks"
"On the Tree Structure of the Star Graph"
"On the Properties of Breadth First Spanning Tree of the Star Graph"
"Reliable Broadcast in Hypercube Multicomputers"
"A Fault Tolerance Routing Algorithm in Star Graphs"
--TR
--CTR
Abderezak Touzene, Optimal all-ports collective communication algorithms for the k-ary n-cube interconnection networks, Journal of Systems Architecture: the EUROMICRO Journal, v.50 n.4, p.221-231, March 2004
Abderezak Touzene , Khaled Day , Burkhard Monien, Edge-disjoint spanning trees for the generalized butterfly networks and their applications, Journal of Parallel and Distributed Computing, v.65 n.11, p.1384-1396, November 2005
Satoshi Fujita, A Fault-Tolerant Broadcast Scheme in the Star Graph under the Single-Port, Half-Duplex Communication Model, IEEE Transactions on Computers, v.48 n.10, p.1123-1126, October 1999
Cheng-Kuan Lin , Hua-Min Huang , Lih-Hsing Hsu, The super connectivity of the pancake graphs and the super laceability of the star graphs, Theoretical Computer Science, v.339 n.2, p.257-271, 12 June 2005
N. W. Lo , Bradley S. Carlson , D. L. Tao, Fault Tolerant Algorithms for Broadcasting on the Star Graph Network, IEEE Transactions on Computers, v.46 n.12, p.1357-1362, December 1997
Chin-Tsai Lin, Embedding k(n - spanning trees in arrangement graphs, Journal of Parallel and Distributed Computing, v.63 n.12, p.1277-1287, December
Satoshi Fujita, Neighborhood Information Dissemination in the Star Graph, IEEE Transactions on Computers, v.49 n.12, p.1366-1370, December 2000
Chi-Chang Chen , Jianer Chen, Nearly Optimal One-to-Many Parallel Routing in Star Networks, IEEE Transactions on Parallel and Distributed Systems, v.8 n.12, p.1196-1202, December 1997
Shan-Chyun Ku , Biing-Feng Wang , Ting-Kai Hung, Constructing Edge-Disjoint Spanning Trees in Product Networks, IEEE Transactions on Parallel and Distributed Systems, v.14 n.3, p.213-221, March
Adele A. Rescigno, Optimally Balanced Spanning Tree of the Star Network, IEEE Transactions on Computers, v.50 n.1, p.88-91, January 2001
Abderezak Touzene, Edges-disjoint spanning trees on the binary wrapped butterfly network with applications to fault tolerance, Parallel Computing, v.28 n.4, p.649-666, April 2002 | communication algorithm;interconnection network;spanning tree;edge-disjoint trees;parallel algorithm;star network;fault tolerance |
627189 | Automatic Accurate Cost-Bound Analysis for High-Level Languages. | This paper describes a language-based approach for automatic and accurate cost-bound analysis. The approach consists of transformations for building cost-bound functions in the presence of partially known input structures, symbolic evaluation of the cost-bound function based on input size parameters, and optimizations to make the overall analysis efficient as well as accurate, all at the source-language level. The calculated cost bounds are expressed in terms of primitive cost parameters. These parameters can be obtained based on the language implementation or can be measured conservatively or approximately, yielding accurate, conservative, or approximate time or space bounds. We have implemented this approach and performed a number of experiments for analyzing Scheme programs. The results helped confirm the accuracy of the analysis. | Introduction
Analysis of program cost, such as running time and space consumption, is important for
real-time systems, embedded systems, interactive environments, compiler optimizations,
performance evaluation, and many other computer applications. It has been extensively
This work was supported in part by NSF under Grant CCR-9711253 and ONR under Grants N00014-99-
1-0132 and N00014-01-1-0109. Yanhong A. Liu's address: Computer Science Department, State University
of New York at Stony Brook, Stony Brook, NY 11794-4400. Gustavo Gomez's address: Computer Science
Department, Indiana University, Bloomington, IN 47405-7104. Corresponding author: Yanhong A. Liu.
Email: [email protected]. Tel: 631-632-8463. Fax: 631-632-8334. URL: http://www.cs.sunysb.edu/liu/.
studied in many fields of computer science: algorithms [25, 16, 17, 53], programming languages
[50, 26, 41, 44], and systems [46, 37, 43, 42]. It is particularly important for many
applications, such as real-time systems and embedded systems, to be able to predict accurate
time bounds and space bounds automatically and efficiently, and it is particularly desirable
to be able to do so for high-level languages [46, 37, 38].
For analyzing system running time, since Shaw proposed timing schema for high-level
languages [46], a number of people have extended it for analysis in the presence of compiler
optimizations [37, 12], pipelining [20, 28], cache memory [4, 28, 14], etc. However, there
remains an obvious and serious limitation of the timing schema, even in the absence of
low-level complications. This is the inability to provide loop bounds, recursion depths,
or execution paths automatically and accurately for the analysis [36, 3]. For example, the
inaccurate loop bounds cause the calculated worst-case time to be as much as 67% higher than
the measured worst-case time in [37], while the manual way of providing such information
is potentially an even larger source of error, in addition to its inconvenience [36]. Various
program analysis methods have been proposed to provide loop bounds or execution paths [3,
13, 19, 21]; they ameliorate the problem but can not completely solve it, because they apply
only to some classes of programs or use approximations that are too crude for the analysis.
Similarly, loop bounds and recursion depths are needed also for space analysis [38].
This paper describes a language-based approach for automatic and accurate cost-bound
analysis. The approach combines methods and techniques studied in theory, languages, and
systems. We call it a language-based approach, because it primarily exploits methods and
techniques for static program analysis and transformation.
The approach consists of transformations for building cost-bound functions in the presence
of partially known input structures, symbolic evaluation of the cost-bound function
based on input size parameters, and optimizations to make the overall analysis efficient as
well as accurate, all at the source-language level. We describe analysis and transformation
algorithms and explain how they work. The calculated cost bounds are expressed in terms of
primitive cost parameters. These parameters can be obtained based on the language implementation
or be measured conservatively or approximately, yielding accurate, conservative,
or approximate time or space bounds. The cost analysis currently does not include cache
analysis. We have implemented this approach and performed a number of experiments for
analyzing Scheme programs. The results helped confirm the accuracy of the analysis. We
describe our prototype system, ALPA, as well as the analysis and measurement results.
This approach is general in the sense that it works for multiple kinds of cost analysis.
Our main analysis sums the cost in terms of different operations performed; it gives upper
bounds for all kinds of operations, such as arithmetic operations, data field selections, and
constructor allocations. Variations of it can analyze stack space, live heap space, output
size, etc., and can analyze lower bounds as well as upper bounds. The basic ideas also apply
to other programming languages.
The rest of the paper is organized as follows. Section 2 outlines our language-based ap-
proach. Sections 3, 4, and 5 present the analysis and transformation methods and techniques.
Section 6 describes our implementation and experimental results. Section 7 compares with
related work and concludes.
2 Language-based approach
2.1 Cost and cost bound
Language-based cost-bound analysis starts with a given program written in a high-level
language, such as C or Lisp. The first step is to build a cost function that (takes the
same input as the original program but) returns the cost in place of (or in addition to) the
original return value. This is done easily by associating a parameter with each program
construct representing its cost and by summing these parameters based on the semantics of
the constructs [50, 10, 46]. We call parameters that describe the costs of program constructs
primitive cost parameters. To calculate actual cost bounds based on the cost function, three
difficult problems must be solved.
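As a toy illustration of this construction (hand-written here, not produced by the system described later), consider a list-length program and the cost function derived from it: each construct contributes one primitive cost parameter, collected in a dictionary C, and the transformed function returns the original value together with the accumulated cost.

    def length(xs):
        if xs == []:
            return 0
        return 1 + length(xs[1:])

    def length_cost(xs, C):
        # Returns (original value, cost); C holds the primitive cost
        # parameters for the constructs used: test, call, add.
        if xs == []:
            return 0, C["test"]
        v, c = length_cost(xs[1:], C)
        return 1 + v, C["test"] + C["call"] + C["add"] + c

    value, cost = length_cost([7, 7, 7], {"test": 1, "call": 1, "add": 1})
    assert (value, cost) == (3, 10)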
First, since the goal is to calculate cost without being given particular inputs, the calculation
must be based on certain assumptions about inputs. Thus, the first problem is to
characterize the input data and reflect them in the cost function. In general, due to imperfect
knowledge about the input, the cost function is transformed into a cost-bound function.
In algorithm analysis, inputs are characterized by their size; accommodating this requires
manual or semi-automatic transformation of the cost (time or space) function [50, 26, 53].
The analysis is mainly asymptotic, and primitive cost parameters are considered independent
of input size, i.e., are constants while the computation iterates or recurses. Whatever values
of the primitive cost parameters are assumed, a second problem arises, and it is theoretically
challenging: optimizing the cost-bound function to a closed form in terms of the input
size [50, 10, 26, 41, 17, 7]. Although much progress has been made in this area, closed forms
are known only for subclasses of functions. Thus, such optimization can not be automatically
done for analyzing general programs.
In systems, inputs are characterized indirectly using loop bounds or execution paths in
programs, and such information must in general be provided by the user [46, 37, 36, 28],
even though program analyses can help in some cases [3, 13, 19, 21]. Closed forms in
terms of parameters for these bounds can be obtained easily from the cost function. This
isolates the third problem, which is most interesting to systems research: obtaining values of
primitive cost parameters that depend on compilers, run-time systems, operating systems,
and machine hardware. In recent years, much progress has been made in analyzing low-level
dynamic factors, such as clock interrupts, memory refreshes, cache usage, instruction scheduling,
and parallel architectures, for time analysis [37, 4, 28, 14]. Nevertheless, the inability to compute
loop bounds or execution paths automatically and accurately has led calculated bounds to
be much higher than measured worst-case time.
In the programming-languages area, Rosendahl proposed using partially known input structures
[41]. For example, instead of replacing an input list l with its length n, as done in
algorithm analysis, or annotating loops with numbers related to n, as done in systems, we
simply use as input a list of n unknown elements. We call parameters, such as n, for describing
partially known input structures input size parameters. The cost function is then
transformed automatically into a cost-bound function: at control points where decisions depend
on unknown values, the maximum cost of all possible branches is computed; otherwise,
the cost of the chosen branch is computed. Rosendahl concentrated on proving the correctness
of this transformation. He assumed constant 1 for primitive cost parameters and relied
on optimizations to obtain closed forms in terms of input size parameters, but again closed
forms can not be obtained for all cost-bound functions.
2.2 Language-based cost-bound analysis
Combining results from theory to systems, and exploring methods and techniques for static
program analysis and transformation, we have studied a language-based approach for computing
cost bounds automatically, efficiently, and more accurately. The approach has three
main components.
First, we use an automatic transformation to construct a cost-bound function from the
original program based on partially known input structures. The resulting function takes
input size parameters and primitive cost parameters as arguments. The only caveat here is
that the cost-bound function might not terminate. However, nontermination occurs only if
the recursive/iterative structure of the original program depends on unknown parts in the
given partially known input structures.
Then, to compute worst-case cost bounds efficiently without relying on closed forms,
we optimize the cost-bound function symbolically with respect to given values of input
size parameters. This is based on partial evaluation and incremental computation. This
symbolic evaluation always terminates provided the cost-bound function terminates. The
resulting function expresses cost bounds as counts of different operations performed, where
the cost of each kind of operation is denoted by a primitive cost parameter.
A third component consists of transformations that enable more accurate cost bounds
to be computed: lifting conditions, simplifying conditionals, and inlining nonrecursive func-
tions. The transformations should be applied on the original program before the cost-bound
function is constructed. They may result in larger code size, but they allow subcomputations
based on the same control conditions to be merged, leading to more accurate cost bounds,
which can be computed more efficiently as well.
The approach is general because all three components we developed are based on general
methods and techniques. Each particular component is not meant to be a new analysis or
transformation, but the combination of them for the application of automatic and accurate
cost-bound analysis for high-level languages is new. In the resulting cost bounds, primitive
cost parameters can be obtained based on the language implementation or be measured
conservatively or approximately, to give accurate, conservative, or approximate time or space
bounds.
We have implemented the analyses and transformations for a subset of Scheme [2, 11, 1],
a dialect of Lisp. All the transformations are done automatically, and the cost bounds,
expressed as operation counts, are computed efficiently and accurately. Example programs
analyzed include a number of classical sorting programs, matrix computation programs,
and various list processing programs. We also estimated approximate bounds on the actual
running times by measuring primitive cost parameters for running times using control loops,
and calculated accurate bounds on the heap space allocated for constructors in the programs
based on the number of bytes allocated for each constructor by the compiler. We used a
functional subset of Scheme for three reasons.
- Functional programming languages, together with features like automatic garbage collection,
have become increasingly widely used, yet work on calculating the actual running
time and space of functional programs has been lacking.
- Much work has been done on analyzing and transforming functional programs, including
complexity analysis, and it can be used for estimating actual running time and
space efficiently and accurately as well.
- Analyses and transformations developed for functional languages can be applied to
improve analyses of imperative languages as well [52].
All our analyses and transformations are performed at the source level. This allows implementations
to be independent of compilers and underlying systems and allows analysis
results to be understood at source level.
2.3 Language
We use a rst-order, call-by-value functional language that has structured data, primitive
arithmetic, Boolean, and comparison operations, conditionals, bindings, and mutually recursive
function calls. A program is a set of mutually recursive function definitions of the form
f(v1, ..., vn) , e, where an expression e is given by the grammar below: 1
e ::=  v                            variable reference
    |  c(e1, ..., en)               data construction
    |  p(e1, ..., en)               primitive operation
    |  if e1 then e2 else e3        conditional expression
    |  let v = e1 in e2 end         binding expression
    |  f(e1, ..., en)               function application
For binary primitive operations, we switch between infix and prefix notation,
depending on whichever is easier for the presentation. Following Lisp and Scheme, we use
1 The keywords are taken from ML [35]. Our implementation supports both this syntax and Scheme
syntax.
cons(h; t) to construct a list with head h and tail t, and use car(l) and cdr(l) to select the
head and tail, respectively, of list l. We use nil to denote an empty list, and use null(l) to
test whether l is an empty list. For example, the program below selects the least element in
a non-empty list.
least(x) , if null(cdr(x)) then car(x)
           else let s = least(cdr(x))
                in if car(x) <= s then car(x) else s end
We use least as a small running example. To present various analysis results, we also use
several other examples: insertion sort, selection sort, merge sort, set union, list reversal
(the standard linear-time version), and reversal with append (the standard quadratic-time
version).
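In the Scheme syntax that the implementation also accepts, least could be transliterated as follows; this is a sketch of ours, not code taken from the paper.

    ;; Select the least element of a non-empty list.
    (define (least x)
      (if (null? (cdr x))
          (car x)
          (let ((s (least (cdr x))))
            (if (<= (car x) s) (car x) s))))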
Even though this language is small, it is sufficiently powerful and convenient to write
sophisticated programs. Structured data is essentially records in Pascal, structs in C, and
constructor applications in ML. Conditionals and bindings easily simulate conditional statements
and assignments, and recursions can simulate loops. We can also see that cost analysis
in the presence of arrays and pointers is not fundamentally harder [37], because the costs
of the program constructs for them can be counted in a similar way as costs of other con-
structs. For example, accessing an array element a[i] has the cost of accessing i, offsetting
the element address from that of a, and finally getting the value from that address. Note
that side effects caused by these features often make other analyses difficult [9, 22].
For pure functional languages, higher-order functions and lazy evaluations are important.
Cost functions that accommodate these features have been studied [49, 44]. The symbolic
evaluation and optimizations we describe apply to them as well.
3 Constructing cost-bound functions
3.1 Constructing cost functions
We first transform the original program to construct a cost function, which takes the original
input and primitive cost parameters as arguments and returns the cost. This is straightforward
based on the semantics of the program constructs.
Given an original program, we add a set of cost functions, one for each original function,
which simply count the cost while the original program executes. The algorithm, given below,
is presented as a transformation C on the original program, which calls a transformation C e
to recursively transform subexpressions. For example, a variable reference is transformed
into a symbol C varref representing the cost of a variable reference; a conditional statement
is transformed into the cost of the test plus, if the condition is true, the cost of the true
branch, otherwise, the cost of the false branch, and plus the cost for the transfers of control.
We use cf to denote the cost function for f .
program:              C[[ f(v1, ..., vn) , e ]]  =  { f(v1, ..., vn) , e,   cf(v1, ..., vn) , Ce[[ e ]] }
variable reference:   Ce[[ v ]]  =  Cvarref
data construction:    Ce[[ c(e1, ..., en) ]]  =  Cc + Ce[[ e1 ]] + ... + Ce[[ en ]]
primitive operation:  Ce[[ p(e1, ..., en) ]]  =  Cp + Ce[[ e1 ]] + ... + Ce[[ en ]]
conditional:          Ce[[ if e1 then e2 else e3 ]]  =  Cif + Ce[[ e1 ]] + (if e1 then Ce[[ e2 ]] else Ce[[ e3 ]])
binding:              Ce[[ let v = e1 in e2 end ]]  =  Clet + Ce[[ e1 ]] + (let v = e1 in Ce[[ e2 ]] end)
function call:        Ce[[ f(e1, ..., en) ]]  =  Ccall + Ce[[ e1 ]] + ... + Ce[[ en ]] + cf(e1, ..., en)
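To make the transformation concrete, the following sketch shows how such a syntax-directed pass could be written in Scheme for a tiny subset with conditionals, let-bindings, primitive operations, and function calls. It only illustrates the idea and is not the ALPA implementation (which is written in SSL); the names cost-exp and known-primitives, and the C-/c naming convention for cost parameters and cost functions, are ours.

    ;; A sketch of transformation C as a source-to-source pass.
    ;; Cost parameters are represented by symbols such as C-varref and C-if.
    (define known-primitives '(car cdr cons null? + - <=))

    (define (cost-exp e)
      (cond
        ((symbol? e) 'C-varref)                     ; variable reference
        ((not (pair? e)) 0)                         ; constants, for convenience
        ((eq? (car e) 'if)                          ; conditional
         `(+ C-if ,(cost-exp (cadr e))
             (if ,(cadr e) ,(cost-exp (caddr e)) ,(cost-exp (cadddr e)))))
        ((eq? (car e) 'let)                         ; (let ((v e1)) e2)
         (let* ((binding (car (cadr e)))
                (v (car binding)) (e1 (cadr binding)) (e2 (caddr e)))
           `(+ C-let ,(cost-exp e1) (let ((,v ,e1)) ,(cost-exp e2)))))
        ((memq (car e) known-primitives)            ; primitive operation or constructor
         `(+ ,(string->symbol (string-append "C-" (symbol->string (car e))))
             ,@(map cost-exp (cdr e))))
        (else                                       ; function call f(e1,...,en)
         `(+ C-call ,@(map cost-exp (cdr e))
             (,(string->symbol (string-append "c" (symbol->string (car e))))
              ,@(cdr e))))))

    ;; (cost-exp '(car x))  =>  (+ C-car C-varref)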
Applying this transformation to program least, we obtain function least as originally
given and cost function cleast below, where infix notation is used for additions, and unnecessary
parentheses are omitted. Note that the various C's are in fact arguments to the cost
function cleast; we omit them from the argument positions for ease of reading.
cleast(x) , Cif + Cnull + Ccdr + Cvarref +
            if null(cdr(x)) then Ccar + Cvarref
            else Clet + Ccall + Ccdr + Cvarref + cleast(cdr(x)) +
                 let s = least(cdr(x))
                 in Cif + C<= + Ccar + Cvarref + Cvarref +
                    if car(x) <= s then Ccar + Cvarref else Cvarref end
This transformation is similar to the local cost assignment [50], step-counting function
[41], cost function [44], etc. in other work. Our transformation extends those methods
with bindings, and makes all primitive cost parameters explicit at the source-language level.
For example, each primitive operation p is given a different symbol Cp, and each constructor
c is given a different symbol Cc. Note that the cost function terminates with the appropriate
sum of primitive cost parameters if the original program terminates, and it runs forever to
sum to infinity if the original program does not terminate, which is the desired meaning of
a cost function.
3.2 Constructing cost-bound functions
Characterizing program inputs and capturing them in the cost function are difficult to automate
[50, 26, 46]. However, partially known input structures provide a natural means [41].
A special value unknown represents unknown values. For example, to capture all input lists
of length n, the following partially known input structure can be used.
list(n) , if n = 0 then nil
          else cons(unknown, list(n - 1))
Similar structures can be used to describe an array of n elements, a matrix of m-by-n
elements, a complete binary tree of height h, etc.
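For instance, such structures can be written down directly in Scheme, using a distinguished symbol for unknown values; this is a sketch and the helper names are ours.

    ;; Partially known input structures, with the symbol 'unknown for unknown values.
    (define (list-of-n n)                  ; a list of n unknown elements
      (if (= n 0) '() (cons 'unknown (list-of-n (- n 1)))))

    (define (matrix-of m n)                ; an m-by-n matrix as a list of m rows
      (if (= m 0) '() (cons (list-of-n n) (matrix-of (- m 1) n))))

    (define (complete-tree h)              ; a complete binary tree of height h
      (if (= h 0)
          'unknown
          (cons (complete-tree (- h 1)) (complete-tree (- h 1)))))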
Since partially known input structures give incomplete knowledge about inputs, the original
functions need to be transformed to handle the special value unknown. In particular,
for each primitive function p, we define a new function fp such that fp(v1, ..., vn) returns
unknown if any vi is unknown and returns p(v1, ..., vn) as usual otherwise. For example,
f+(v1, v2) , if v1 = unknown or v2 = unknown then unknown else v1 + v2. We also
define a new function lub, denoting least upper bound, that takes two values and returns
the most precise partially known structure that both values conform with. For example,
lub returns unknown if either argument is unknown or the two values do not have the same
outermost structure; otherwise it combines the two values componentwise, e.g., it returns
cons(lub(car(v1), car(v2)), lub(cdr(v1), cdr(v2))) for two cons cells and v1 itself when v1 = v2.
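The following Scheme sketch shows one way such unknown-aware primitives and lub could be realized; the names lift-primitive, f-car, f-plus, and unknown? are ours, not part of ALPA.

    ;; Wrap a primitive so that it propagates the special value 'unknown.
    (define (unknown? v) (eq? v 'unknown))

    (define (lift-primitive p)             ; f_p: return unknown if any argument is unknown
      (lambda args
        (if (memq 'unknown args) 'unknown (apply p args))))

    (define f-car  (lift-primitive car))
    (define f-plus (lift-primitive +))

    (define (lub v1 v2)                    ; least upper bound of two partially known values
      (cond ((or (unknown? v1) (unknown? v2)) 'unknown)
            ((and (pair? v1) (pair? v2))
             (cons (lub (car v1) (car v2)) (lub (cdr v1) (cdr v2))))
            ((equal? v1 v2) v1)
            (else 'unknown)))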
Also, the cost functions need to be transformed to compute an upper bound of the cost: if
the truth value of a conditional test is known, then the cost of the chosen branch is computed
normally, otherwise, the maximum of the costs of both branches is computed. Transformation
B, given below, embodies these algorithms, where B e transforms an expression in the
original functions, and B c transforms an expression in the cost functions. We use uf to denote
function f extended with the value unknown, and we use cbf to denote the cost-bound
function for f .
program: B44
variable reference:
data construction:
primitive operation:
in if
else if v then e 0
function call:
primitive cost parameter:
in if
else if v then e 0
Applying this transformation on functions least and cleast yields functions uleast and
cbleast below, where function f p for each primitive operator p and function lub are as given
above. Shared code is presented with where-clauses when this makes the code smaller.
in if
else if v then e 1 else e 2 end
in let
in if
else if v then f car (x) else s end end
in if
else if v then e 1 else e 2 end)
in C if
in if
else if v then C car
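To see the key difference from the plain cost function, the following hand-written Scheme sketch bounds the cost of least on a list whose spine is known but whose elements are unknown: tests on the spine are decided normally, while the comparison of unknown elements takes the maximum cost of its branches. The names and the parameter values (all set to 1 to make the sketch runnable) are ours; the generated cbleast above is more systematic.

    ;; Primitive cost parameters, set to 1 here only to make the sketch runnable.
    (define C-varref 1) (define C-if 1) (define C-null 1) (define C-car 1)
    (define C-cdr 1) (define C-let 1) (define C-call 1) (define C-le 1)

    (define (cb-least x)
      (+ C-if C-null C-cdr C-varref
         (if (null? (cdr x))               ; the spine is known, so decide this test
             (+ C-car C-varref)
             (+ C-let C-call C-cdr C-varref (cb-least (cdr x))
                C-if C-le C-car C-varref C-varref
                ;; the comparison involves unknown elements, so take the
                ;; maximum cost of the two branches
                (max (+ C-car C-varref) C-varref)))))

    ;; (cb-least (list 'unknown 'unknown 'unknown))  =>  an upper bound on the cost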
The resulting cost-bound function takes as arguments partially known input structures,
such as list(n), which take as arguments input size parameters, such as n. Therefore, we
can obtain a resulting function that takes as arguments input size parameters and primitive
cost parameters and computes the most accurate cost bound possible.
Both transformations C and B take linear time in terms of the size of the program, so
they are extremely efficient, as also seen in our prototype system ALPA. Note that the
resulting cost-bound function might not terminate, but this occurs only if the recursive
structure of the original program depends on unknown parts in the partially known input
structure. As a trivial example, if the given partially known input structure is simply unknown, then the
corresponding cost-bound function for any recursive function does not terminate, since the
original program can indeed cost an unbounded amount of resource in the worst case. We can modify the analysis
to detect nontermination in many cases, as for example in [27]. For the example of giving
unknown to a recursive cost-bound function, nontermination is trivial to detect, since the
arguments to recursive calls would remain unknown.
4 Optimizing cost-bound functions
This section describes symbolic evaluation and optimizations that make computation of cost
bounds more efficient. The transformations consist of partial evaluation, realized as global
inlining, and incremental computation, realized as local optimization.
We first point out that cost-bound functions might be extremely inefficient to evaluate
given values for their parameters. In fact, in the worst case, the evaluation takes exponential
time in terms of the input size parameters, since it essentially searches for the worst-case
execution path for all inputs satisfying the partially known input structures.
4.1 Partial evaluation of cost-bound functions
In practice, values of input size parameters are given for almost all applications. This is
why time-analysis techniques used in systems can require loop bounds from the user before
time bounds are computed. While in general it is not possible to obtain explicit loop bounds
automatically and accurately, we can implicitly achieve the desired eect by evaluating the
cost-bound function symbolically in terms of primitive cost parameters given specic values
of input size parameters.
The evaluation simply follows the structures of cost-bound functions. Specically, the
control structures determine conditional branches and make recursive function calls as usual,
and the only primitive operations are sums of primitive cost parameters and maximums
among alternative sums, which can easily be done symbolically. Thus, the transformation
inlines all function calls, sums all primitive cost parameters symbolically, determines conditional
branches if it can, and takes maximum sums among all possible branches if it can
not.
The symbolic evaluation E defined below performs the transformations. It takes as arguments
an expression e and an environment of variable bindings (where each variable
is mapped to its value) and returns as result a symbolic value that contains the primitive
cost parameters. The evaluation starts with the application of the cost-bound function to
a partially unknown input structure, e.g., cbleast(list(100)), and it starts with an empty
environment. We assume that add_s is a function that symbolically sums its arguments, and
max_s is a function that symbolically takes the maximum of its arguments.
variable reference:
look up binding of v in environment
primitive cost parameter: E [
data
primitive
bind v to value of e 1 in environment
function calls: E [
where f is dened by
As an example, applying symbolic evaluation to cbleast on a list of size 100, we obtain
the following result:
This symbolic evaluation is exactly a specialized partial evaluation. It is fully automatic
and computes the most accurate cost bound possible with respect to the given program
structure. It always terminates as long as the cost-bound function terminates.
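One simple way to realize add_s and max_s is to build symbolic cost terms and evaluate them only once concrete parameter values are available; the following sketch, with names of our choosing, illustrates the idea.

    ;; A symbolic cost is a parameter name, a (sum ...) of costs, or a (max ...) of costs.
    (define (add-s . costs) (cons 'sum costs))
    (define (max-s . costs) (cons 'max costs))

    ;; Evaluate a symbolic cost once parameter values are known,
    ;; e.g. params = ((C-if . 1) (C-car . 2) (C-call . 5)).
    (define (eval-cost c params)
      (cond ((symbol? c) (cdr (assq c params)))
            ((number? c) c)
            ((eq? (car c) 'sum)
             (apply + (map (lambda (x) (eval-cost x params)) (cdr c))))
            ((eq? (car c) 'max)
             (apply max (map (lambda (x) (eval-cost x params)) (cdr c))))))

    ;; (eval-cost (add-s 'C-if (max-s 'C-car 'C-call))
    ;;            '((C-if . 1) (C-car . 2) (C-call . 5)))   =>  6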
The symbolic evaluation given only values of input size parameters is inefficient compared
to direct evaluation given values of both input size parameters and particular primitive cost
parameters, even though the resulting function takes virtually constant time given any values
of primitive cost parameters. For example, directly evaluating a quadratic-time reverse function
(that uses append operation) on input of size 20 takes about 0.96 milliseconds, whereas
the symbolic evaluation takes 670 milliseconds, hundreds of times slower. We propose further
optimizations below that greatly speed up the symbolic evaluation.
4.2 Avoiding repeated summations over recursions
The symbolic evaluation above is a global optimization over all cost-bound functions in-
volved. During the evaluation, summations of symbolic primitive cost parameters within
each function denition are performed repeatedly while the computation recurses. Thus, we
can speed up the symbolic evaluation by first performing such summations in a preprocessing
step. Specically, we create a vector and let each element correspond to a primitive cost
parameter. The transformation S, given below, performs this optimization. We use vcbf to
denote the transformed cost-bound function of f that operates on vectors. We use function
add_v to compute the component-wise sum of the argument vectors, and we use function max_v
to compute the component-wise maximum of the argument vectors.
program: S44
primitive cost parameter: S c [ create a vector of 0's except with the
component corresponding to C set to 1
all others: S c [
Let V be the following vector of primitive cost parameters:
Applying the above transformation on function cbleast yields function vcbleast, where components
of the vectors correspond to the components of V, and infix notation +_v is used for
vector addition.
in if
else if v then e 1 else e 2 end)
in < 2; 0; 0; 0;
in if
else if v then <
else <
The cost-bound function cbleast(x) is simply the dot product of vcbleast(x) and V .
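In Scheme, the vector operations and the final dot product can be sketched as follows; the counts and per-operation times in the example are hypothetical, and the names add-v, max-v, and dot are ours. Taking the component-wise maximum before the dot product gives an upper bound on the cost of either alternative.

    ;; Component-wise vector operations over lists of counts.
    (define (add-v . vs) (apply map + vs))
    (define (max-v . vs) (apply map max vs))

    (define (dot v1 v2) (apply + (map * v1 v2)))

    ;; Hypothetical example over the components <C-varref, C-if, C-call>:
    (define count-a '(12 3 4))
    (define count-b '(10 5 2))
    (define V '(0.02 0.15 0.30))           ; hypothetical per-operation times
    (dot (max-v count-a count-b) V)        ; => about 2.19, an upper bound for both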
This transformation incrementalizes the computation over recursions to avoid repeated
summation. Again, this is fully automatic and takes time linear in terms of the size of the
cost-bound function.
The result of this optimization is drastic speedup of the evaluation. For example, optimized
symbolic evaluation of the same quadratic-time reverse on input of size 20 takes only
2.55 milliseconds, while direct evaluation takes 0.96 milliseconds, resulting in less than 3
times slow-down; it is over 260 times faster than symbolic evaluation without this optimization.
5 Making cost-bound functions accurate
While loops and recursions affect cost bounds most, the accuracy of the calculated cost bounds
also depends on the handling of the conditionals in the original program, which is
reflected in the cost-bound function. For conditionals whose test results are known to be
true or false at the symbolic-evaluation time, the appropriate branch is chosen; so other
branches, which may even take longer, are not considered for the worst-case cost. This is a
major source of accuracy for our worst-case bound.
For conditionals whose test results are not known at symbolic-evaluation time, we need to
take the maximum cost among all alternatives. The only case in which this would produce
inaccurate cost bound is when the test in a conditional in one subcomputation implies the
test in a conditional in another subcomputation. For example, consider a variable v whose
value is unknown and
    e1 = (if v then 2 else Fibonacci(2000)) + e2,   where   e2 = if v then Fibonacci(2000) else 2.
If we compute the cost bound for e1 directly, the result is at least cFibonacci(2000) +
cFibonacci(2000). However, if we consider only the two realizable execution paths, we know
that the worst case is cFibonacci(2000) plus some small constants. This is known as the
false-path elimination problem [3].
Two transformations, lifting conditions and simplifying conditionals, applied on the source
program before constructing the cost-bound function, allow us to achieve accurate analysis
results. In each function definition, the former lifts conditions to the outermost scope
that the test does not depend on, and the latter simplifies conditionals according to the lifted
condition. For e1 in the above example, lifting the condition, we obtain
    if v then 2 + e2 else Fibonacci(2000) + e2.
Simplifying the conditionals in the two occurrences of e2 to Fibonacci(2000) and 2, respectively,
we obtain
    if v then 2 + Fibonacci(2000) else Fibonacci(2000) + 2.
To facilitate these transformations, we inline all function calls where the function is not
defined recursively.
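As an illustration of the lifting step on expressions of this particular shape, a small Scheme rewrite might look as follows; the function name and the quoted fib and e2 symbols are ours, purely for illustration.

    ;; Lift the condition of an inner conditional over an enclosing addition:
    ;; (+ (if v a b) rest)  ==>  (if v (+ a rest) (+ b rest)).
    (define (lift-if-over-plus e)
      (let ((test (cadr (cadr e)))
            (a    (caddr (cadr e)))
            (b    (cadddr (cadr e)))
            (rest (caddr e)))
        `(if ,test (+ ,a ,rest) (+ ,b ,rest))))

    ;; (lift-if-over-plus '(+ (if v 2 (fib 2000)) e2))
    ;;   =>  (if v (+ 2 e2) (+ (fib 2000) e2))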
The power of these transformations depends on the reasoning used in simplifying the conditionals,
as has been studied in many program transformation methods [51, 45, 47, 18, 32].
At least syntactic equality can be used, which identifies the most obvious source of inac-
curacy. These optimizations also speed up the symbolic evaluation, since now obviously
infeasible execution paths are not searched.
These transformations have been implemented and applied on many test programs. Even
though the resulting programs can be analyzed more accurately and more efficiently, we have
not performed separate measurements. The major reason is that our example programs do
not contain conditional tests that are implied by other conditional tests. These simple transformations
are just examples of many powerful program optimization techniques, especially
on functional programs, that can be used to make cost-bound functions more accurate as well
as more efficient. We plan to explore more of these optimizations and measure their effects
as we experiment with more programs.
Note that these transformations on the source program are aimed at making the cost-
bound function more accurate and more efficient, not at optimizing the source program.
Even though making the source program faster also makes the corresponding cost-bound
function faster, these two goals are dierent. Optimizing the source program is meant to
produce a different program that has a smaller cost. Cost analysis is meant to analyze
accurately the cost of a given program.
To make use of all the techniques for making cost-bound analysis efficient and accurate,
we perform an overall cost-bound analysis by applying the following transformations in order
to the source program: lifting conditions and simplifying conditionals (as in Section 5), constructing
cost functions and then cost-bound functions (as in Section 3), and precomputing
repeated local summations and then performing global symbolic evaluation (as in Section 4).
6 Implementation and experimentation
We have implemented the analysis approach in a prototype system, ALPA (Automatic
Language-based Performance Analyzer). We performed a large number of experiments and
obtained encouraging results.
6.1 Implementation and experimental results
The implementation is for a subset of Scheme [2, 11, 1]. An editor for the source programs is
implemented using the Synthesizer Generator [40], and thus we can easily change the syntax
for the source programs. For example, the current implementation supports both the syntax
used in this paper and Scheme syntax. Construction of cost-bound functions is written in
SSL, a simple functional language used in the Synthesizer Generator. Lifting conditions,
simplifying conditionals, and inlining nonrecursive calls are also implemented in SSL. The
symbolic evaluation and optimizations are written in Scheme.
Figure 1 gives the results of symbolic evaluation of the cost-bound functions for six
example programs on inputs of sizes 10 to 2000. For example, the second row of the figure
gives, for insertion sort on inputs of size 10, the operation counts computed by the cost-bound function.
The last column lists the sums for each row. For the set union example, we used inputs
where both arguments were of the given sizes. These numbers in the figure characterize
various aspects of the examples; they contribute to the actual time and space bounds discussed
below. We verified that all numbers are also exact worst-case counts. For example,
for insertion sort on inputs of size 10, exactly the stated number of function calls are made during a worst-case
execution. The worst-case counts were verified by using a modified evaluator. These experiments
show that our cost-bound functions can give accurate cost bounds in terms of counts
of different operations performed.
Figure 2 compares the times of direct evaluation of cost-bound functions, with each primitive
cost parameter set to 1, and the times of optimized symbolic evaluation, obtaining the
exact symbolic counts as in Figure 1. These measurements are taken on a Sun Ultra 1 with
167MHz CPU and 64MB main memory. They include garbage-collection time. The times
without garbage-collection times are all about 1% faster, so they are not shown here. These
experiments show that our optimizations of cost-bound functions allow symbolic evaluation
to be only a few times slower than direct evaluation rather than hundreds of times slower.
For merge sort, the cost-bound function constructed using the algorithms in this paper
takes several days to evaluate on inputs of size 50 or larger. Special but simple optimizations
were done to obtain the numbers in Figure 1, namely, letting the cost-bound function for
merge avoid base cases as long as possible and using sizes of lists in place of lists of unknowns;
the resulting symbolic evaluation takes only seconds. Such optimizations are yet to be
implemented to be performed automatically. For all other examples, it takes at most 2.7
hours to evaluate the cost-bound functions.
Note that, on small inputs, symbolic evaluation takes relatively much more time than
direct evaluation, due to the relatively large overhead of vector setup; as inputs get larger,
symbolic evaluation is almost as fast as direct evaluation for most examples. Again, after
example size varref nil cons null car cdr if let call total
insertion
200 120401 201 20100 20301 40000 20100 19900 40201 0 20300 301504
1000 3002001 1001 500500 501501 1000000 500500 499500 1001001 0 501500 7507504
2000 12004001 2001 2001000 2003001 4000000 2001000 1999000 4002001 0 2003000 30015004
selection
200 220501 201 20100 40401 79800 80000 39800 80201 20100 40400 621504
500 1376251 501 125250 251001 499500 500000 249500 500501 125250 251000 3878754
1000 5502501 1001 500500 1002001 1999000 2000000 999000 2001001 500500 1002000 15507504
2000 22005001 2001 2001000 4004001 7998000 8000000 3998000 8002001 2001000 4004000 62015004
200 19526 598 3089 7372 5779 4832 1345 8717 0 5428 56686
1000 124710 2998 19953 45900 37907 30928 8977 54877 0 33924 360174
2000 273422 5998 43905 99804 83811 67856 19953 119757 0 73852 788358
set
union 20 2162 20 20 441 440 420 400 861 20 440 5224
50 12902 50 50 2601 2600 2550 2500 5151 50 2600 31054
100 50802 100 100 10201 10200 10100 10000 20301 100 10200 122104
200 201602 200 200 40401 40400 40200 40000 80601 200 40400 484204
300 452402 300 300 90601 90600 90300 90000 180901 300 90600 1086304
500 1254002 500 500 251001 251000 250500 250000 501501 500 251000 3010504
1000 5008002 1000 1000 1002001 1002000 1001000 1000000 2003001 1000 1002000 12021004
2000 20016002 2000 2000 4004001 4004000 4002000 4000000 8006001 2000 4004000 48042004
list
2000 8003 1 2000 2001 2000 2000 0 2001 0 2001 20007
1000 2003001 1001 500500 501501 500500 500500 0 501501 0 501500 5010004
2000 8006001 2001 2001000 2003001 2001000 2001000 0 2003001 0 2003000 20020004
Figure 1: Results of symbolic evaluation of cost-bound functions.
the symbolic evaluation, cost bounds can be computed in virtually no time given values of
primitive cost parameters.
insertion sort selection sort merge sort set union list reversal reversal w/app.
size direct symbolic direct symbolic direct symbolic direct symbolic direct symbolic direct symbolic
500 58240.0 58080.0 39480.0 46050.0 xxxxxx xxxxxx 125910. 117240. 0.50305 6.24266 21540.0 22180.0
2000
Figure 2: Times of direct evaluation vs. optimized symbolic evaluation (in milliseconds).
Among over twenty programs we have analyzed using ALPA, two of them did not termi-
nate. One is quicksort, and the other is a contrived variation of sorting; both diverge because
the recursive structure for splitting a list depends on the values of unknown list elements.
This is similar to nontermination caused by merging paths in other methods [33, 34], but
nontermination happens much less often in our method, since we essentially avoid merging
paths as much as possible. We have found a different symbolic-evaluation strategy that uses
a kind of incremental path selection, and the evaluation would terminate for both examples,
as well as all other examples, giving accurate worst-case bounds. That evaluation algorithm
is not yet implemented. A future work is to exploit results from static analysis for identifying
sources of nontermination [27] to make cost-bound analysis terminate more often. For
practical use of a cost-bound analyzer that might not terminate on certain inputs, we can
modify the evaluator so that if it is stopped at any time, it outputs the cost bound calculated
till that point. This means that a longer-running analysis might yield a higher bound.
6.2 Further experiments
We also estimated approximate bounds on the actual running times by measuring primitive
cost parameters for running times using control loops, and calculated accurate bounds on the
heap space allocated for constructors in the programs based on the number of bytes allocated
for each constructor by the compiler. For time-bound analysis, we performed two sets of
experiments: the first for a machine with cache enabled, and the second for a machine with
cache disabled. The first gives tight bounds in most cases but has a few underestimations
for inputs that are very small or very large, which we attribute to cache effects. The
second gives conservative and tight bounds for all inputs. We first describe experiments for
time-bound analysis with cache enabled and for analysis of heap space allocation bound, and
then analyze the cache eects and show results for time-bound analysis with cache disabled.
The measurements and analyses for time-bounds are performed for source programs compiled
with Chez Scheme compiler [8]. The source program does not use any library; in partic-
ular, no numbers are large enough to trigger the bignum implementation of Chez Scheme. We
tried to avoid compiler optimizations by setting the optimization level to 0; we view necessary
optimizations as having already been applied to the program. To handle garbage-collection
time, we performed separate sets of experiments: those that exclude garbage-collection times
in both calculations and measurements, and those that include garbage-collection time in
both. 2 Our current analysis does not handle the effects of cache memory or instruction
pipelining; we approximated cache effects by taking operands circularly from a cycle of 2000
elements when measuring primitive cost parameters, as discussed further below. For time-bound
analysis with cache enabled, the particular numbers reported are taken on a Sun
Ultra 1 with 167MHz CPU and 64MB main memory; we have also performed the analysis
for several other kinds of SPARC stations, and the results are similar.
Since the minimum running time of a program construct is about 0.1 microseconds, and
the precision of the timing function is 10 milliseconds, we use control/test loops that iterate
10,000,000 times, keeping measurement error under 0.001 microseconds, i.e., 1%. Such a loop
is repeated 100 times, and the average value is taken to compute the primitive cost parameter
for the tested construct (the variance is less than 10% in most cases). The calculation of the
time bound is done by plugging these measured parameters into the optimized time-bound
function. We then run each example program an appropriate number of times to measure
its running time with less than 1% error.
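The control-loop measurement can be sketched as follows; the timing procedure now is passed in as a thunk returning the current time in milliseconds (for example, a wrapper around the Scheme system's clock), and the helper names are ours.

    ;; Time reps executions of thunk using the supplied clock procedure now.
    (define (time-loop thunk reps now)
      (let ((start (now)))
        (let loop ((i 0))
          (if (< i reps) (begin (thunk) (loop (+ i 1)))))
        (- (now) start)))

    ;; Estimate the cost of car by subtracting an empty control loop
    ;; (compiled without optimization, so the call is not removed).
    (define (measure-car reps now)
      (let ((l (list 1 2 3)))
        (/ (- (time-loop (lambda () (car l)) reps now)
              (time-loop (lambda () #f) reps now))
           reps)))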
Figure 3 shows the estimated and measured worst-case times for six example programs
on inputs of sizes 10 to 2000. These times do not include garbage-collection times. The item
me/ca is the measured time expressed as a percentage of the calculated time. In general, all
measured times are closely bounded by the calculated times (with about 90-95% accuracy)
except when inputs are very small (20, in 1 case) or very large (2000, in 3 cases), which
is analyzed and addressed below. The measurements including garbage-collection times are
similar except with a few more cases of underestimation. Figure 4 depicts the numbers in
We had originally tried to avoid garbage collection by writing loops instead of recursions as much as
possible and tried to exclude garbage-collection times completely. The idea of including garbage-collection
times comes from an earlier experiment, where we mistakenly used a timing function of Chez Scheme that
included garbage-collection time.
insertion sort selection sort merge sort
size calculated measured me/ca calculated measured me/ca calculated measured me/ca
50 1.55379 1.48250 95.4 3.26815 3.01125 92.1 0.92702 0.85700 92.4
100 6.14990 5.86500 95.4 13.0187 11.9650 91.9 2.15224 1.98812 92.4
200 24.4696 24.3187 99.4 51.9678 47.4750 91.4 4.90017 4.57200 93.3
300 54.9593 53.8714 98.0 116.847 107.250 91.8 7.86231 7.55600 96.1
500 152.448 147.562 96.8 324.398 304.250 93.8 14.1198 12.9800 91.9
1000 609.146 606.000 99.5 1297.06 1177.50 90.8 31.2153 28.5781 91.6
2000 2435.29 3081.25 126.5 5187.17 5482.75 105.7 68.3816 65.3750 95.6
set union list reversal reversal w/append
size calculated measured me/ca calculated measured me/ca calculated measured me/ca
50 2.27555 2.11500 92.9 0.04436 0.04193 94.5 1.14035 1.01050 88.6
100 8.95400 8.33250 93.1 0.08834 0.08106 91.8 4.47924 3.93600 87.9
300 79.6987 75.1000 94.2 0.26424 0.24437 92.5 39.8220 35.6328 89.5
500 220.892 208.305 94.3 0.44013 0.40720 92.5 110.344 102.775 93.1
1000 882.094 839.780 95.2 0.87988 0.82280 93.5 440.561 399.700 90.7
2000 3525.42 3385.31 96.0 1.75937 1.65700 94.2 1760.61 2235.75 127.0
Figure 3: Calculated and measured worst-case times (in milliseconds) with cache enabled.
Figure 3 for inputs of sizes up to 1000. Examples such as sorting are classified as complex
examples in previous studies [37, 28], where the calculated time is as much as 67% higher than
the measured time, and where only the result for one sorting program on a single input
is reported in each experiment.
Using the cost bounds computed, we can also calculate, accurately instead of approx-
imately, bounds on the heap space dynamically allocated for constructors in the source
programs. The number of bytes allocated for each constructor can be obtained precisely
based on the language implementation. For example, Chez Scheme allocates 8 bytes for a
cons-cell on the heap; this information can also be obtained easily using its statistics utili-
ties. Based on results in Figure 1, by setting C cons to 8 and other primitive cost parameters
to 0, we obtain exact bounds on the heap space dynamically allocated for constructors in
the programs, as shown in Figure 5.
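For example, for reversal with append on inputs of size 1000, Figure 1 gives a cons count of 500500, and at 8 bytes per cons cell this reproduces the 4004000-byte entry in Figure 5:

    ;; Heap bound = bytes per cons cell x number of cons operations.
    (* 8 500500)    ; => 4004000 bytes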
Consider the accuracy of the time-bound analysis with cache enabled. We found that
when inputs are very small (20), the measured time is occasionally above the calculated
time for some examples. Also, when inputs are very large (1000 for measurements including
Figure 4: Comparison of calculated and measured worst-case times with cache enabled. (Six plots, one per example program: insertion sort, selection sort, merge sort, set union, list reversal, and reversal w/append; each plots calculated and measured time in milliseconds against input size.)
size insertion sort selection sort merge sort set union list reversal reversal w/app.
50 10200 10200 4584 400 400 10200
100 40400 40400 10760 800 800 40400
200 160800 160800 24712 1600 1600 160800
500 1002000 1002000 71816 4000 4000 1002000
1000 4004000 4004000 159624 8000 8000 4004000
2000 16008000 16008000 351240 16000 16000 16008000
Figure 5: Bounds of heap space allocated for constructors (in bytes).
garbage-collection time, or 2000 excluding garbage-collection time), the measured times for
some examples are above the calculated time. We attribute these to cache memory eects,
for the following reasons. First, the initial cache misses are more likely to show up on small
inputs. Second, underestimation for inputs of size 2000 in Figure 3 happens exactly for the
3 examples whose allocated heap space is very large in Figure 5, and recall that we used a
cycled data structure of size 2000 when measuring primitive cost parameters. Furthermore,
for programs that use less space, our calculated bounds remain accurate for even larger input
sizes, and for programs that use an extremely large amount of space even on small inputs, we
have much worse underestimation. For example, for Cartesian product, underestimation
occurs for small input sizes (50 to 200); as an example, on input of size 200, the measured
time is 65% higher than the calculated time.
We performed a second set of experiments for time-bound analysis for a machine with
cache disabled. The machine used is a Sun Ultra 10 with 333MHz CPU and 256MB main
memory. Figure 6 shows the estimated and measured worst-case times for the same six
programs on inputs of sizes 10 to 2000. These times do not include garbage-collection times.
We can see that all measured times are closely bounded by the calculated times, with no
underestimation. Figure 7 depicts the numbers in Figure 6.
To accommodate cache effects in time-bound analysis with cache enabled, we could adjust
our measurements of primitive cost parameters on data structures of appropriate size. The
appropriate size can be determined based on a precise space usage analysis. Heap-space
allocation is only one less direct aspect. More directly, we can incorporate precise knowledge
about compiler-generated machine instructions into our analysis method. We leave this as a
future work. Our current method can be used for approximate time-bound estimation in the
insertion sort selection sort merge sort
size calculated measured me/ca calculated measured me/ca calculated measured me/ca
50 3.52196 3.22160 91.5 7.15578 6.22520 87.0 2.00717 1.91025 95.2
200 55.5253 50.5195 91.0 113.871 97.5660 85.7 10.6383 9.94885 93.5
300 124.726 113.551 91.0 256.057 219.080 85.6 17.0790 15.9820 93.6
500 346.007 315.220 91.1 710.928 610.595 85.9 30.6905 28.5640 93.1
1000 1382.66 1255.81 90.8 2842.68 2438.77 85.8 67.8999 63.3030 93.2
2000 5527.91 5053.00 91.4 11368.7 9794.00 86.1 148.836 138.786 93.2
set union list reversal reversal w/append
size calculated measured me/ca calculated measured me/ca calculated measured me/ca
50 4.73684 4.60915 97.3 0.10007 0.09114 91.1 2.56979 2.24415 87.3
200 73.7997 71.7215 97.2 0.39786 0.35615 89.5 40.0575 34.6355 86.5
300 165.552 161.145 97.3 0.59639 0.53297 89.4 89.8657 77.8655 86.6
500 458.766 446.670 97.4 0.99345 0.88594 89.2 249.041 216.280 86.8
1000 1831.75 1784.91 97.4 1.98611 1.76579 88.9 994.409 859.320 86.4
2000 7320.41 7133.00 97.4 3.97142 3.52055 88.6 3974.12 3469.58 87.3
Figure 6: Calculated and measured worst-case times (in milliseconds) with cache disabled.
presence of low-level effects or precise analysis in their absence, and can be used for more
accurate space-bound analysis that helps address memory issues.
7 Related work and conclusion
A preliminary version of this work appeared in [30]. An overview of comparison with related
work in cost analysis appears in Section 2. Certain detailed comparisons have also been discussed
while presenting our method. This section summarizes them, compares with analyses
for loop bounds and execution paths in more detail, and concludes.
Compared to work in algorithm analysis and program complexity analysis [26, 44, 53, 7],
this work consistently pushes through symbolic primitive cost parameters, so it allows us
to calculate actual cost bounds and validate the results with experimental measurements.
There is also work on analyzing average-case complexity [17], which has a different goal
than worst-case bounds. Compared to work in systems [46, 37, 36, 28], this work explores
program analysis and transformation techniques to make the analysis automatic, efficient,
and accurate, overcoming the difficulties caused by the inability to obtain loop bounds,
Figure 7: Comparison of calculated and measured worst-case times with cache disabled. (Six plots, one per example program: insertion sort, selection sort, merge sort, set union, list reversal, and reversal w/append; each plots calculated and measured time in milliseconds against input size.)
recursion depths, or execution paths automatically and precisely. There is also work for
measuring primitive cost parameters for the purpose of general performance prediction [43,
42]. In that work, information about execution paths was obtained by running the programs
on a number of inputs; for programs such as insertion sort whose best-case and worst-case
execution times differ greatly, the predicted time using this method could be very inaccurate.
A number of techniques have been studied for obtaining loop bounds or execution paths
for time analysis [36, 3, 13, 19, 21]. Manual annotations [36, 28] are inconvenient and error-prone
[3]. Automatic analysis of such information has two main problems. First, even when
a precise loop bound can be obtained by symbolic evaluation of the program [13], separating
the loop and path information from the rest of the analysis is in general less accurate than an
integrated analysis [34]. Second, approximations for merging paths from loops, or recursions,
very often lead to nontermination of the time analysis, not just looser bounds [13, 19, 34].
Some newer methods, while powerful, apply only to certain classes of programs [21]. In
contrast, our method allows recursions, or loops, to be considered naturally in the overall
cost analysis based on partially known input structures. In addition, our method does not
merge paths from recursions, or loops; this may cause exponential time complexity of the
analysis in the worst case, but our experiments on test programs show that the analysis is
still feasible for inputs of sizes in the thousands. We have also studied simple but powerful
optimizations to speed up the analysis dramatically.
In the analysis for cache behavior [14, 15], loops are transformed into recursive calls,
and a predefined callstring level determines how many times the fixed-point analysis iterates
and thus how the analysis results are approximated. Our method allows the analysis to
perform the exact number of recursions, or iterations, for the given partially known input data
structures. The work by Lundqvist and Stenstrom [33, 34] is based on ideas similar to ours.
They apply the ideas at the machine-instruction level and can more accurately take into account
the effects of instruction pipelining and data caching, but they cannot handle dynamically
allocated data structures as we can, and their method of merging paths for loops would
lead to nonterminating analysis for many more programs than our method. We apply the
ideas at the source level, and our experiments show that we can calculate more accurate cost
bounds for many more programs than merging paths, and the calculation is still efficient.
There are also methods for time analysis based on program
flow graphs [39, 6]. Unlike our
method, these methods do not exploit given input sizes, and they require programmers to
give precise path information.
The idea of using partially known input structures originates from Rosendahl [41]. We
have extended it to manipulate primitive cost parameters. We also handle binding constructs,
which is simple but necessary for efficient computation. An innovation in our method is
to optimize the cost-bound function using partial evaluation, incremental computation, and
transformations of conditionals to make the analysis more efficient and more accurate. Partial
evaluation [5, 24, 23], incremental computation [32, 31, 29], and other transformations have
been studied intensively in programming languages. Their applications in our cost-bound
analysis are particularly simple and clean; the resulting transformations are fully automatic
and efficient.
We have started to explore a suite of new language-based techniques for cost analysis,
in particular, analyses and optimizations for further speeding up the evaluation of the cost-
bound function. We have also applied our general approach to analyze stack space and live
heap space [48], which can further help predict garbage-collection and caching behavior. We
can also analyze lower bounds using a symmetric method, namely by replacing maximum
with minimum at all conditional points. A future work is to accommodate more lower-level
dynamic factors for timing at the source-language level [28, 14], by examining the
corresponding compiler generated code, where cache and pipelining eects are explicit.
In conclusion, the approach we propose is based entirely on high-level programming
languages. The methods and techniques are intuitive; together they produce automatic
tools for analyzing cost bounds efficiently and accurately and can be used to accurately or
approximately analyze time and space bounds.
Acknowledgment
We thank the anonymous referees for their careful reviews and many very helpful comments.
References
Revised report on the algorithmic language Scheme.
Structure and Interpretation of Computer Programs.
On the false path problem in hard real-time programs
Bounding worst-case instruction cache performance
Cadence Research Systems.
Analysis of pointers and structures.
The Scheme Programming Language.
Facilitating worst-case execution time analysis for optimized code
Deriving annotations for tight calculation of execution time.
Applying compiler techniques to cache behavior prediction.
Automatic average-case analysis of algorithms
Generalized partial evaluation.
Automatic derivation of path and loop annotations in object-oriented real-time programs
A retargetable technique for predicting execution time.
Abstractions for recursive pointer data structures: Improving the analysis and transformation of imperative programs.
An introduction to partial evaluation.
Partial Evaluation and Automatic Program Generation.
The Art of Computer Programming
The size-change principle for program termination
An accurate worst case timing analysis for RISC processors.
Static caching for incremental computation.
Systematic derivation of incremental programs.
Predicting program execution times by analyzing static and dynamic program paths.
Experiments with a program timing tool based on source-level timing schema
Live memory analysis for garbage collection in embedded systems.
Computing maximum task execution times
The Synthesizer Generator: A System for Constructing Language-Based Editors
Automatic complexity analysis.
Analysis of benchmark characterization and benchmark performance prediction.
Machine characterization based on an abstract high-level language machine
Complexity analysis for a lazy higher-order language
Program improvement by internal specialization.
Reasoning about time in higher level language software.
The concept of a supercompiler.
Automatic accurate live memory analysis for garbage-collected languages
Strictness analysis aids time analysis.
Mechanical program analysis.
Value dependence graphs: Representation without taxation.
The automatic complexity analysis of divide-and-conquer algorithms
Design, Implementation, and Performance Evaluation of a Detection-Based Adaptive Block Replacement Scheme
A new buffer replacement scheme, called DEAR (DEtection-based Adaptive Replacement), is presented for effective caching of disk blocks in the operating system. The proposed DEAR scheme automatically detects block reference patterns of applications and applies different replacement policies to different applications depending on the detected reference pattern. The detection is made by a periodic process and is based on the relationship between block attribute values, such as backward distance and frequency gathered in a period, and the forward distance observed in the next period. This paper also describes an implementation and performance measurement of the DEAR scheme in FreeBSD. The results from performance measurements of several real applications show that, compared with the LRU scheme, the proposed scheme reduces the number of disk I/Os by up to 51 percent (with an average of 23 percent) and the response time by up to 35 percent (with an average of 12 percent) in the case of single application executions. For multiple application executions, the results show that the proposed scheme reduces the number of disk I/Os by up to 20 percent (with an average of 12 percent) and the overall response time by up to 18 percent (with an average of 8 percent).
1 Introduction
The speed gap between the processor and disks is becoming wider as VLSI technologies advance at
an enormous rate. To overcome this speed gap, buffer caches [1] are used to keep in main memory
This paper was presented in part at the USENIX 1999 Annual Technical Conference and at the Fourth IEEE
International Workshop on Multi-Media Database Management Systems.
y Department of Computer Engineering, Seoul National University, Seoul 151-742, Korea; e-mail:
[email protected], [email protected], [email protected].
z Department of Computer Engineering, Hong-Ik University, Seoul 121-791, Korea; e-mail: [email protected].
disk blocks that are likely to be accessed in the near future. Since the size of buffer cache is limited,
an effective scheme is needed to decide which block should be kept in the cache. To this end, study
of effective block replacement has been the focus of much research both in the systems and database
areas [2, 3, 4, 5, 6, 7].
Many traditional block replacement algorithms assume that past is a good predictor of the future.
For example, the LRU replacement algorithm assumes that disk blocks that were referenced recently
are more likely to be referenced in the near future than those referenced far back in the past.
Similarly, the LFU replacement algorithm assumes that disk blocks that were referenced frequently
are more likely to be referenced in the near future than those referenced sparsely. One common
problem with these approaches is that the underlying assumptions are not always correct since
actual disk block reference patterns of applications can differ widely depending on applications.
To address the problem above, a number of block replacement schemes have recently been proposed
that make use of user-level hints such as application-controlled file caching [8] and informed
prefetching and caching [9]. User-level hints in these schemes provide information about which
blocks are good candidates for replacement, allowing different replacement policies to be applied
to different applications.
However, to obtain user-level hints, users need to accurately understand the characteristics of
the block reference patterns of applications. This requires considerable effort from users limiting
the applicability. For simple reference patterns such as a sequential reference pattern, a heuristic
method can be used to detect the pattern without user-level hints and an appropriate replacement
policy can be used to improve the buffer cache performance [10]. Also for implicit I/Os that are
used to manage paged virtual memory, their reference pattern can be deduced by the compiler and
an appropriate replacement policy can be used based on the deduced pattern [11].
In this paper, we propose a new replacement scheme called DEAR (DEtection based Adaptive
Replacement) for general file caching. Without any help from the user or the compiler, the DEAR
scheme dynamically detects the reference pattern of each application and classifies the pattern as
sequential, looping, temporally-clustered, or probabilistic. After the detection, the scheme applies
an appropriate replacement policy to the application. As the reference pattern of an application
may change during its execution, the DEAR scheme periodically detects the reference pattern and
applies a different replacement policy, if necessary.
We implemented the DEAR scheme in FreeBSD 2.2.5 and evaluated its performance with several
real applications. The scheme is implemented at the kernel level without any modification to
the system call interface, so the applications may run as-is. Performance measurements with real
applications show that in the case of single application executions the DEAR scheme reduces the
number of disk I/Os by up to 51% (with an average of 23%) and the response time by up to 35%
(with an average of 12%), compared with the LRU buffer management scheme in FreeBSD. For
multiple applications, the reduction in the number of disk I/Os is by up to 20% (with an average
of 12%) while the reduction in the overall response time is by up to 18% (with an average of 8%).
We also compared the performance of the DEAR scheme with that of application-controlled file
caching [8] through trace-driven simulations with the same set of application traces used in [8].
The results show that the DEAR scheme without any user-level hints performs comparably to
application-controlled file caching for the traces considered.
The rest of the paper is organized as follows. In Section 2, we explain the DEAR scheme in detail.
Then, we describe the implementation of the DEAR scheme in FreeBSD in Section 3. In Section 4,
we evaluate the performance of the DEAR scheme. Finally, we conclude this paper with a summary
and discussions of future work in Section 5.
2 The DEAR Scheme
Recent research has shown that most applications show regular block reference patterns and that
these patterns vary depending on the nature of the application. For example, a large class of
scientific applications show a looping reference pattern where blocks are referenced repeatedly with
regular intervals [12]. On the other hand, many database applications show a probabilistic reference
pattern with different probabilities for index blocks and data blocks [13]. Unix applications tend
to show either a sequential or a temporally-clustered reference pattern [8, 14]. Applications that
deal with continuous media generally show a sequential or a looping reference pattern [15].
From these observations, we classify an application's reference pattern into one of the following: se-
quential, looping, temporally-clustered, or probabilistic reference pattern.
Figure 1: Detection process: two-stage pipeline with one-level look-behind. (At each invocation, the monitor calculates the forward distances of the blocks referenced in the previous period, detects the reference pattern from the relationship between the block attribute values and the forward distances, both as seen at the previous invocation, and updates the block attributes of the blocks referenced in the current period.)
In the proposed DEAR
scheme, the detection of an application's reference pattern is made by associating attributes of
blocks with their forward distances 1 . An attribute of a block can be anything that can be obtained
from its past reference behavior including backward distance, frequency, inter-reference gap
[5], and k-th backward distance [3]. In this paper, we consider only two block attribute
types: backward distance, which is the time interval between the current time and the time of the
last reference 2 , and frequency, which is the number of past references to the block.
The detection is performed by a monitoring process that is invoked periodically. At the time of its
i-th invocation (we denote this time by m_i), the monitoring process calculates the forward distances
(as seen from the standpoint of m_{i-1}) of the blocks referenced between m_{i-1} and m_i. These forward
distances are associated with block attribute values by two ordered lists, one according to backward
distance and the other according to frequency. Each ordered list is divided into a fixed number of
sublists of equal size. Based on the relationship between the attribute value of each sublist and the
average forward distance of blocks in the sublist, the block reference pattern of the application is
deduced.
After the detection, the block attributes of the blocks referenced between m
are updated
for the next detection. As shown in Figure 1, the detection process is essentially a two-stage pipeline
with one-level look-behind since the detection at m i
is made based on the relationship between the
block attribute values and the forward distance at m i\Gamma1 .
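As an illustration of this monitoring step, the following C sketch (purely illustrative; the struct blk type, the equal-sublist-size assumption, and the function names are ours and not the kernel data structures described in Section 3) orders the blocks referenced in the last detection period by one attribute and computes the per-sublist average forward distances that the detection rules below operate on.

#include <stdlib.h>

struct blk {
    long attr;   /* attribute value recorded at m_{i-1} (backward distance or frequency) */
    long fwd;    /* forward distance as seen at m_{i-1}, learned during (m_{i-1}, m_i]   */
};

static int by_attr(const void *a, const void *b)
{
    const struct blk *x = a, *y = b;
    return (x->attr > y->attr) - (x->attr < y->attr);
}

/* Order the blocks referenced in the last detection period by one attribute,
 * split the ordered list into nsub sublists of (nearly) equal size, and
 * return the average forward distance of each sublist in avg_fd[].
 * Assumes n >= nsub. */
static void sublist_averages(struct blk *blks, int n, int nsub, double *avg_fd)
{
    qsort(blks, n, sizeof *blks, by_attr);
    int per = n / nsub;
    for (int s = 0; s < nsub; s++) {
        int lo = s * per;
        int hi = (s == nsub - 1) ? n : lo + per;
        double sum = 0.0;
        for (int i = lo; i < hi; i++)
            sum += (double)blks[i].fwd;
        avg_fd[s] = sum / (double)(hi - lo);
    }
}

The same routine would be run twice per invocation, once for the list ordered by backward distance and once for the list ordered by frequency.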
1 The forward distance of a block is defined as the time interval between the current time and the time of the next reference to the block.
2 In this paper, we assume that the (virtual) time is incremented on each block reference.

Figure 2: Example of block reference pattern detection.

As an example, consider Figure 2. Assume that the detection period is 10 as measured in the number of block references made by the associated application. Also assume that between m_{i-1} and m_i the six distinct blocks shown in Figure 2-(b) were referenced in the given order. Finally, assume that at m_{i-1} the backward distance and frequency of these six blocks were 15, 12, 25, 4, 20, 9 and 6, 4, 5, 2, 1, 1, respectively (see Figure 2-(a)). Note that these blocks have forward distances of 1, 2, 3, 4, 6, 7, respectively, as seen at m_{i-1}
. From the information about the block attribute values and the forward distance,
the DEAR scheme constructs two ordered lists, one according to backward distance and the other
according to frequency (see Figure 2-(c)). Each list is divided into a number of sublists of equal
size (3 sublists of size 2, in this example). Then various rules for detecting reference patterns,
which are explained below, are applied to the two lists. In this particular example, blocks with
higher frequency have smaller forward distance, which allows us to deduce that the block reference
pattern of the given application follows a probabilistic reference pattern. The detection rules for
the probabilistic reference pattern and the other reference patterns can be more formally stated as
follows:
Sequential Pattern: A sequential reference pattern has the property that all blocks are referenced one after the other and never referenced again. In this pattern, the average forward distance of all the sublists is infinite. Therefore, a reference pattern is sequential if Avg_fd(sublist_i^bd) = Avg_fd(sublist_i^fr) = ∞ for all i, where sublist_i^bd and sublist_i^fr are the i-th sublists for the backward distance and frequency block attribute types, respectively, and Avg_fd(sublist) is the average forward distance of the blocks in sublist.

Looping Pattern: A looping reference pattern has the property that blocks are referenced repeatedly with a regular interval. In this pattern, a block with a larger backward distance has a smaller forward distance. Therefore, a reference pattern is looping if the following relationship holds: if the blocks in sublist_i^bd have smaller backward distances than those in sublist_j^bd, then Avg_fd(sublist_i^bd) >= Avg_fd(sublist_j^bd).

Temporally-clustered Pattern: A temporally-clustered reference pattern has the property that a block referenced more recently will be referenced sooner in the future. Thus, a block with a smaller backward distance has a smaller forward distance. Therefore, a reference pattern is temporally-clustered if the following relationship holds: if the blocks in sublist_i^bd have smaller backward distances than those in sublist_j^bd, then Avg_fd(sublist_i^bd) <= Avg_fd(sublist_j^bd).

Probabilistic Pattern: A probabilistic reference pattern has a non-uniform block reference behavior that can be modeled by the Independent Reference Model (IRM) [16]. Each block b_i has a stationary probability p_i and all blocks are independently referenced with the associated probabilities. Under the stationary and independent condition, the expected forward distance of b_i is proportional to 1/p_i. Thus, a block with a higher frequency has a smaller forward distance. Therefore, a reference pattern is probabilistic if the following relationship holds: if the blocks in sublist_i^fr have higher frequencies than those in sublist_j^fr, then Avg_fd(sublist_i^fr) <= Avg_fd(sublist_j^fr).
In the DEAR scheme, different replacement policies are used for different applications depending
on the detected reference pattern. For the sequential and looping reference patterns, the MRU
replacement policy is used where the block with the smallest backward distance is always selected
for replacement. For the temporally-clustered reference pattern, the LRU replacement policy, which
replaces the block with the largest backward distance, is used. Finally, for the probabilistic reference
pattern, the LFU replacement policy that replaces the block with the lowest reference frequency is
used.
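The detection rules and the policy mapping above can be summarized by the following sketch. This is hypothetical code, not the FreeBSD implementation of Section 3; the sublist-average arrays, the inf sentinel for blocks that are never referenced again, the sublist ordering convention, and the priority order of the checks are all assumptions.

enum pattern { PAT_SEQUENTIAL, PAT_LOOPING, PAT_TEMP_CLUSTERED, PAT_PROBABILISTIC, PAT_UNDETECTED };
enum policy  { POL_MRU, POL_LRU, POL_LFU };

/* avg_fd_bd[i]: average forward distance of the i-th sublist of the list ordered
 * by backward distance (smaller i = smaller backward distance).
 * avg_fd_fr[i]: same for the list ordered by frequency (smaller i = lower frequency).
 * inf marks blocks that were never referenced again. */
static enum pattern detect(const double *avg_fd_bd, const double *avg_fd_fr,
                           int nsub, double inf)
{
    int all_inf = 1, looping = 1, clustered = 1, prob = 1;
    for (int i = 0; i < nsub; i++)
        if (avg_fd_bd[i] < inf)
            all_inf = 0;
    if (all_inf)
        return PAT_SEQUENTIAL;
    for (int i = 0; i + 1 < nsub; i++) {
        if (avg_fd_bd[i] < avg_fd_bd[i + 1]) looping = 0;   /* larger bd => smaller fd  */
        if (avg_fd_bd[i] > avg_fd_bd[i + 1]) clustered = 0; /* smaller bd => smaller fd */
        if (avg_fd_fr[i] < avg_fd_fr[i + 1]) prob = 0;      /* higher freq => smaller fd */
    }
    /* The order of the checks when several rules hold is an assumption. */
    if (looping)   return PAT_LOOPING;
    if (clustered) return PAT_TEMP_CLUSTERED;
    if (prob)      return PAT_PROBABILISTIC;
    return PAT_UNDETECTED;
}

static enum policy policy_for(enum pattern p)
{
    switch (p) {
    case PAT_SEQUENTIAL:
    case PAT_LOOPING:        return POL_MRU; /* evict smallest backward distance   */
    case PAT_TEMP_CLUSTERED: return POL_LRU; /* evict largest backward distance    */
    case PAT_PROBABILISTIC:  return POL_LFU; /* evict lowest reference frequency   */
    default:                 return POL_LRU; /* default policy when undetected     */
    }
}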
Figure 3: Overall structure of the DEAR scheme in FreeBSD 2.2.5.
3 Implementation of the DEAR Scheme in FreeBSD
Figure
3 shows the overall structure of the buffer cache manager for the DEAR scheme as implemented
in FreeBSD 2.2.5. The DEAR scheme applies different replacement policies to different
applications. This requires a split of the buffer cache management module into two parts, one for
block allocation and the other for block replacement. The module responsible for block allocation
is the System Cache Manager (SCM). There is one SCM in the system. The module responsible
for block replacement is the Application Cache Manager (ACM). There is one ACM for each ap-
plication. This organization is similar to that proposed for application-controlled file caching [8].
Both of the modules are located in the VFS (Virtual File System) layer and collaborate with each
other for buffer allocation and block replacement.
An ACM is allocated to each process when the process is forked. When a block is referenced from
the process, the associated ACM is called by the bread() or bwrite() procedure in the SCM (1)
to locate the information about the referenced block using a hash table, (2) to update the block
attribute that is changed by the current reference, (3) to place the block into a linked list that
maintains the blocks referenced in the current detection period, and (4) to adjust the replacement
order according to the application-specific replacement policy. To maintain the replacement order,
the current implementation uses the linked list data structure for the LRU and MRU replacement
policies and the heap data structure for the LFU replacement policy.
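A rough sketch of this per-reference path is given below; all helper functions and types are assumptions introduced for illustration and do not correspond to actual FreeBSD 2.2.5 interfaces.

struct dear_binfo;                                                 /* per-block record */

extern struct dear_binfo *hash_lookup(long vnode_id, long blkno);  /* step (1)         */
extern long now(void);                                             /* virtual time     */
extern void binfo_touch(struct dear_binfo *b, long t);             /* step (2)         */
extern void period_list_add(struct dear_binfo *b);                 /* step (3)         */
extern int  policy_is_lfu(void);                                   /* current policy   */
extern void heap_update(struct dear_binfo *b);    /* reorder LFU heap,     step (4)    */
extern void list_move(struct dear_binfo *b);      /* reorder LRU/MRU list, step (4)    */

static void acm_reference(long vnode_id, long blkno)
{
    struct dear_binfo *b = hash_lookup(vnode_id, blkno);  /* (1) locate block info     */
    binfo_touch(b, now());                                /* (2) backward dist., freq. */
    period_list_add(b);                                   /* (3) current-period list   */
    if (policy_is_lfu())                                  /* (4) replacement order     */
        heap_update(b);
    else
        list_move(b);
}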
Figure 4: Interaction between ACM and SCM.
After the steps (1)-(4) are performed, a check is made to see whether the current detection period
is over. If so, the monitoring process explained in the previous section is invoked to detect the
application's reference pattern. The detected reference pattern dictates the replacement policy of
the ACM. If none of the detection conditions previously explained is satisfied, the default LRU
replacement policy is used.
The structure of information maintained for each block by the ACM is <vnode #, block #, backward distance, frequency, forward distance, hp, bp, fp, cp>. The pointer hp is used
to place the block into the hash table that is used to locate the information about the currently
referenced block. The pointers bp and fp are used to place the block into the ordered lists for the
backward distance and frequency block attribute types, respectively, which are constructed when
the monitoring process is invoked. Finally, the pointer cp is used to place the block into the list of
blocks referenced in the current detection period. This data structure is the main space overhead
of the DEAR scheme.
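A minimal C rendering of this record, assuming plain pointers for the four links (the actual implementation may use different types and linkage), is:

struct vnode;                        /* opaque here; defined by the kernel */

struct dear_binfo {
    struct vnode      *vp;           /* vnode # : the file the block belongs to          */
    long               blkno;        /* block # within that file                         */
    long               bdist;        /* backward distance: now - time of last reference  */
    long               freq;         /* frequency: number of past references             */
    long               fdist;        /* forward distance, filled in by the monitor       */
    struct dear_binfo *hp;           /* hash-table chain for locating referenced blocks  */
    struct dear_binfo *bp;           /* link in the list ordered by backward distance    */
    struct dear_binfo *fp;           /* link in the list ordered by frequency            */
    struct dear_binfo *cp;           /* link in the list of blocks referenced in the
                                        current detection period                         */
};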
The main time overhead of the DEAR scheme is that needed to order the blocks according to
each block attribute value, which has an O(n log n) time complexity where n is the number of
distinct blocks referenced in the detection period. This operation is invoked once at the end of
each detection period for each block attribute type. Other time overheads include those needed
to calculate the forward distance, backward distance, and frequency of blocks at the end of each
detection period, which has a time complexity of O(n) where n is the number of distinct blocks
referenced in the detection period.
The ACM and SCM interact with each other as depicted in Figure 4. When an application misses
in the buffer cache, the ACM for the application makes a request to the SCM for additional buffer
space (step (1) in Figure 4). If the SCM does not have any free buffer space, it sends a replacement
request to one of the ACMs (step (2)). This operation is performed in the getnewbuf() procedure
in the SCM, and the first choice is an ACM associated with an application whose current reference
pattern is sequential. If there is no such application, the SCM simply chooses the ACM of the
application with the global LRU block. This strategy is similar to the one used in the application-controlled
file caching [8]. The selected ACM decides the victim block to be replaced using its
current replacement policy (step (3)) and deallocates its space to the SCM (step (4)). The SCM
allocates this space to the ACM that requested the space (step (5)).
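The victim-selection part of this interaction (steps (2) and (3)) can be sketched as follows; the helper functions are assumptions that stand in for the corresponding SCM/ACM operations, not the actual kernel code.

struct acm;                                       /* one Application Cache Manager per application */
struct buf_space;

extern struct acm       *acm_with_sequential_pattern(void);
extern struct acm       *acm_of_global_lru_block(void);
extern struct buf_space *acm_evict_victim(struct acm *a);   /* applies the ACM's own policy */

static struct buf_space *scm_reclaim(void)
{
    /* First choice: an application currently detected as sequential,
     * since its blocks will not be re-referenced. */
    struct acm *victim_acm = acm_with_sequential_pattern();
    if (victim_acm == NULL)
        victim_acm = acm_of_global_lru_block();   /* fall back to the owner of the global LRU block */
    /* The selected ACM picks the victim block with its current replacement
     * policy (MRU, LRU, or LFU) and returns the freed buffer space. */
    return acm_evict_victim(victim_acm);
}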
4 Performance Evaluation
In this section, we present the results of the performance evaluation of the DEAR scheme. We first
describe the experimental set-up. Then, we give the results of reference pattern detection followed
by the performance measurement results for both single applications and multiple applications. We
also give results from sensitivity analysis for different cache sizes, detection periods, and numbers
of sublists. Finally, we compare the performance of the DEAR scheme with that of application-controlled
file caching [8].
4.1 Experimental Set-up
The experiments were conducted with FreeBSD 2.2.5 on a 166MHz Intel Pentium PC with 64MB
RAM and a 2.1GB Quantum Fireball hard disk. The applications we used are described below and
are summarized in Table 1.
cscope Cscope is an interactive C-source examination tool. It creates an index file named cscope.out
from C sources and answers interactive queries like searching C symbols or finding specific
functions or identifiers. We used cscope on kernel sources of roughly 9MB in size and executed
queries that search for five literals.
Table 1: Characteristics of the applications.

Application  Description                 Input data (MB)
cscope       C-source examination tool   C code
glimpse      information retrieval tool  text files (5-50)
sort         sort utility                text files (4.5)
link         UNIX link editor            object files (2.5)
cpp          C preprocessor              C code (1-10)
gnuplot      GNU plotting utility        numeric data (8)
postgres1    relational DB system        two relations (7.5-15)
postgres2    relational DB system        four relations (0.05-15)
glimpse Glimpse is a text information retrieval utility. It builds indexes for words and performs
fast searching. Text files of roughly 50MB in size were indexed resulting in about 5MB of
indexes. A search was done for lines that contain the keywords multithread, realtime, DSM,
continuous media, and diskspace.
sort Sort is a utility that sorts lines of text files. A 4.5MB text file was used as input, and this file
was sorted numerically using the first field as the key.
link Link is a UNIX link-editor. We used this application to build the FreeBSD kernel from about
2.5MB of object files.
cpp Cpp is the GNU C-compatible compiler preprocessor. The kernel source was used as input
with the size of header files and C-source files of about 1MB and 10MB, respectively.
gnuplot Gnuplot is a command-line driven interactive plotting program. Using 8MB raw data,
the program plotted three-dimensional plots four times with different points of view.
postgres1 and postgres2 Postgres is a relational database system from the University of California
at Berkeley. PostgreSQL version 6.2 and relations from a scaled-up Wisconsin benchmark
were used. Postgres1 is a join between the hundredthoustup and twohundredthoustup relations
while postgres2 is a join among four relations, namely, fivehundredup, twothoustup,
twentythoustup, and twohundredthoustup. The sizes of fivehundredup, twothoustup, twen-
tythoustup, hundredthoustup, and twohundredthoustup are approximately 50KB, 150KB,
1.5MB, 7.5MB, and 15MB, respectively.
4.2 Detection Results
Figure 5: Block reference patterns and detection results for cscope and cpp.
Figure
5 shows the results of the detection by the DEAR scheme for the cscope and cpp applications.
In each graph, the x-axis is the virtual time that increments on each block reference and the y-axis
is the logical block numbers of those referenced at the given time. The detection results are given at
the top of the graph assuming a detection period of 500 references. For cscope, the DEAR scheme
initially detects a sequential reference pattern but changes its detection to a looping reference
pattern after the sequentially referenced blocks are re-accessed. This behavior results from cscope
always reading the file cscope.out sequentially whenever it receives a query about the C source. For
cpp, the DEAR scheme detects a probabilistic reference pattern throughout the execution since as
we can see from the graph, some blocks are more frequently accessed than others. This reference
pattern results from the characteristic of cpp that header files are more frequently referenced than
the C-source files.
Figure
6 shows the detection results of the other applications. Although the result shows that the
DEAR scheme performs reasonably well for the other applications, it also reveals the limitation of
the current DEAR scheme, notably for the sort and postgres2 applications. They have either parallel
or nested reference streams, which indicates a need for the proposed DEAR scheme to address more
general reference patterns with arbitrary control structures such as parallel, sequence, and nested.
Figure 6: Block reference patterns and detection results for the other applications.
Figure 7: Single application performance. (a) Number of disk I/Os. (b) Response time.
4.3 Performance Measurements: Single Applications
We compared the performance of each application under the DEAR scheme with not only that
under the LRU scheme built in FreeBSD but also with those under the LFU and MRU schemes.
For this purpose, we implemented the DEAR scheme as well as the LFU and MRU schemes in
FreeBSD. We measured both the number of disk I/Os and the response time of each application
for a 6MB buffer cache with block size set to 8KB. For the DEAR scheme, we set the length of the
detection period to 500 and the number of sublists in the ordered lists to 5 for both the backward
distance and frequency block attribute types. The performance of the DEAR scheme for different
cache sizes, different detection periods, and different numbers of sublists in the ordered lists is
discussed in Section 4.5.
Figure
7 shows the number of disk I/Os and the response time of the four schemes. The values
reported here are the average of three separate executions and before each execution, the system
was rebooted to eliminate any effects resulting from prior buffer cache contents. From the results
we observe the following:
- The DEAR scheme performs almost as well as the best of the other three schemes for all the applications we considered. Also, when compared with the LRU scheme in FreeBSD, the number of disk I/Os is reduced by up to 51% (for the cscope application) with an average of 23% and the response time by up to 35% (also for the cscope application) with an average of 12%.
- For the link application, there is no performance difference among the four schemes. This is
because the input data to the link application is small (2.5MB), and thus all the blocks reside
in the buffer cache after they are initially loaded.
- Postgres1 and postgres2 do not show as much improvement in the response time as that in the
number of disk I/Os when using the DEAR scheme. This is because of the constant synchronization
between the client (the psql utility that provides the user interface) and the server
(the postgres process that performs the query processing and database management). For
the gnuplot application, much time was spent for user mode computation and thus reduction
in the number of disk I/Os has a limited impact on the response time.
- Except for the above three applications, the ratio between the reduction in the number of
disk I/Os and that in the response time is consistent. This indicates that the DEAR scheme
incurs little extra overhead over those in the other schemes.
The last point is more evident in Figure 8 where the response time is divided into three components:
I/O stall time, system time, and user time. For the LRU scheme of FreeBSD, the system time
consists of VFS processing time, buffer cache management time, disk driver processing time, disk
interrupt handling time, and data copy time from buffer cache to user space. On top of those, the
DEAR scheme requires additional processing time such as the time for sorting blocks according to
block attribute values and also for maintaining block attribute values and forward distances. From
Figure
8, we can notice that the system times of the two schemes are comparable meaning that the
DEAR scheme incurs little additional overheads.
4.4 Performance Measurements: Multiple Applications
In real systems, multiple applications execute concurrently competing for limited buffer space. To
test the DEAR scheme in such an environment, we ran several combinations of two or more of
the applications with a buffer cache of 6MB and measured the total number of disk I/Os and the
overall response time for both the DEAR scheme and the LRU scheme in FreeBSD. Again, we set
the length of the detection period to 500 and the number of sublists in the ordered lists to 5.
Figure 8: Decomposition of response time.
Figure 9: Multiple application performance. (a) Number of disk I/Os. (b) Overall response time.
Table 2: Performance comparison between the LRU-SEQ and the DEAR schemes (response time in seconds).

Scheme  cs+sort  gli+link  cs+wc  gli+wc
LRU     70.96    89.87     81.27  89.97
DEAR    66.61    74.29     62.88  82.36
The results in Figure 9 show that the number of disk I/Os is reduced by up to 20% (for the
cscope+sort+link case) with an average of 12% and the overall response time by up to 18% (for
the glimpse+link case) with an average of 8%.
In the multiple application case, there are two possible benefits from using the proposed DEAR
scheme. The first is from applying different replacement policies to different applications based on
their detected reference patterns. The second is from giving preference to blocks that belong to an
application with the sequential reference pattern when a replacement is needed. To quantify these
two different types of benefit, we performed an experiment where even the LRU replacement policy
gives preference to blocks belonging to an application with the sequential reference pattern, which
we call the LRU-SEQ replacement policy.
Table
2 shows the results of the LRU-SEQ scheme for the 6MB buffer cache size. In the case
of cscope+sort and glimpse+link, there is little difference between the LRU and the LRU-SEQ
schemes, since the reference pattern of the four component applications is not sequential in the
steady state. Replacing sort and link with wc, whose reference pattern is sequential, produces
a significant difference in the response time between the LRU and the LRU-SEQ schemes. This
results from the LRU-SEQ scheme allocating more buffer space to cscope (or glimpse) by replacing
blocks of the wc application earlier than the usual LRU order. Still, there is a substantial difference
in the response time between the LRU-SEQ scheme and the DEAR scheme indicating that the
benefit from applying different replacement policies tailored for different applications is significant.
4.5 Sensitivity Analysis
Cache Size. Tables 3 and 4 compare the performance of the DEAR scheme against the LRU scheme for various buffer cache sizes for the single and multiple application cases, respectively. The results from the single application case show that as long as the total number of distinct blocks accessed by an application is greater than the number of blocks in the buffer cache, there is a substantial difference in the response time between the DEAR and the LRU schemes. However, when the number of distinct blocks of an application is smaller than the number of blocks in the buffer cache, all the blocks are cached in the buffer cache and the two schemes show similar performance. This behavior is most visible for the link application that has the smallest number of distinct blocks (about 310 blocks).
For the multiple application case, the case where the total number of distinct blocks accessed by the component applications is smaller than the number of blocks in the buffer cache does not occur and the DEAR scheme shows consistently better performance than the LRU scheme.

Table 3: Single application performance for various buffer cache sizes (response time in seconds).

Application  Scheme  Response Time
cscope       DEAR    16.99   14.90   12.87   11.17
cscope       LRU     19.79   19.79   19.77   19.77
glimpse      DEAR    39.12   35.68   33.73   32.87
link         DEAR    28.19   23.38   23.38   23.38
cpp          DEAR    132.94  94.42   91.61   91.36
gnuplot      DEAR    43.54   42.26   41.39   41.19
postgres1    DEAR    38.37   36.16   34.22   32.17
postgres2    DEAR    74.57   72.51   71.15   68.45
postgres2    LRU     82.93   74.93   74.75   73.93

Table 4: Multiple application performance for various buffer cache sizes (response time in seconds).

Applications   Scheme  Response Time
cs+sort        DEAR    70.4   66.6   62.9   53.5
cs+sort        LRU     71.5   70.9   69.9   67.3
gli+link       DEAR    79.1   74.2   71.6   70.1
gli+link       LRU     94.5   89.8   79.1   77.9
cpp+ps1        DEAR    222.2  216.7  209.8  202.4
gli+ps2        DEAR    145.6  139.8  132.5  128.3
cs+sort+link   DEAR    116.1  112.7  106.7  101.8
cs+sort+link   LRU     121.3  118.0  112.8  105.3
               LRU     246.3  245.3  225.9  222.9

Detection Period and the Number of Sublists. Determining the length of the detection period is an important design issue that requires a trade-off. If the detection period is too long, the scheme will not be adaptive to possible changes of the reference pattern within a detection period. On the other hand, if the period is too short, the scheme would incur too much overhead to be practical. Moreover, if the period is too short, a short burst of references may mislead the detection. For example, a probabilistic reference pattern may be mistaken for a looping reference pattern when a small number of blocks are repeatedly accessed over two detection periods.

Table 5: The effect of the detection period on the performance of the DEAR scheme for the single application case (response time in seconds).

Detection Period  cscope  glimpse  sort   cpp    gnuplot  postgres1  postgres2
100               12.85   33.70    13.72  98.81  40.92    34.62      76.56
500               12.87   33.73    13.60  91.61  41.39    34.22      71.15
1000              13.52   36.26    13.88  91.78  41.66    34.53      72.41
2000              15.20   36.45    15.77  91.99  42.36    34.84      72.53

Table 6: The effect of the detection period on the performance of the DEAR scheme for the multiple application case (response time in seconds).

Detection Period  cs+sort  gli+link  cpp+ps1  gli+ps2  cs+sort+link  gli+sort+cpp
100               66.68    73.54     236.29   144.67   108.86        251.54
500               66.61    74.29     216.73   139.88   112.73        235.56
1000              67.34    74.99     216.91   139.24   116.30        238.70
2000              68.70    81.69     219.38   139.34   116.84        241.41
The above trade-off relationship is evident in Table 5 that gives the response time of all but the
link application as the detection period varies from 100 to 2000. We exclude the link application
since as we mentioned earlier all of its blocks fit into the buffer cache. Thus different detection
periods do not make any difference. For most of the remaining applications, the best performance
was obtained when the detection period is either 250 or 500. The results also show that even with
detection periods that are considerably smaller or larger than these optimal values, the DEAR
scheme performs better than the LRU scheme in FreeBSD. The exceptions are with the cpp and
postgres2 applications when the detection period is 100. In these two cases, the performance degradation
is considerably larger than the others at the detection period of 100. A careful inspection
of the results revealed that when the detection period is 100 the DEAR scheme mistakenly detects
both applications to have a looping reference pattern when in reality it was part of a probabilistic
reference pattern. The multiple application case shows a similar effect of the detection period on
the performance as we can see in Table 6.
For the sequential reference pattern, we can use a simpler detection rule that checks whether the
referenced block numbers are consecutive and this detection can be made early in the execution of
an application. We experimented with this optimization and Table 7 shows the results assuming
the buffer cache size is 6MB. In the experiment, the DEAR scheme with early detection tries to
identify a sequential reference pattern within 20 block references and if not successful, it reverts to
the original DEAR scheme with a detection period of 500.
Table 7: Performance of early detection of sequential reference patterns (response time in seconds).

Application       DEAR    DEAR with Early Detection
cscope            12.87   12.82
glimpse           33.73   33.36
cscope+sort       66.61   65.45
glimpse+link      74.29   73.18
cscope+sort+link  112.73  108.19
Table 8: The effect of the number of sublists on the detection results of the DEAR scheme.

Application  Number of sublists = 3               Number of sublists = 5               Number of sublists = 7
cscope       seq[3],loop[8]                       seq[3],loop[8]                       seq[3],loop[8]
glimpse      seq[4],loop[8]                       seq[4],loop[8]                       seq[3],loop[9]
link         seq[3],loop[5]                       seq[3],loop[5]                       seq[3],loop[5]
cpp          prob[18]                             prob[18]                             prob[13],undetect[5]
gnuplot      seq[1],loop[6]                       seq[1],loop[6]                       seq[1],loop[6]
postgres1    seq[5],loop[16]                      seq[5],loop[16]                      seq[5],loop[16]
postgres2    prob[13],loop[5],seq[2],undetect[1]  prob[12],loop[4],seq[2],undetect[3]  prob[11],loop[3],seq[2],undetect[5]
The results show that in the case of single application executions the DEAR scheme with early
detection shows little improvement over the original DEAR scheme. This is because the original
DEAR scheme can determine an appropriate replacement policy before block replacements are
made since there are more blocks in the buffer cache (about 750 blocks when the buffer cache size is
6MB and the block size is 8KB) than the detection period. For the multiple application executions,
the early detection scheme shows a larger improvement since early detection of sequential reference
patterns allows more effective buffer allocation but still the improvement is not significant.
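A possible form of the early-detection check described above is sketched below; the EARLY_WINDOW constant reflects the 20-reference window mentioned in the text, while the state layout and function names are illustrative assumptions.

#define EARLY_WINDOW 20

struct early_state {
    int  nrefs;        /* references seen so far                */
    long last_blkno;   /* previous block number                 */
    int  consecutive;  /* still strictly consecutive?           */
};

static void early_init(struct early_state *st)
{
    st->nrefs = 0;
    st->consecutive = 1;
}

/* Returns nonzero once EARLY_WINDOW strictly consecutive block numbers have been
 * seen; the caller then switches to the MRU policy immediately instead of waiting
 * for a full detection period. */
static int early_sequential(struct early_state *st, long blkno)
{
    if (st->nrefs > 0 && blkno != st->last_blkno + 1)
        st->consecutive = 0;
    st->last_blkno = blkno;
    st->nrefs++;
    return st->consecutive && st->nrefs >= EARLY_WINDOW;
}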
The number of sublists used in the detection process can affect the detection results of the DEAR
scheme. Table 8 gives the detection results of the DEAR scheme as the number of sublists increases
from three to seven. From the results, we can notice that the number of sublists hardly affects the
detection results although there is a slight increase in the number of undetected cases as the number
of sublists increases due to a stricter detection rule. Remember that to detect a reference pattern, the associated detection rule must hold for all the sublists.
4.6 Comparison with Application-controlled File Caching
To compare the performance of the DEAR scheme with that of application-controlled file caching
(ACFC) [8], we performed trace-driven simulations with the same set of three application traces
used in [8]. Figure 10 shows the miss ratio of the three applications for the LRU, ACFC, DEAR,
and OPT (off-line optimal) schemes when cache size increases from 1MB to 16MB. The results for
the LRU, ACFC, and OPT schemes were borrowed from [8] and those for the DEAR scheme were
obtained by simulating the DEAR scheme with detection period equal to 500 and the number of
sublists in the ordered list equal to 5 for both backward distance and frequency block attribute
types. The results show that the miss ratio of the DEAR scheme is comparable to that of the ACFC
scheme, which utilizes user-level hints to guide the replacement decisions. The small difference
between the two schemes results from the misses that occur before the DEAR scheme has a chance
to detect the reference pattern.
5 Conclusions and Future Work
In this paper, we proposed a new buffer management scheme called DEAR (DEtection based
Adaptive Replacement) that automatically detects the block reference pattern of applications as
sequential, looping, temporally-clustered, or probabilistic without any user intervention. Based on
the detected reference pattern, the proposed DEAR scheme applies an appropriate replacement
policy to each application.
We implemented the DEAR scheme in FreeBSD 2.2.5 and measured its performance using several
real applications. The results showed that compared with the buffer management scheme in
FreeBSD the proposed scheme reduces the number of disk I/Os by up to 51% (with an average of
23%) and the response time by up to 35% (with an average of 12%) in the case of single application
executions. For multiple applications, the reduction in the number of disk I/Os is by up to 20%
(with an average of 12%) while the reduction in the overall response time is by up to 18% (with an
average of 8%).
We also compared the performance of the DEAR scheme with that of application-controlled file
caching [8] through trace-driven simulations. The results showed that the DEAR scheme performs
comparably to application-controlled file caching for the traces considered.

Figure 10: Comparison with application-controlled file caching. (a) cscope. (b) linking kernel. (c) Postgres.
As we noted in Section 4.2, some applications have block reference behavior that cannot be characterized
by a single reference pattern. One direction for future research is to extend the current
DEAR scheme so that it can detect more complex reference patterns with parallel, sequence, and
nested structures as well as to develop appropriate replacement policies for them. Another direction
for future research is to study more advanced buffer allocation strategies for the DEAR
scheme than the simple strategy explained in Section 3. A good buffer allocation strategy for the
DEAR scheme should reward more to applications with larger reductions in the number of disk
I/Os while preventing any one application from monopolizing the buffer space. Other directions
for future research include applying the detection capability of the DEAR scheme to prefetching
and considering block attribute types other than backward distance and frequency.
--R
The Design of the UNIX Operating System.
"Data Cache Management Using Frequency-Based Replacement,"
"The LRU-K Page Replacement Algorithm for Database Disk Buffering,"
"On the Existence of a Spectrum of Policies that subsumes the Least Recently Used (LRU) and Least Frequently Used (LFU) Policies,"
"An Inter-Reference Gap Model for Temporal Locality in Program Behavior,"
"A Generalized Interval Caching Policy for Mixed Interactive and Long Video Workloads,"
"Flexible and Adaptable Buffer Management Techniques for Database Management Systems,"
"Application-Controlled File Caching Policies,"
"Informed Prefetching and Caching,"
"Adaptive Page Replacement Based on Memory Reference Behavior,"
"Automatic Compiler-Inserted I/O Prefetching for Out-of-Core Applications,"
"A Static Analysis of I/O Characteristics of Scientific Applications in a Production Workload,"
"Characterization of Database Access Pattern for Analytic Prediction of Buffer Hit Probability,"
"Mea- surements of a Distributed File System,"
"Design and Implementation of Symphony: An Integrated Multimedia File System,"
Operating Systems Theory.
--TR
--CTR
Yannis Smaragdakis, General adaptive replacement policies, Proceedings of the 4th international symposium on Memory management, October 24-25, 2004, Vancouver, BC, Canada | performance evaluation;reference pattern;buffer cache;FreeBSD;replacement policy |
627222 | Using Application Benefit for Proactive Resource Allocation in Asynchronous Real-Time Distributed Systems. | This paper presents two proactive resource allocation algorithms, called RBA* and OBA, for asynchronous real-time distributed systems. The algorithms consider an application model where timeliness requirements are expressed using Jensen's benefit functions and propose adaptation functions to describe anticipated application workload during future time intervals. Furthermore, the algorithms consider an adaptation model, where application processes are dynamically replicated for sharing workload increases and a switched real-time Ethernet network as the underlying system model. Given such models, the objective of the algorithms is to maximize aggregate application benefit and minimize aggregate missed deadline ratio. Since determining the optimal allocation is computationally intractable, the algorithms heuristically compute near-optimal resource allocations in polynomial-time. While RBA* analyzes process response times to determine resource allocation decisions, which is computationally expensive, OBA analyzes processor overloads to compute its decisions in a much faster way. RBA* incurs a quadratic amortized complexity in terms of process arrivals for its most computationally intensive component when DASA is used as the underlying scheduling algorithm, whereas OBA incurs a logarithmic amortized complexity for the corresponding component. Our benchmark-driven experimental studies reveal that RBA* produces a higher aggregate benefit and lower missed deadline ratio than OBA. | set PR ? p1;p2;p3; .;p . We assume that the clocks of
the processors are synchronized using an algorithm such as
[21]. Furthermore, we use the nonpreemptive version of the
process-scheduling algorithm used at the processors for
scheduling packets at the switch. This is done for system
homogeneity and the consequent simplicity that we obtain
in the system model.
For process scheduling and packet scheduling, we consider
best-effort real-time scheduling algorithms including
DASA [15], LBESA [22], RED [23], and RHD [24]. We
consider best-effort algorithms as they are shown to
outperform EDF [25] during overloaded situations and
perform the same as EDF during underloaded situations
where EDF is optimal [15], [22], [24].
Given the application, adaptation, and system models
described in Sections 2, 3, and 4, respectively, our objective
is to maximize the aggregate task benefit and minimize the
aggregate task missed deadline ratio during the future time
window of the task adaptation functions.
We define the aggregate task benefit as the sum of the
benefit accrued by the execution of each task during the
future time window. We define the aggregate task missed
deadline ratio as the ratio of the number of task executions
during the future time window that missed their deadlines
to the total number of task executions during the window.
Note that, during the future time window, each task may
execute multiple times.
Thus, the problem that we are solving in this paper can
be informally stated as follows:
Given adaptation functions for each task in the application that
may have arbitrary shapes and thus define an arbitrary task
workload, what is the number of replicas needed for each subtask
(of each task) for each possible execution? Furthermore, what is the
processor assignment for executing the replicas such that the
resulting resource allocation will maximize the aggregate task
benefit and minimize the aggregate task missed deadline ratio,
during the future time window of the task adaptation functions?
We show that this problem is NP-hard in [13]. Thus,
RBA* and OBA are heuristic algorithms that solve the
problem in polynomial-time, but do not necessarily
determine the number of subtask replicas and their
processor assignment that will yield the maximum aggregate
task benefit and minimum aggregate task missed
deadline ratio.
Since the objective of resource allocation is to maximize
aggregate benefit and minimize aggregate missed deadline
ratio, the desired properties of the RBA* algorithm include:
1. Allocate resources in the decreasing order of task benefits.
By doing so, we increase the possibility of maximizing
aggregate benefit as the task selected next for
resource allocation is always the one with the largest
benefit among the unallocated tasks.
2. Allocate resources for each task until its deadline is
satisfied. By doing so, we maximize the possibility of
minimizing the aggregate task missed deadline ratio.
Furthermore, there is no reason to allocate resources
for a task once its deadline is satisfied since the task
benefit functions are step-functions that yield zero
benefit after the deadline.
3. Deallocate resources for a task if its deadline cannot be
satisfied. By doing so, we save system resources which
can be potentially used for satisfying deadlines of
lower benefit tasks. This will increase the possibility
of satisfying the deadlines of greater number of
lower benefit tasks, resulting in potential contributions
of nonzero benefit from them toward aggregate
task benefit.
4. Deallocate resources for a task at any point in time during the resource allocation process if timeliness of a higher benefit task is adversely affected. Observe that, when resources are being allocated for a task, we may reach a point before the satisfaction of the task deadline after which any more increase in resources for the task may negatively affect the timeliness of higher benefit tasks, decreasing the aggregate task benefit that is accrued so far. At such points, it is not obvious what choice (whether to continue the allocation for the task or to stop and deallocate) can yield higher aggregate benefit. For example, it may be possible that continuing the resource allocation for the task may eventually satisfy its deadline (at the expense of one or more higher benefit tasks). Furthermore, this may also satisfy the deadlines of a greater number of lower benefit tasks, resulting in greater aggregate task benefit than the benefit that would be achieved if we were to deallocate the task and proceed to the next lower benefit task. At such points of "diminishing returns," RBA* makes the choice of deallocating all resources allocated to the task so far. The rationale behind this choice is that, since it is not clear how many higher benefit tasks will have to "pay" for satisfying the task deadline, it may be best not to "disturb" the aggregate benefit that is accrued so far. Moreover, since resources are always allocated in decreasing order of task benefits, the chances of obtaining a higher aggregate benefit are higher by satisfying as many high benefit tasks as possible.
5. Decompose task-level resource allocation problem into
subtask-level resource allocation problems. The rationale
behind this heuristic is that solving a task-level
resource allocation problem such as determining the
replica needs of subtasks of a task and their end-hosts that will satisfy the task deadline may require a holistic analysis of the system. This can be computationally
expensive. Therefore, by decomposing the problem
into subproblems and solving the subproblems, we
seek to reduce the overhead of computing a near-optimal
solution. Since we are focusing on step-
benefit functions for tasks, the decomposition can be
done by assigning deadlines to subtasks and messages
of a task from the task deadline in such a way that if all
subtasks and messages of the task can meet their
respective deadlines, then the task will be able to meet
its deadline. Using this heuristic, we can now
determine the replica needs of a task that will satisfy
the task deadline by determining the replica needs of
subtasks of the task that will satisfy the subtask
deadlines.
Thus, RBA* performs resource allocation according to
the heuristic choices discussed here. We now summarize
the algorithm as follows:
RBA* performs resource allocation when user-modifica-
tions to adaptation functions of application tasks are
detected. Since the anticipated workload may be different
for different task periods in the time window specified by
the adaptation functions, the algorithm allocates resources
for each period in the time window of the adaptation
functions, starting from the earliest period and proceeding
to the latest.
When triggered, the algorithm first sorts all tasks
according to their benefits. For each task and for each
adaptation period (in decreasing order of task benefits and
period occurrences, respectively), RBA* determines the
number of replicas needed for each subtask of the task
and their processor assignment that will satisfy the subtask
deadline for the current period. While computing the
number of replicas for the subtask of a task, if the timeliness
of a higher benefit task is affected or if the task is found to
be infeasible, the algorithm deallocates all allocated replicas
and proceeds to the next adaptation period.
The pseudocode of RBA* at the highest level of
abstraction is shown in Fig. 3.
To efficiently determine the next task with the highest
benefit for resource allocation, the algorithm initially
constructs a heap for the task set, which has task benefit
as key values of the heap nodes. This enables the algorithm
to (efficiently) determine the next task for allocation by
performing an "Extract-Max" operation on the heap.
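For concreteness, a sketch of such a benefit-keyed max-heap is given below. The task type and the assumption that the array already satisfies the heap property are ours; the point is only that the next task for allocation is obtained with an Extract-Max in O(log n) time.

struct task { double benefit; /* ... other task fields ... */ };

static void sift_down(struct task **h, int n, int i)
{
    for (;;) {
        int l = 2 * i + 1, r = l + 1, m = i;
        if (l < n && h[l]->benefit > h[m]->benefit) m = l;
        if (r < n && h[r]->benefit > h[m]->benefit) m = r;
        if (m == i)
            return;
        struct task *t = h[i]; h[i] = h[m]; h[m] = t;
        i = m;
    }
}

/* Remove and return the highest-benefit task; *n is the heap size (must be > 0)
 * and h[] is assumed to already satisfy the max-heap property on benefit. */
static struct task *extract_max(struct task **h, int *n)
{
    struct task *top = h[0];
    h[0] = h[--*n];
    sift_down(h, *n, 0);
    return top;
}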
We now discuss how RBA* determines the number of
replicas for a subtask and their processor assignment in the
subsections that follow. Section 6.1 discusses how RBA*
assigns deadlines to subtasks and messages from the task
deadline. To determine the number of replicas needed for a
subtask that will satisfy the subtask deadline, RBA*
analyzes subtask response times. We discuss the steps
involved in determining subtask response times in
Section 6.2 and present a response time analysis algorithm in Section 6.3. Finally, we present the algorithm that determines the number of subtask replicas and their processors in Section 6.4.

6.1 Deadline Assignment of Subtasks and Messages
The problem of subtask and message deadline assignment from task deadlines has been studied in a different context [26]. The equal flexibility (EQF) strategy presented in [26] assigns deadlines to subtasks and messages from the task deadline in a way that is proportional to subtask execution times and message communication delays, respectively.
The (relative) deadline of a subtask (or that of a message) is simply the sum of the execution time of the subtask (or the communication delay of the message) and a slack value. EQF defines the slack value for a subtask (or that of a message) as a percentage of the total available slack for the subtask (or the message).
The total available slack for a subtask (or a message) is simply the difference between the task deadline and the sum of the execution times and communication delays of all subtasks and messages that "succeed" the subtask (or the message) in the task structure. (Recall that we are assuming a "serial" structure for the task). The execution times and communication delays of subtasks and messages that precede the subtask (or the message) in the task structure are not considered in the total available slack since these latencies would already be incurred by the time the subtask starts execution (or the message starts transmission).
Now, the slack value for a subtask (or that of a message) is defined as a percentage of the total available slack for the subtask (or the message), where the percentage is the ratio of the subtask execution time (or the message communication delay) to the sum of the execution times and communication delays of all subtasks and messages that succeed the subtask (or the message) in the task structure. Thus, the higher the subtask execution time (or message communication delay), the higher will be the ratio, the higher will be the percentage, the higher will be the slack value, and the higher will be the subtask (or message) deadline.
RBA* uses EQF in the following way: The algorithm estimates subtask execution times and message communication delays using application-profile functions for a user-anticipated workload. The estimated execution times and message delays are then used to assign subtask and message deadlines, according to EQF, respectively.
Thus, the deadline of a subtask st_i^k for a workload of d is given by:

dl(st_i^k) = eex(st_i^k, d) + [dl(T_k) - sum_{j=i}^{m_k} eex(st_j^k, d) - sum_{j=i}^{m_k - 1} ecd(m_j^k, d)] * eex(st_i^k, d) / [sum_{j=i}^{m_k} eex(st_j^k, d) + sum_{j=i}^{m_k - 1} ecd(m_j^k, d)],

where m_k denotes the number of subtasks of task T_k. The deadline of a message m_i^k for a workload of d is given by:

dl(m_i^k) = ecd(m_i^k, d) + [dl(T_k) - sum_{j=i+1}^{m_k} eex(st_j^k, d) - sum_{j=i}^{m_k - 1} ecd(m_j^k, d)] * ecd(m_i^k, d) / [sum_{j=i+1}^{m_k} eex(st_j^k, d) + sum_{j=i}^{m_k - 1} ecd(m_j^k, d)].

Besides assigning deadlines to subtasks and messages from the task deadline, we also need to map task-level benefit into benefit values for subtasks and message-packets of the task. This is because best-effort scheduling algorithms such as DASA, LBESA, and RED that we are considering in this work use benefit values of subtasks and message-packets in making their scheduling decisions. Thus, we define the benefit of a subtask and that of a message-packet as simply the benefit of its parent task.

6.2 Estimating Subtask Response Time
The response time of a subtask st_i^j of a task T_j under fixed priority schedulers is given by the classical equation R_j = C_j + I_j, where R_j is the subtask response time, C_j is the subtask execution time, and I_j is the interference that the subtask experiences from other subtasks [27]. However, this equation is insufficient for best-effort real-time schedulers such as DASA and LBESA that we are considering in this work as they make decisions at each scheduling event that are functions of the remaining subtask execution times at the event. The remaining execution time of a subtask at a given time instant is the difference between the total execution time of the subtask and the time that the subtask has already spent being executed on the processor up to the time instant.
To determine the response time of a subtask on a processor, we need to know the scheduling events, which are the time instants at which the scheduler has to select a subtask from the ready queue. The scheduling events include the arrival times and completion times of the subtasks.
To determine subtask arrival times, we assume the following:
. A1: Each periodic task arrives at the beginning of its period;
. A2: Each aperiodic task arrives when the triggering message from its triggering periodic task arrives;
. A3: The response time of a subtask is the longest response time among all its replicas;
. A4: A message is assumed to arrive at its destination processor by its deadline assigned using EQF;
. A5: The first subtask of a task will arrive at the beginning of the period of its parent task; every other subtask of the task will arrive after the elapse of an interval of time (since the task period) that is equal to the sum of the message delays and subtask response times of all predecessor messages and all predecessor subtasks of the subtask, respectively.
A1 and A2 are straightforward assumptions as they are directly derived from the application model. Recall that the application model (see Section 2) assumes that an aperiodic task is triggered upon the completion of the execution of its triggering periodic task.
Assumption A3 is reasonable as all data objects passed to a subtask will be processed by the longest response time of the replicas of the subtask.
A4 is a pessimistic assumption, as it implies that all messages incur their worst-case communication delays (if they were to arrive by their deadlines). However, it is important to observe that the exact delay incurred by a message will depend upon, among other factors, the contention that the message experiences at the outgoing queue at the sender processor and at the switch. To determine this, we would need to determine all messages that are present at the sender processor and at the switch at the time instants when the message is generated at the sender processor and arrives at the switch, respectively. This would require a holistic analysis of the system, which can be computationally expensive. Thus, to reduce the computational overhead, we make the simplifying assumption that all messages arrive by their deadlines.
Assumption A5 is straightforward as it is directly derived from the precedence relationship between subtasks and messages of a task (see Section 2).
Thus, the arrival time of a subtask can be determined as the sum of the response times of all subtasks and deadlines of all messages that precede the subtask (under consideration) and the arrival time of the parent task of the subtask. Thus, given the arrival time of a task T_i, the arrival time of a subtask st_j^i of the task is given by

ArrivalTime(st_j^i) = ArrivalTime(T_i) + sum_{k=1}^{j-1} [ResponseTime(st_k^i) + dl(m_k^i)],

where ArrivalTime(x) denotes the arrival time of a subtask or a task x, ResponseTime(x) denotes the response time of a subtask x, and dl(x) denotes the deadline of a message x.
The arrival time of each subtask on a processor can thus be determined and an arrival list can be constructed. Note that the algorithm considers subtasks within a task according to their precedence-order. Therefore, when the algorithm determines the arrival time of a subtask, the response times of its predecessor subtasks would already have been determined.
Our eventual goal is to determine the subtask response times by examining the arrival list in increasing order of arrival times and applying the scheduling algorithm at each arrival time. For this purpose, the arrival list must be sorted according to the arrival times. This can be accomplished by inserting the arrival time of a subtask into an integer-ordered list at an integer position that corresponds to the subtask arrival time. Thus, when all subtask arrival times are determined and inserted into the list, the list automatically gets ordered according to arrival times.
Once the arrival times of subtasks are determined, RBA* estimates the anticipated workload during each task adaptation period using the task adaptation functions. For aperiodic tasks, the algorithm uses the period of their triggering periodic tasks as the task period. The anticipated workloads are then "plugged into" the application-profile functions to estimate the subtask execution times during the task periods.
The algorithm now estimates the subtask response times by determining the scheduling events that occur during the time window and by applying the scheduling algorithm at each scheduling event to determine the scheduling decision. Note that it is impossible to determine the subtask response times without determining the scheduling events (and the decision made at each event) for algorithms such as DASA and LBESA as their decisions at each event depend on the remaining subtask execution times at the event.

6.3 The Subtask Response Time Analysis Algorithm
The pseudocode of the subtask response time analysis algorithm is shown in Fig. 4.

Fig. 4. The RBA_AnalyzeResponse Procedure.

The procedure RBA_AnalyzeResponse accepts a subtask s, a task period p, a processor q on which the response time of s needs to be determined, and the workload of the subtask as its arguments. It computes the response time of the subtask s during the period p on the processor q. As a byproduct, the procedure determines the response times of all subtasks that are assigned to processor q. It then compares the subtasks' response times with the subtasks' deadlines. If all subtasks satisfy their deadlines, the procedure returns the response time of the subtask s. If any subtask is found to miss its deadline, the algorithm returns a "failure" value, indicating that replicating subtask s on processor q will either not satisfy the deadline of s or affect the timeliness of higher benefit tasks.
Note that, whenever the procedure RBA_AnalyzeResponse is invoked for a subtask s for a processor q, all existing subtasks on q will belong to higher benefit tasks than the task of s, since RBA* allocates replicas to tasks in decreasing order of their benefits.

6.4 Determining Number of Subtask Replicas and Their Processors
To determine the number of replicas that are needed for a subtask and their processors, RBA* first analyzes the response time of the subtask on its current processor. If the subtask response time is found to be less than the subtask deadline and the timeliness of subtasks of higher benefit tasks are not found to be affected, the algorithm concludes that the single replica of the subtask on its current processor is enough to satisfy the subtask deadline.
On the other hand, if the subtask response time is found to be larger than the subtask deadline or if executing the subtask on its current processor is found to cause one or more subtasks of higher benefit tasks to miss their deadlines, RBA* reduces the workload of the subtask by replication. The algorithm considers a second replica for the subtask which will reduce the workload of the existing replica by half.
To determine the processor for executing the second replica, RBA* analyzes the subtask response time for processing half the subtask workload on each of the processors, excluding the processor of the first replica, using the subtask response time analysis algorithm described in Section 6.3. The processor that gives the shortest response time is selected for the second replica. The algorithm now recomputes the response time of the first replica (on its processor) for processing half the
workload since the second replica will now process the
other half of the workload. If the response times of both the
replicas are found to be less than the subtask deadline and
the execution of the replicas on their respective processors
are not found to affect the timeliness of higher benefit tasks,
then two replicas are considered to be sufficient by the
algorithm. Otherwise, the algorithm considers a third
replica and repeats the process.
RBA* repeats the process until each replica is able to
satisfy the subtask deadline. Note that, as the number of
replicas increases, the workload share of each replica will be
reduced.
Fig. 5. The RBA*_DetermineReplicasProcessors Procedure.
Furthermore, every time the algorithm considers adding a new replica, it checks whether the existing ones
will be able to satisfy their deadlines under the reduced
workload without affecting the timeliness of higher benefit
tasks. If the algorithm determines that executing the
maximum possible number of replicas for a subtask (which
is equal to the number of processors in the system for
exploiting maximum concurrency) does not satisfy the
subtask deadline, it assumes that the subtask and, hence,
the task, will miss their deadlines. Then, RBA* deallocates
all replicas allocated to the task as discussed in Section 6.
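The replication loop described above can be summarized in a short sketch. The following is an illustrative reconstruction, not the paper's pseudocode (which is given in Fig. 5); the helper analyze_response is a hypothetical stand-in for RBA_AnalyzeResponse and is assumed to return a response time for a given workload share on a given processor, or None when that placement would hurt higher-benefit subtasks.

```python
def determine_replicas(current_processor, workload, processors, deadline, analyze_response):
    """Illustrative sketch of RBA*'s replica determination; analyze_response(q, share)
    stands in for RBA_AnalyzeResponse (assumed interface, not from the paper)."""
    replicas = [current_processor]                  # the first replica stays on its current processor
    while True:
        share = workload / len(replicas)            # the workload is split evenly among the replicas
        rts = [analyze_response(q, share) for q in replicas]
        if all(rt is not None and rt <= deadline for rt in rts):
            return replicas                         # this many replicas satisfy the subtask deadline
        if len(replicas) == len(processors):
            return None                             # even p replicas do not help: deallocate the task
        share = workload / (len(replicas) + 1)      # share each replica would get after adding one more
        unused = [q for q in processors if q not in replicas]

        def rt_or_inf(q):
            rt = analyze_response(q, share)
            return rt if rt is not None else float("inf")

        replicas.append(min(unused, key=rt_or_inf))  # place the new replica where it responds fastest
```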
Fig. 5 shows the pseudocode of the algorithm that
determines the number of subtask replicas and their
processor assignment. The procedure RBA*_DetermineRepli-
casProcessors accepts a subtask s, a period i, an anticipated
workload l during the period i, and determines the number
of replicas for s and their processors. Recall that the
procedure RBA*_Algorithm invokes the procedure
RBA*_DetermineReplicasProcessors for all subtask executions
during the future adaptation window.
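The top-level loop of RBA* described here, i.e., processing tasks in decreasing benefit order and invoking the replica-determination procedure per subtask and per period, can be sketched as follows. This is a sketch under assumptions: tasks are assumed to expose a benefit value, a subtask list, and a hypothetical periods_in(window) helper, and determine_replicas_for stands in for RBA*_DetermineReplicasProcessors.

```python
import heapq

def rba_star(tasks, window, determine_replicas_for):
    """Sketch of the top-level allocation loop (assumed interfaces, not the paper's code)."""
    # max-heap on task benefit, built with negated keys since heapq is a min-heap
    heap = [(-task.benefit, idx, task) for idx, task in enumerate(tasks)]
    heapq.heapify(heap)                       # O(n) heap construction
    while heap:
        _, _, task = heapq.heappop(heap)      # Extract-Max: next highest benefit task, O(log n)
        for period in task.periods_in(window):
            for subtask in task.subtasks:     # subtasks are handled in precedence order
                determine_replicas_for(subtask, period)
```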
7 WORST-CASE COMPLEXITY OF RBA*
To analyze the worst-case computational complexity of
RBA*, we consider n tasks, p processors, a maximum of m subtasks for a task (thus, in the worst-case, all n tasks will
have m subtasks), a smallest task period of k (thus, in the
worst-case, all n tasks will have a period k), and an
adaptation window of length W.
The worst-case complexity of the RBA*_Algorithm procedure
depends upon the complexity of the procedure
RBA*_DetermineReplicasProcessors. The complexity of
RBA*_DetermineReplicasProcessors depends on the procedure
RBA_AnalyzeResponse that determines the response
time of a subtask.
We now discuss the complexity of each of these
procedures in the subsections that follow.
7.1 Complexity of RBA_AnalyzeResponse
The complexity of RBA_AnalyzeResponse consists of two
components. First, given a subtask and a processor on
which the subtask response time needs to be determined,
procedure RBA_AnalyzeResponse constructs an arrival list
for all subtasks on the processor. Second, for each subtask in
the constructed arrival list, the procedure then invokes the
scheduler for each of its arrivals and departures within the
length of the adaptation function. Thus, the cost of
RBA_AnalyzeResponse is simply the sum of the cost of
constructing the arrival list and the cost of invoking the
scheduler for each scheduling event, i.e., for each arrival
and termination event of a subtask.
7.1.1 Arrival List Construction
Since a subtask can be replicated for a maximum of p times
and since RBA* does not assign two or more replicas of the
same subtask on the same processor, the maximum number
of subtask replicas that can be assigned by RBA* to a
processor is mn, i.e., all m subtasks of each of the n tasks. Each
of the mn subtasks can arrive during all the periods of its
parent task throughout the adaptation function window W.
The largest possible number of arrivals of a subtask is
therefore ⌈W/k⌉. Thus, the largest arrival list will have a size of mn⌈W/k⌉.
To construct the arrival list, RBA_AnalyzeResponse determines the arrival time of each replica on the processor. To determine the arrival time of a replica, RBA_AnalyzeResponse examines each predecessor subtask and message of the replica. Thus, the cost of determining the arrival time of a single replica involves examining d predecessor subtasks and d predecessor messages, incurring a total cost of O(d), where d is the number of the predecessor subtasks of the subtask under consideration.
Once the arrival time of a subtask is determined, the procedure RBA_AnalyzeResponse inserts the subtask arrival time into a heap using a key value that corresponds to the subtask arrival time. Recall that the largest list size was determined to be mn⌈W/k⌉. The insertion cost for a heap is O(log(mn⌈W/k⌉)). Thus, the cost of constructing the ordered arrival list for all the mn⌈W/k⌉ subtask arrivals on a processor is given by mn⌈W/k⌉(O(d) + log(mn⌈W/k⌉)). This cost becomes O(mdn⌈W/k⌉ + mn⌈W/k⌉ log(mn⌈W/k⌉)).
7.1.2 Response Time Analysis
The response time analysis is performed by invoking the scheduler at each subtask arrival and departure. The cost of invoking the scheduler is obviously dependent on the scheduling algorithm employed. If we consider the DASA/ND algorithm (i.e., DASA when subtasks have no dependencies), then the cost of computing a scheduling decision, given r processes in the ready queue of the processor, is O(r²) [15], [13].
Since we can have up to mn⌈W/k⌉ arrivals on a processor in the worst-case, the cost of invoking DASA/ND for a single scheduling event is O(m²n²⌈W/k⌉²). Before invoking DASA/ND, the next subtask arrival must be extracted from the heap, which costs O(log(mn⌈W/k⌉)). Thus, the sequence of extracting the next subtask arrival from the heap and invoking the DASA/ND algorithm is repeated 2mn⌈W/k⌉ times. The total cost of such scheduler invocations becomes 2mn⌈W/k⌉(log(mn⌈W/k⌉) + m²n²⌈W/k⌉²).
The complexity of RBA_AnalyzeResponse is the sum of the cost of the arrival list construction and the cost of the scheduler invocations. This is given by O(mdn⌈W/k⌉ + mn⌈W/k⌉ log(mn⌈W/k⌉)) + 2mn⌈W/k⌉(log(mn⌈W/k⌉) + m²n²⌈W/k⌉²), which is O(m³n³⌈W/k⌉³).
7.2 Complexity of RBA*_DetermineReplicasProcessors
The procedure RBA*_DetermineReplicasProcessors determines the number of replicas and processors needed for a given subtask in an iterative manner by starting with a single replica and incrementing until the maximum possible number of replicas (equal to the number of processors, p) is reached. During each iterative step, the procedure invokes RBA_AnalyzeResponse a maximum of p times to determine the response time of the replica (considered in the step) on all p processors. Thus, the procedure RBA*_DetermineReplicasProcessors invokes the procedure RBA_AnalyzeResponse p² number of times and has a complexity of p² · O(m³n³⌈W/k⌉³), which is O(p²m³n³⌈W/k⌉³).
7.3 Complexity of RBA*_Algorithm
The complexity of the RBA*_Algorithm has two components. First, the RBA*_Algorithm constructs a heap that uses task benefits as the key values. Second, it invokes RBA*_DetermineReplicasProcessors for each subtask (of each task) and for each period.
The cost of building the heap for n tasks is O(n).
Given n tasks, a maximum of m subtasks per task, and a minimum period of k for each task, RBA*_DetermineReplicasProcessors is invoked mn⌈W/k⌉ times by the RBA*_Algorithm. Before invoking RBA*_DetermineReplicasProcessors, the next highest benefit task needs to be extracted from the heap. The cost of an "Extract-Max" heap operation is O(log n). Therefore, the cost of the second component is mn⌈W/k⌉ · (O(log n) + p²m³n³⌈W/k⌉³).
The worst-case complexity of RBA*_Algorithm is the sum of the cost of the two components, which is O(n) + O(p²m⁴n⁴⌈W/k⌉⁴). This becomes O(p²m⁴n⁴⌈W/k⌉⁴).
8 AMORTIZED COMPLEXITY OF RBA_ANALYZERESPONSE
We now analyze the amortized complexity of the RBA_AnalyzeResponse procedure since it is the most computationally intensive component of the RBA* algorithm. We consider the amortized complexity to get a more "realistic" sense of the cost of the RBA_AnalyzeResponse procedure and for comparing this cost with that of the counterpart procedure of the OBA algorithm.
Recall that the RBA_AnalyzeResponse procedure invokes the procedure LocalScheduler (which represents the underlying scheduling algorithm) for determining scheduler-decisions (see Fig. 4). In analyzing the amortized complexity of RBA_AnalyzeResponse, we consider DASA/ND as the underlying scheduling algorithm.
Given r processes in the ready-queue, the total cost of the DASA/ND algorithm is O(r²) [15], [13].
As discussed in Section 7, the cost of RBA_AnalyzeResponse consists of two components: 1) constructing the heap with subtask arrival times as key values and 2) analyzing subtask response times.
Given N subtask arrivals, the total number of steps required for constructing the subtask arrival time heap is Σ_{k=1}^{N} log k steps because it costs O(log k) to insert the kth element in the heap.
For analyzing subtask response times, DASA/ND is called 2N times. The worst-case occurs when none of the N processes terminate until all of them arrive. In such a situation, the queue size increases whenever a process arrives until it becomes N. Then, the first termination occurs. At that
time, DASA/ND will be invoked for N processes in the ready queue. Then, the number of processes in the queue decreases until the queue becomes empty.
The cost of extracting the kth element and then invoking DASA/ND with no process leaving the ready queue until the N processes arrive is given by Σ_{k=1}^{N} (log(N − k) + k²), where k² is the cost of invoking DASA/ND for k processes and log(N − k) is the cost of extracting the kth element from the heap. The cost of invoking DASA/ND at each of the terminations is Σ_{k=N}^{1} k² steps. Thus, the total number of steps performed by RBA_AnalyzeResponse is Σ_{k=1}^{N} (log(N − k) + k²) + Σ_{k=N}^{1} k².
The amortized complexity of RBA_AnalyzeResponse is given by (1/N) times the total number of steps performed. This becomes
(1/N) [ Σ_{k=1}^{N} log(N − k) + Σ_{k=1}^{N} k² + Σ_{k=N}^{1} k² ].
The dominant term in the numerator here is Σ_{k=1}^{N} k², which is O(N³). Thus, the amortized complexity of RBA_AnalyzeResponse is given by (1/N) · O(N³), which is O(N²).
9 THE OBA ALGORITHM: HEURISTICS AND RATIONALE
A careful observation of the RBA* algorithm reveals that the algorithm is computationally complex. In fact, the procedure that costs RBA* the most is the subtask response time analysis procedure, i.e., RBA_AnalyzeResponse. Recall that RBA_AnalyzeResponse analyzes the response time of a subtask on a given processor and for a given workload by constructing an arrival list for all subtasks on the processor and invoking the scheduling algorithm for each subtask arrival and completion during the length of the adaptation window. Here, the complexity of the procedure is dominated by the complexity of invoking the scheduling algorithm for all the scheduling events, i.e., O(m³n³⌈W/k⌉³).
Thus, we now would like to design a much faster algorithm that achieves the same objectives as that of RBA*. A careful observation again reveals that we can avoid the "scheduler-execution" performed by RBA*. Instead, we can conduct an overload test on the processor. RBA*'s objective of invoking the scheduler is to determine the subtask feasibility, which is done by determining the subtask response time and comparing the response time against the subtask deadline. We can also determine the subtask feasibility by doing an overload test on the processor.
If a processor is underloaded, then, clearly, the subtask must be able to complete its execution by its deadline as best-effort real-time scheduling algorithms "mimic" EDF during underloaded situations, where EDF guarantees all deadlines. So, if a processor is underloaded, we can conclude that the processor is a "good" candidate for the subtask for the workload that the subtask has to process. On the other hand, if a processor is overloaded, then it implies that one or more subtasks will miss their deadlines. We can then reduce the workload share of the subtask by considering a replica for the subtask. The subtask feasibility can then again be determined through the overload test and the whole procedure can be repeated in a way similar to that of RBA*. We call this new algorithm Overload Analysis-Based Best-Effort Resource Allocation (or OBA).
Given N subtask arrivals on a processor that are deadline-ordered, we can perform the overload test in O(N) time [23], [15]. Thus, the cost of performing the overload test on a processor, given mn⌈W/k⌉ subtask arrivals on the processor in the worst-case, is given by O(mn⌈W/k⌉), assuming that we are given a deadline-ordered subtask arrival list. Recall from Section 7 that the complexity of RBA_AnalyzeResponse includes 1) the complexity of arrival list construction and 2) the complexity of response time analysis. Thus, the complexity of OBA's counterpart procedure to RBA_AnalyzeResponse becomes equal to the sum of the cost of constructing the deadline-ordered subtask list and the cost of performing the overload test.
We can easily modify the procedure RBA_AnalyzeResponse so that it constructs the subtask arrival list that is ordered by deadlines instead of arrival times at a cost of O(mn⌈W/k⌉ log(mn⌈W/k⌉)). Thus, the cost of OBA's version of the RBA_AnalyzeResponse procedure becomes O(mn⌈W/k⌉ log(mn⌈W/k⌉)). This cost will significantly speed up OBA with respect to RBA*, which had a cost of O(m³n³⌈W/k⌉³) for the procedure RBA_AnalyzeResponse when DASA/ND is used as the underlying scheduling algorithm at all end-host processors.2
2. We analyze OBA's entire complexity later in Section 10.
Thus, at the highest level of abstraction, OBA follows the exact same steps as that of RBA*. The pseudocode of OBA at the highest level of abstraction is shown in Fig. 6. OBA differs from RBA* only in the way in which it determines the number of replicas needed for each subtask (of each task) and their processor assignment.
Fig. 6. The OBA Algorithm.
We now discuss how OBA performs the overload test and how it determines the number of subtask replicas and their processor assignment in the subsections that follow.
9.1 Overload Analysis
To determine whether the presence of a subtask on a processor will result in an overload on the processor, OBA first constructs a list of subtask arrival times similar to the one constructed by RBA*'s RBA_AnalyzeResponse procedure (Section 6.2), except that the list is deadline-ordered. As discussed in Section 6.2, the algorithm constructs a deadline-ordered list by inserting a subtask arrival event into an integer-ordered list at the subtask deadline position once it determines the arrival time of a subtask.
Once the deadline-ordered arrival list is constructed, OBA examines the subtask deadlines in the arrival list in increasing order of deadlines. For each subtask deadline di, the algorithm computes the sum of the remaining execution times of all subtasks having deadlines less than di and compares the sum against di. If the sum is greater than the deadline di for any deadline, then there exists an overload on the processor as it indicates that there exists at least one subtask on the processor that is unable to complete before its deadline (i.e., the subtask demand exceeds the available processor-time). If the sum is less than the subtask deadline for each deadline, then the processor is underloaded as all subtasks can complete before their deadlines (i.e., the total processor-time demand of the subtasks is less than the available processor time).
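The overload test just described amounts to an EDF-style processor demand check over the deadline-ordered list. The sketch below is an illustration only, not the OBA_OverloadCheck pseudocode of Fig. 7; it assumes each arrival is a (deadline, remaining execution time) pair, sorts the list for self-containment where OBA would use a deadline-ordered heap, and includes each subtask's own remaining time in the demand, a slight variant of the wording above.

```python
def overload_check(arrivals):
    """Return True if the given arrivals overload the processor.

    `arrivals` is assumed to be a list of (absolute_deadline, remaining_exec_time)
    pairs; this is a sketch of the demand test, not the paper's procedure.
    """
    demand = 0.0
    for deadline, exec_time in sorted(arrivals):
        demand += exec_time          # cumulative execution demand up to this deadline
        if demand > deadline:        # some subtask cannot finish by its deadline
            return True              # overload detected
    return False                     # underloaded: all deadlines can be met
```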
Fig. 7 shows the pseudocode of OBA's overload-test
procedure called OBA_OverloadCheck, which determines
whether executing a subtask replica s on a processor q
during the subtask period p will cause an overload situation
on q. The procedure starts by constructing the subtask
arrival list similar to the way RBA_AnalyzeResponse constructs
its arrival list. After the list is constructed, the
overload test is run in a single pass over the list. The
procedure returns a SUCCESS value if no overload is
detected. Otherwise, it returns a FAILURE value.
9.2 Determining the Number of Subtask Replicas
and Their Processors
To determine the number of replicas that are needed for a
subtask and their processors, OBA first checks whether there
is an overload on the processor where the subtask is currently
assigned. If no overload is detected, the algorithm concludes
that the (single replica of the) subtask can process the entire
subtask workload on its current processor, complete its
execution before the subtask deadline (since no overload is
detected on the processor, all subtasks must be able to
complete by their deadlines), and thus cannot affect the
timeliness of higher benefit tasks.3 Thus, by detecting an
underload on a processor, OBA makes the same conclusions
as that made by RBA* regarding subtask feasibility and
interference on timeliness of higher benefit tasks.
On the other hand, if an overload is detected on the
processor of the subtask, OBA reduces the workload of the
subtask by replication. The algorithm considers a second
replica for the subtask on a processor that does not have the
existing subtask replica assigned to it. Note that by
considering a second replica for the subtask, we reduce
the workload share of each of the two replicas and thereby
reduce the execution times of the replicas. This may resolve
the overload situation on the processors of the replicas.
The algorithm now tests for overload on the processors.
If no overload is detected on both of the processors of the
replicas, the algorithm concludes that two replicas are
sufficient to satisfy the subtask deadline. Otherwise, OBA
considers yet another replica for the subtask.
3. Note that, whenever we consider the execution of a subtask replica s
on a processor q and test for overload on q, all existing subtasks on q must
belong to higher benefit tasks than the task of s since OBA allocates replicas
to tasks in decreasing order of their benefits.
The algorithm thus repeats the process of replicating and
overload testing until either 1) no overload is detected on
any of the processors of the subtask replicas or 2) the
maximum possible number of replicas for the subtask
(equal to the number of processors in the system, for
exploiting maximum concurrency) is reached. If executing
the maximum number of replicas for a subtask does not
resolve the overload situation and thus does not satisfy the
subtask deadline, then OBA deallocates the task, as
discussed in Section 6.
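OBA's replicate-and-retest loop can be sketched in the same style as the RBA* sketch given earlier. This is an illustrative reconstruction rather than the pseudocode of Fig. 8: causes_overload(processor, share) is a hypothetical stand-in for OBA_OverloadCheck, and the choice of which unused processor hosts the new replica is simplified here.

```python
def oba_determine_replicas(current_processor, workload, processors, causes_overload):
    """Sketch of OBA's replica determination (assumed interfaces, not the paper's code)."""
    replicas = [current_processor]
    while True:
        share = workload / len(replicas)
        if not any(causes_overload(q, share) for q in replicas):
            return replicas                       # no overload anywhere: this many replicas suffice
        if len(replicas) == len(processors):
            return None                           # maximum replication reached: deallocate the task
        share = workload / (len(replicas) + 1)    # share each replica gets after adding one more
        unused = [q for q in processors if q not in replicas]
        ok = [q for q in unused if not causes_overload(q, share)]
        replicas.append(ok[0] if ok else unused[0])  # prefer an unused processor that stays underloaded
```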
Fig. 8 shows the pseudocode of the procedure OBA_
DetermineReplicasProcessors that determines the number of
replicas necessary for each subtask and their processors.
This procedure calls the procedure OBA_OverloadCheck
(Fig. 7) to test processor overloads during the resource
allocation process. Recall that the main procedure of the
OBA algorithm, OBA_Algorithm, invokes the procedure OBA_DetermineReplicasProcessors for each subtask
execution during the future time window.
The analysis of the worst-case computational complexity of
OBA is similar to that of RBA*. OBA's complexity depends
upon the complexity of the procedure OBA_Determine
ReplicasProcessors. The complexity of OBA_DetermineReplicas
Processors depends upon the complexity of the procedure
OBA_OverloadCheck.
As discussed in Section 9, the complexity of OBA_
OverloadCheck is equal to the sum of the cost of constructing
the heap using subtask deadlines as key values and the cost
of performing the overload test. The cost for constructing
the subtask-deadline heap is O(mn⌈W/k⌉ log(mn⌈W/k⌉)) since we use the same approach used by procedure RBA_AnalyzeResponse. Note that the term mdn⌈W/k⌉ does
not appear here because the algorithm does not need to
compute the arrival times of the subtasks. It only needs the
absolute deadlines to perform the overload test.
Given the deadline heap, OBA tests for overload by
making a single pass. Each subtask deadline is examined in
its increasing order and the cumulative sum of the
remaining execution times of all subtasks with lesser
deadlines is compared to the current deadline. It costs
log(mn⌈W/k⌉) to extract and delete the earliest deadline subtask from the heap. Since this process is repeated for all the mn⌈W/k⌉ nodes of the heap, the cost of the overload
test is O(mn⌈W/k⌉ log(mn⌈W/k⌉)).
Fig. 7. The OBA_OverloadCheck procedure.
Thus, the total cost of OBA_OverloadCheck is given by O(mn⌈W/k⌉ log(mn⌈W/k⌉) + mn⌈W/k⌉ log(mn⌈W/k⌉)), which is O(mn⌈W/k⌉ log(mn⌈W/k⌉)).
The procedure OBA_DetermineReplicasProcessors determines the number of replicas and their processors that are needed for a given subtask in an iterative manner by starting with a single replica and incrementing until the maximum possible number of replicas (equal to the number of processors, p) is reached. During each iterative step, the procedure invokes OBA_OverloadCheck a maximum of p times to test for overload on all p processors, for the replica considered in the step. Thus, the procedure OBA_DetermineReplicasProcessors invokes the procedure OBA_OverloadCheck p² number of times and has a complexity of p² · O(mn⌈W/k⌉ log(mn⌈W/k⌉)), which is O(p²mn⌈W/k⌉ log(mn⌈W/k⌉)).
Fig. 8. The OBA_DetermineReplicasProcessors procedure.
The cost of the main procedure OBA_Algorithm has two components. First, OBA_Algorithm constructs a heap using task benefits as key values. Second, it invokes OBA_DetermineReplicasProcessors for each subtask (of each task) and for each period.
The cost of building a heap for n tasks is O(n).
Given n tasks, a maximum of m subtasks per task, and a minimum task period of k, the procedure OBA_DetermineReplicasProcessors is invoked mn⌈W/k⌉ times by OBA_Algorithm. Before invoking OBA_DetermineReplicasProcessors, the next highest benefit task needs to be extracted from the heap. The cost of an "Extract-Max" heap operation is O(log n). Therefore, the cost of the second component becomes mn⌈W/k⌉ · (O(log n) + p²mn⌈W/k⌉ log(mn⌈W/k⌉)).
The worst-case complexity of OBA_Algorithm is the sum of the cost of the two components, which is O(n) + O(p²m²n²⌈W/k⌉² log(mn⌈W/k⌉)). This becomes O(p²m²n²⌈W/k⌉² log(mn⌈W/k⌉)).
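To get a feel for the gap between the two worst-case bounds derived above, the snippet below evaluates the dominant terms of both expressions for one parameter set. The parameter values are arbitrary illustrations, not taken from the paper's experiments.

```python
import math

def rba_star_bound(p, m, n, W, k):
    c = math.ceil(W / k)
    return p**2 * m**4 * n**4 * c**4          # dominant term of O(p^2 m^4 n^4 ceil(W/k)^4)

def oba_bound(p, m, n, W, k):
    c = math.ceil(W / k)
    # dominant term of O(p^2 m^2 n^2 ceil(W/k)^2 log(mn ceil(W/k)))
    return p**2 * m**2 * n**2 * c**2 * math.log2(m * n * c)

# Arbitrary illustrative values: 8 processors, 5 subtasks/task, 20 tasks, window 100, min period 10.
p, m, n, W, k = 8, 5, 20, 100, 10
print(rba_star_bound(p, m, n, W, k) / oba_bound(p, m, n, W, k))  # ratio of the dominant terms
```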
We now analyze the amortized complexity of OBA's
OBA_OverloadCheck procedure. Recall that the OBA_
OverloadCheck procedure is OBA's counterpart procedure
to RBA*'s RBA_AnalyzeResponse procedure, which was
found to be the most computationally expensive
component of RBA*.
The cost of OBA_OverloadCheck consists of two compo-
nents: 1) constructing the heap with subtask deadlines as key values and 2) overload testing.
Given N subtask arrivals, the total number of steps required for constructing the subtask-deadline heap is Σ_{k=1}^{N} log k.
The overload testing process takes a total of N iterations. During each iteration, the next earliest deadline subtask needs to be extracted from the heap, which costs O(log(N − k)). Thus, the overload testing component costs Σ_{k=1}^{N} log(N − k).
The amortized complexity of OBA_OverloadCheck is therefore (1/N) times the total number of steps performed. This becomes (1/N)(Σ_{k=1}^{N} log k + Σ_{k=1}^{N} log(N − k)). Note that both terms in the numerator yield O(N log N). Thus, the amortized complexity of OBA_OverloadCheck is O(N log N)/N = O(log N).
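The amortized argument above can be checked numerically. The snippet below sums the per-operation heap costs for a few values of N and divides by N, showing growth proportional to log N; it is an illustration of the bound, not a measurement of the implemented procedure.

```python
import math

def amortized_overload_check_cost(N):
    insert = sum(math.log2(k) for k in range(2, N + 1))          # heap construction: sum of log k
    extract = sum(math.log2(N - k) for k in range(1, N - 1))     # overload pass: sum of log(N - k)
    return (insert + extract) / N                                # amortized cost per arrival

for N in (10**2, 10**3, 10**4):
    print(N, round(amortized_overload_check_cost(N), 2), round(math.log2(N), 2))
```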
We thus note that OBA is faster than RBA*. Though OBA
is faster than RBA*, we hypothesize that OBA may perform
worse than RBA*, especially during overload situations, conditions we are clearly interested in due to the
asynchronous nature of the applications that we consider.
Our hypothesis is based on the fact that response times
of subtasks accurately match the subtask behavior under all
situations. Thus, RBA* exploits this knowledge and
determines resource allocations that accurately match the
application-needs under all situations.
OBA, on the other hand, determines allocations by
identifying overloaded processors and avoiding such
processors. Thus, if there are no underloaded processors,
the algorithm stops allocating resources and proceeds to the
next task adaptation period. This can cause the algorithm to
effectively allocate resources for a smaller range of workload
situations than that of RBA*.
In experimentally evaluating RBA* and OBA, our goal is to
determine:
1. how RBA* performs under different best-effort real-time
scheduling algorithms (for process scheduling
and packet scheduling) such as DASA, RED, LBESA,
and RHD;
2. the relative performance of RBA* and OBA; and
3. how RBA* and OBA perform when the anticipated
workloads specified using adaptation functions
differ from the actual workloads.
Fig. 9. Performance of RBA* under DASA and RED schedulers and increasing ramp/ramp workloads. (a) Aggregate accrued benefit. (b) Missed
deadline ratio.
We conducted application-driven simulation studies to
evaluate the performance of RBA* and OBA. Details of
the application parameters used in our experiments were
derived from the DynBench real-time benchmark described
in [28].
We now discuss the experiments and the results in the
subsections that follow.
12.1 Performance of RBA* under Different
Scheduling Algorithms
To evaluate the performance of RBA* under different
scheduling algorithms, we considered two adaptation
functions that specified two workload patterns: 1) an
increasing ramp periodic workload with an increasing
ramp aperiodic workload, denoted as ?ramp/ramp? work-
load, and 2) a constant periodic workload with an
increasing ramp aperiodic workload, denoted as ?con-
stant/ramp? workload.
Recall that the workload of a periodic task during a task
period is the number of data objects generated during the
period. The workload of an aperiodic task during a period of
its triggering periodic task is the number of triggering events
generated by its triggering periodic task during the period.
To evaluate the performance of RBA* under the ramp/
ramp workload, we first defined a baseline ramp/ramp
adaptation function. The baseline ramp/ramp function is
defined by a particular slope and a window length, thus
defining a maximum workload for all the periodic and
aperiodic tasks for the function. We then conducted an
experiment for the baseline ramp/ramp function and
measured the total benefit accrued by the execution of all
tasks and the missed deadline ratio under RBA* during the
experiment, with DASA and RED as the underlying
schedulers. This constituted a single data point.
The baseline experiment was then repeated by increasing
the slope of the baseline ramp/ramp function and thus
generating ?increasing ramp/ramp workloads.? For each
such experiment, we measured the aggregate accrued benefit
and the missed deadline ratio. The results of the experiments
are shown in Fig. 9. Note that each data point in the plots was
obtained by a single experiment. Thus, the maximum
workload of the individual experiments is shown on the
x-axis of the plots. The aggregate accrued benefit is shown in
Fig. 9a and the missed deadline ratio is shown in Fig. 9b.
Fig. 10 shows the performance of RBA* under DASA and
RED, under increasing const/ramp workloads. Again, each
data point in the plots was obtained by a single experiment
and the maximum workload of the individual experiments
is shown on the x-axis of the figures. The aggregate accrued
benefit is shown in Fig. 10a and the missed deadline ratio is
shown in Fig. 10b.
We also measured the aggregate benefit and missed
deadline ratio of RBA* under increasing ramp/ramp and
const/ramp workloads with LBESA and RHD as the
underlying scheduling algorithms. We observed that the
performance of RBA* under LBESA and under RHD was
very close to that under RED. Therefore, for clarity, we omit
the performance of RBA* under LBESA and under RHD
from the figures.
From Fig. 9 and Fig. 10, we observe that RBA* under
DASA produces higher aggregate benefit and lower missed
deadline ratio than that under RED, LBESA, and RHD.
Thus, the experimental results illustrate the superiority of
RBA* under the DASA algorithm.
We believe that this is due to two reasons:
1. RBA* determines its resource allocation decisions by
significantly relying on the behavior of the underlying
scheduling algorithm. For example, RBA*
computes allocations by determining subtask response
times, which clearly depends upon how the
scheduler makes scheduling decisions. Thus, we
conjecture that the performance of RBA* depends
upon, to a large extent, the performance of the
underlying scheduling algorithm. To verify this
hypothesis, we conducted several experiments to
study the relative performance of DASA, RED, and
RHD [13].4 The experiments revealed that DASA
outperforms RED and RHD, thereby validating our
intuition. Thus, RBA* performs better under DASA
than under other scheduling algorithms.
2. RBA* ?mimics? DASA at a higher level of abstraction
(for resource allocation). For example, RBA*
allocates resources to tasks and tests the feasibility of
tasks in decreasing order of task benefits. DASA also
examines process-phases (or subtasks) and tests
schedule-feasibility in decreasing order of benefit
densities of process phases. This symmetry in
behavior contributes to the better performance of
RBA* under DASA than under other algorithms.
4. DASA is shown to outperform EDF and LBESA in [15].
Fig. 10. Performance of RBA* under DASA and RED schedulers and increasing const/ramp workloads. (a) Aggregate accrued benefit. (b) Missed
deadline ratio.
Fig. 11. Performance of RBA* and OBA under DASA and increasing ramp/ramp workloads. (a) Aggregate accrued benefit. (b) Missed deadline ratio.
12.2 Relative Performance of RBA* and OBA
Since DASA performed the best among all the scheduling
algorithms that we considered, we compared the performance
of RBA* and OBA only under DASA. The same
experiments of RBA* described in Section 12.1 were
repeated for OBA using DASA as the underlying scheduling
algorithm at the processors and at the switch.
Fig. 11 and Fig. 12 show the performance of OBA-DASA
and RBA*-DASA under increasing ramp/ramp workloads
and const/ramp workloads, respectively. We observe that
RBA*-DASA produces higher aggregate benefit and lower
missed deadline ratio than OBA-DASA.
The results shown in Fig. 11 and Fig. 12 thus validate our
hypothesis (described in Section 11) that although OBA is
faster than RBA*, OBA may perform worse than RBA*.
12.3 Performance of RBA* and OBA under Error in
Anticipated Workloads
To study how RBA* and OBA perform when the actual
workloads differ from the anticipated workloads specified
by the adaptation functions, we define a relative load error
term. The relative load error term is defined as
e_r = (actual load − anticipated load) / anticipated load.
Fig. 13a shows the performance of RBA* under a range of
relative load errors from −0.9 to +0.9, under a fixed anticipated workload. A load error of 0.9 means that the
actual load is 190 percent of the anticipated load. The y-axis
shows the relative change in aggregate benefit. We define the
change in aggregate benefit for a certain value of er as the
difference between the aggregate benefit under this value of
er and the aggregate benefit under zero relative load error.
The relative change in aggregate benefit is defined as the
ratio of the change in aggregate benefit to the aggregate
benefit under zero relative load error.
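The two ratios defined above are straightforward to compute; the snippet below simply restates them, with the numeric inputs being placeholders rather than measured values from the experiments.

```python
def relative_load_error(actual_load, anticipated_load):
    # e_r = (actual load - anticipated load) / anticipated load
    return (actual_load - anticipated_load) / anticipated_load

def relative_change_in_benefit(benefit_at_error, benefit_at_zero_error):
    # ratio of the change in aggregate benefit to the benefit under zero load error
    return (benefit_at_error - benefit_at_zero_error) / benefit_at_zero_error

print(relative_load_error(190.0, 100.0))        # 0.9: actual load is 190 percent of anticipated
print(relative_change_in_benefit(80.0, 100.0))  # -0.2: aggregate benefit dropped by 20 percent
```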
The figure shows that RBA* generally performs better
under error when DASA is used as the underlying
scheduling algorithm than when RED is used. We attribute
this better performance of RBA*-DASA under errors to the
same reasons described in Section 12.1.
Fig. 13b shows how OBA-DASA performs with respect
to RBA*-DASA when the actual workloads differ from the
anticipated workloads. From the figure, we observe that
RBA* performs better under errors in anticipated workloads
than OBA. We regard this better performance of RBA*
under errors as a further validation of our hypothesis
discussed in Section 11.
Fig. 12. Performance of RBA* and OBA under DASA and increasing const/ramp workloads. (a) Aggregate accrued benefit. (b) Missed deadline ratio.
Fig. 13. Effect of error in anticipated load on the performance of RBA* and OBA. (a) RBA* under error. (b) OBA under error.
13 CONCLUSIONS AND FUTURE WORK
In this paper, we present two resource allocation algo-
rithms, called RBA* and OBA, for proactive resource
allocation in asynchronous real-time distributed systems.
The algorithms are proactive in the sense that they allow
user-triggered resource allocation for user-specified, arbitrary,
application workload patterns.
The algorithms consider an application model where
application timeliness requirements are expressed using
Jensen's benefit functions. Further, we propose adaptation
functions to describe the anticipated application workload
during future time intervals. Furthermore, we consider an
adaptation model where subtasks of application tasks are
replicated at runtime for sharing workload increases and a
switched real-time Ethernet network. Given such applica-
tion, adaptation, and system models, our objective is to
maximize aggregate application benefit and minimize
aggregate missed deadline ratio.
In [13], we show this problem to be NP-hard. Thus, RBA*
and OBA heuristically compute near-optimal resource
allocation decisions in polynomial-time. The heuristics
employed by the algorithms include allocating resources to
higher benefit tasks before lower benefit tasks, not allowing
lower benefit tasks to affect timeliness of higher benefit tasks,
and decomposing task-level allocation problem into subtask-
level allocation problems. The algorithms differ in the way
they solve the subtask-level allocation problem.
While RBA* solves the subtask-level allocation problem by analyzing subtask response times, OBA solves the problem by testing processor overloads. RBA* incurs a worst-case computational complexity of O(p²m⁴n⁴⌈W/k⌉⁴) under the DASA scheduling algorithm and an amortized complexity of O(N²) for its most computationally expensive component. OBA, on the other hand, incurs a better worst-case complexity of O(p²m²n²⌈W/k⌉² log(mn⌈W/k⌉)) and an amortized complexity of O(log N) for the procedure that corresponds to RBA*'s most computationally expensive component.
To study the performance of the algorithms, we conduct
benchmark-driven experiments. The experimental results
reveal that RBA* produces higher aggregate benefit and
lower missed deadline ratio when DASA is used for
process-scheduling and packet-scheduling than when other
scheduling algorithms are used. Furthermore, we observe
that RBA* produces higher aggregate benefit and lower
missed deadline ratio than OBA.
Thus, the major contribution of the paper is the RBA*
and OBA algorithms that seek to maximize aggregate
benefit and minimize aggregate missed deadline ratio in
asynchronous real-time distributed systems through proactive
resource allocation. To the best of our knowledge, no prior efforts solve the problem solved by RBA* and OBA.
Several aspects of this work are under further investiga-
tion. RBA* and OBA are centralized resource allocation
algorithms, which may potentially affect their scalability.
Furthermore, the adaptation functions that we propose are
deterministic in the sense that the user anticipates the future
workload without uncertainties (though we experimentally
study the algorithm's performance in the presence of
uncertainties). It may be possible to define adaptation
functions in a probabilistic setting, thereby enabling
probabilistic decision-making for adaptation. Furthermore,
fault tolerance is a key requirement in asynchronous real-time
distributed systems, besides timeliness. All these
issues are currently being studied.
ACKNOWLEDGMENTS
This work was supported by the US Office of Naval Research
under Grant N00014-99-1-0158 and N00014-00-1-0549.
--R
IEEE Trans.
US Naval Surface Warfare Center
Hard Real-Time Computing Systems: Predictable Scheduling Algorithms and Applications
--TR
Improved algorithms for synchronizing computer network clocks
Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment
Resource Management Middleware for Dynamic, Dependable Real-Time Systems
Engineering Dynamic Real-Time Distributed Systems
Hard Real-Time Computing Systems
Guest Editors'' Introduction to Special Section on Asynchronous Real-Time Distributed Systems
Deadline Assignment in a Distributed Soft Real-Time System
An Adaptive, Distributed Airborne Tracking System ("process the Right Tracks at the Right Time")
End-Host Architecture for QoS-Adaptive Communication
A Dynamic Real-time Benchmark for Assessment of QoS and Resource Management Technology
On Quality of Service Optimization with Discrete QoS Options
An Automated Profiling Subsystem for QoS-Aware Services
On adaptive resource allocation for complex real-time applications
A Dynamic Quality of Service Middleware Agent for Mediating Application Resource Usage
Specification and Modeling of Dynamic, Distributed Real-Time Systems
decision-making for real-time scheduling
Scheduling dependent real-time activities
On quality of service management (resource allocation)
--CTR
robust resource allocation in dynamic real-time systems, Journal of Systems and Software, v.77 n.1, p.55-65, July 2005
Peng Li , Binoy Ravindran, Proactive QoS negotiation in asynchronous real-time distributed systems, Journal of Systems and Software, v.73 n.1, p.75-88, September 2004
Peng Li , Binoy Ravindran, Efficiently tolerating failures in asynchronous real-time distributed systems, Journal of Systems Architecture: the EUROMICRO Journal, v.50 n.10, p.607-621, October 2004
Peng Li , Binoy Ravindran, Fast, Best-Effort Real-Time Scheduling Algorithms, IEEE Transactions on Computers, v.53 n.9, p.1159-1175, September 2004
Lin Wujuan , Bharadwaj Veeravalli, An object replication algorithm for real-time distributed databases, Distributed and Parallel Databases, v.19 n.2-3, p.125-146, May 2006 | switched real-time Ethernet;asynchronous real-time distributed systems;distributed real-time systems;best-effort resource allocation;benefit functions;best-effort real-time scheduling;proactive resource allocation;adaptive resource allocation |
627258 | Cycle-Time Properties of the Timed Token Medium Access Control Protocol. | AbstractWe investigate the timing properties of the timed token protocol that are necessary to guarantee synchronous message deadlines. A tighter upper bound on the elapse time between the token's lth arrival at any node i and its (l+v)th arrival at any node k is found. A formal proof to this generalized bound is presented. | Introduction
In a distributed system for hard real-time applications, communication through message
exchange between tasks residing on different nodes must happen in bounded time, in order to
guarantee that end-to-end deadline requirements are met. This motivates the use of medium
access control (MAC) communication protocols suitable for hard real-time communications,
which provide the guaranteed connection and guaranteed amount of channel bandwidth to
support timely delivery of inter-task messages. With the special timing property of bounded
time between any number of consecutive visits of the token to a node, which is necessary for
real-time communication, the timed token protocol becomes one of the most suitable and
attractive candidates for hard real-time applications. The timed token protocol has been
incorporated into many network standards including the Fiber Distributed Data Interface (FDDI), IEEE 802.4, the High-Speed Data Bus (HSDB), the High-Speed Ring Bus (HSRB) and the Survivable Adaptable Fiber Optic Embedded Networks (SAFENET), which are used
as backbone networks in many embedded real-time applications [1].
The important concept of the timed token protocol was first proposed by Grow in [2] where
the framework (the basic idea) of the timed token protocol, adaptable to either a physical
or a logical ring, was described. Ulm [3] then studied the protocol proposed by Grow and
its performance characteristics. The timing properties of the timed token protocol were first
formally analyzed by Johnson and Sevcik in [4, 5] where it is shown that the average token
rotation time is bounded by the Target Token Rotation Time (TTRT ) and the maximum
token rotation time cannot exceed twice the TTRT . Chen et al [1, 6, 7, 8] made a detailed
study on the timing behavior of the timed token protocol and generalized the upper bound
derived by Johnson and Sevcik on the maximum token rotation time. That is, they extended
the upper bound on the time possibly elapsed between any two successive token's arrivals at a
node (i.e., the maximum token rotation time) to between any v (v is a positive integer no less
than two) successive token's arrivals at a node. Their general result is important for studies
on real-time communications in any network where the timed token protocol is employed,
and has already been used extensively by many researchers [1, 8, 9, 10, 11, 12, 13, 14, 15, 16]
in studying (analyzing) various kinds of synchronous bandwidth allocation (SBA) schemes.
Unfortunately, this general upper bound derived by Chen et al, although very important,
may not keep tight when v grows large enough, and consequently the SBA schemes previously
developed and analyzed based upon this general upper bound may not be as satisfactory as
they should be. Han et al [17] also derived a generalized Johnson and Sevcik's result, which
makes the previous results by Johnson and Sevcik and by Chen et al become special cases.
But, the result given by Han et al is almost the same, in nature, as (but not tighter than)
that first derived by Chen et al, for the upper bound on any number of successive token
arrivals to a particular node of the ring. Zhang and Burns [18, 19] investigated the inherent
timing properties of the timed token protocol and found, as a result, that the upper bound
derived by Chen et al can be replaced by a tighter one. Their generalized upper bound
expression, which is more complex than either of those derived by Chen et al and Han et al,
may produce a tighter upper bound when the number of successive token rotations becomes
large.
It should be noticed that exploring the inherent cycle-time properties of the timed token
MAC protocol is particularly important for research on hard real-time communication in any
timed token ring network. For example, a tighter upper bound on the time possibly elapsed
in the worst case between any given number of successive token's arrivals at a particular
node will lead to a derivation of a tighter (or larger) lower bound on the minimum available
time (that can be used by that node for transmission of real-time messages)
for any given length of time, which in turn brings a better chance (i.e., a larger possibility)
for real-time messages to be transmitted before their deadlines. Also, a generalized upper
bound on elapse time during any number of successive token visits between any two nodes
(say, the source node and the destination node) helps guarantee the end-to-end deadlines of
time-constrained messages to be transmitted between two network nodes.
In this report the derivation of a more generalized result on the cycle-time properties
of the timed token protocol is presented. Specifically, an upper bound on the elapse time
between the token's l th arrival at any node i and the token's (l + v) th arrival at any node k (where v can be any non-negative integer) is derived. The derived new result generalizes all
the previous findings on the cycle-time properties and is better than any previously published
result in the sense that it is more general and/or tighter.
The rest of this report is organized as follows: In Section 2 a description of the network
model is given. The timed token protocol is then briefly introduced in Section 3. The previous
results of related protocol timing properties is formally described in Section 4. In Section 5,
a concise formal proof to a new generalized result on the cycle-time properties is presented.
In Section 6 it is shown how the generalized result generalizes all the previous findings on
the cycle-time properties and why it is better than any of the existing results. An example
is given in Section 7 to show the importance of the generalized cycle-time property for hard
real-time communication with the timed token protocol. Finally, the report concludes with
Section 8.
2 Network Model
The network is assumed to consist of n nodes connected to form a logical ring and be free
from any hardware or software failures. A special bit pattern called the token, which grants
permission/right to its holder to transmit among the contending nodes, rotates around the
ring in a pre-determined order. The message transmission is controlled by the timed token
protocol. The node holding the token transmits its frames for as long as the protocol allows
then passes the token to its downstream neighbor 1 .
Let τ i denote the maximum portion of the time which is unavailable for message transmission between the token's arrival at node i and the token's immediately subsequent arrival at node i's downstream neighbor (i.e., node i + 1) 1 . That is, τ i represents the sum of various overheads possibly incurred during the above-mentioned time interval (between node i and its downstream neighbor), which includes node bit delay, node latency buffer delay, media propagation delay, and various protocol dependent overheads 2 . Then, the maximum fraction of the time unavailable for message transmission during one complete token rotation, denoted as τ, can be expressed by the sum total of all above portions of time between every two neighboring nodes, i.e., τ = Σ_{i=1}^{n} τ i .
1 The downstream neighbor of node i is node i + 1 (node 1 when i = n); similarly, the upstream neighbor of node i is node i − 1 (node n when i = 1).
Various overheads possibly involved have been identified by Johnson and Sevcik in [4, 5]. For example,
protocol dependent overheads include token capture delay, token transmission delay, etc.
3 Timed Token MAC Protocol
The basic ideas of the timed token protocol were first presented by Grow [2]. With this
protocol [20], messages are distinguished into two types: synchronous and asynchronous.
Synchronous messages, such as voice or video traffic, are periodic messages which come to
the system at regular intervals and have delivery time constraints. Asynchronous messages
are nonperiodic messages which have no time constraints.
In network initialization time, all nodes negotiate a common value for the TTRT , an
important protocol parameter which gives the expected token rotation time, since each node
has different synchronous transmission requirements to be satisfied. The TTRT should be
chosen small enough to meet responsiveness requirements of all nodes, i.e., the negotiated
value for TTRT should be fast enough to satisfy the most stringent response time requirements
of all nodes. Each node is assigned a fraction of the TTRT , known as its synchronous
bandwidth (denoted as H i for node i), which is the maximum amount of time for which the
node is allowed to transmit its synchronous messages every time it receives the token [1, 8].
Whenever a node receives the token, it first transmits its synchronous messages, if any, for
a time period up to its allocated synchronous bandwidth. The asynchronous messages can
then be sent (if any), but only if the token has rotated sufficiently fast that it arrives earlier
than expected since the token's last arrival at the same node. That is, synchronous traffic
is assigned a guaranteed bandwidth while the leftover bandwidth (unallocated, unused or
both) is dynamically shared among all the nodes for the asynchronous traffic.
Each node has two timers and one counter:
ffl Token Rotation Timer of node i (TRT i ): This timer is initialized to TTRT and is always enabled. TRT i counts down until it either expires (i.e., TRT i = 0) or the
token is received early (i.e., earlier than expected since the token's last arrival at node
i). In either situation, the TRT i is reinitialized to TTRT and enabled again (starting
the counting down process ).
ffl Late Counter of node i (LC i ): This counter is initialized to zero and used to record
the number of times that TRT i has expired since the token last arrived at node i. LC i
is incremented each time TRT i expires and is reset to zero whenever node i receives
the token. The token is said to arrive early at node i if LC i is zero when the token
arrives at node i. Otherwise, if LC i is one, the token is considered to be late 3 .
ffl Token Holding Timer of node i (THT i ): This timer is set to the current value of TRT i
on token's arrival at node i (only if the token arrives early). This timer also counts
down, but is enabled only during asynchronous transmission in order to control the
amount of time for which the node i can transmit asynchronous messages.
When the token arrives early at node i, the current value of TRT i is placed in THT i and
TRT i is then reset to TTRT . Synchronous frames, if any, can be transmitted for a time
not to exceed its allocated synchronous bandwidth (H i ). The node may then transmit its
asynchronous frames (if any) until THT i or TRT i expires (i.e., as long as both THT i and TRT i are greater than zero). On the other hand, when the token is late on its arrival at node i (i.e., LC i = 1), LC i is reset to zero. In this case, node i is still permitted to transmit synchronous frames for a time no more than H i but no asynchronous frames are
allowed to transmit. Refer to [2, 20, 21, 22] for a more detailed description of the timed
token protocol.
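The token-arrival rules just described (TRT, LC and THT) can be sketched as a small model of one node's bookkeeping. This is an illustrative simplification, not an implementation of the FDDI standard: frame transmission is abstracted away, and the asynchronous budget returned on an early arrival is limited here only by THT, whereas the protocol additionally stops asynchronous transmission when TRT expires.

```python
class Node:
    """Minimal sketch of one node's timed token bookkeeping (illustrative only)."""

    def __init__(self, ttrt, h_i):
        self.ttrt = ttrt          # negotiated Target Token Rotation Time
        self.h_i = h_i            # synchronous bandwidth allocated to this node
        self.trt = ttrt           # Token Rotation Timer, counts down from TTRT
        self.lc = 0               # Late Counter
        self.tht = 0.0            # Token Holding Timer (asynchronous budget)

    def clock_advance(self, dt):
        """Advance local time by dt while the token is elsewhere."""
        self.trt -= dt
        if self.trt <= 0:         # TRT expired: reinitialize it and record the expiry in LC
            self.trt += self.ttrt
            self.lc += 1

    def token_arrival(self):
        """Return (synchronous budget, asynchronous budget) usable in this token visit."""
        if self.lc == 0:          # token arrived early: bank the residual TRT for async traffic
            self.tht = self.trt
            self.trt = self.ttrt
            return self.h_i, self.tht
        self.lc = 0               # token is late: synchronous traffic only
        return self.h_i, 0.0
```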
Due to inevitable overheads involved, the total bandwidth available for synchronous message
transmission during one complete traversal of the token around the ring is less than the
actual token rotation time. Because forms part of the token rotation time which is unavailable
for message transmission and synchronous transmission with the guaranteed bandwidth
allocated precedes asynchronous transmission, it is clear that as a protocol constraint on the
allocation of synchronous bandwidth, the sum total of the synchronous bandwidths allocated
to all nodes in the ring should not exceed the available portion of the expected token rotation
time (i.e., TTRT ). That is,

Σ_{i=1}^{n} H i ≤ TTRT − τ.    (1)

The protocol constraint (1) is assumed to hold in the rest of this report.
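Constraint (1) is easy to check for a candidate allocation. The snippet below is a minimal sketch; it assumes the allocated synchronous bandwidths H i and the per-rotation overhead τ are given in the same time unit as TTRT.

```python
def satisfies_protocol_constraint(H, ttrt, tau):
    """Check constraint (1): the allocated synchronous bandwidths plus the
    per-rotation overhead must not exceed the Target Token Rotation Time."""
    return sum(H) + tau <= ttrt

# Example with placeholder values: 4 nodes, TTRT = 8 ms, total overhead 1 ms per rotation.
print(satisfies_protocol_constraint([1.5, 2.0, 1.0, 2.5], ttrt=8.0, tau=1.0))  # True
```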
3 To be exact, the token should be considered to be "not early" (which includes the case that the token
arrives on time) since the token could arrive at a node exactly when the Token Rotation Timer expires (for
the first time). But, for convenience/simplicity of presentation, "late" will be used instead of "not early" in
the situation that the token arrives when LC i = 1 (so that no asynchronous transmission is allowed).
4 Timing Properties
In this section, a formal statement and a brief review of the previous relevant work on the
protocol timing property are given. In particular, the previous related results are presented
on the cycle-time properties of the timed token protocol, derived respectively by Johnson
and Sevcik [4, 5], Chen et al [1, 6], Han et al [17] and Zhang and Burns [19]. The following
notations are needed for a formal description of the related results.
t c;i the time when the token makes its c th arrival at node i.
d i (l) the time when the token makes its l th departure from node i, i.e., the time
when node i finishes the transmission of its synchronous and/or asynchronous
messages, if any, in the token's l th visit to node i and starts the transmission of
the token to its downstream neighbor [17].
c) the time difference between a reference time point d b (l) and the time when the
token departs from node i the c th time after d b (l) [17]. That is,
(2)
Theorem 1: (Johnson and Sevcik's Theorem [4, 5])
For any positive integer l and any node i (1 i n), under the protocol constraint (1),
The above theorem was first formally proved by Johnson and Sevcik in [4, 5]. This
theorem shows a well-known fact that the maximum time that could possibly elapse between
any two successive token arrivals to a node is bounded by 2 \Delta TTRT . This result can be used
to obtain a lower bound on the minimum number of token visits to a node within any given
time interval. Unfortunately, the lower bound is not tight when the time interval is longer
than 3 \Delta TTRT [1]. Chen et al [6] made a detailed study on timing behavior of the timed
token protocol. As a result, they extended the previous result by Johnson and Sevcik on the
bounded token rotation time to a general one. In particular, they generalized the analysis
by Johnson and Sevcik to give an upper bound on the time possibly elapsed between any v
(v is a positive integer no less than two) consecutive token's arrivals at a particular node, in
which the previous result by Johnson and Sevcik becomes a special case when v = 2. Their
generalized theorem is re-stated as follows:
Theorem 2: (Generalised Johnson and Sevcik's Theorem by Chen et al [1, 6])
For any integer l and any node i (1 ≤ i ≤ n), under the protocol constraint (1),
The above general result is very important and has been extensively used by many
researchers [1, 8, 9, 10, 11, 12, 13, 14, 15, 16] in their studies on SBA schemes. However,
the generalized upper bound (on the time possibly elapsed in the worst case between any
successive token arrivals at a particular node) may not be tight when v
is absolutely true when
To obtain a protocol-parameter-based
upper bound (i.e., an upper bound that is a function of protocol parameters only) which
keep tight even when v n+ 2 and
Zhang and Burns[19] enhanced
the previous work by Chen et al and derived another generalized upper bound expression
(as shown in Theorem 3 below) which is better than either of the upper bound expressions
first given by Chen et al and later derived by Han et al in the sense that their expression
may produce a tighter upper bound when v grows large. Their tighter upper bound can be
used to derive a better lower bound on the available transmission time for any given time
interval [19].
Theorem 3: (Generalised Johnson and Sevcik's Theorem by Zhang and Burns [19])
For any integer l and any node i (1 ≤ i ≤ n), under the protocol constraint (1),
al [17] also studied the cycle-time properties of the timed token MAC protocol,
and consequently derived a generalized upper bound on the elapse time between the token's
l th departure from node b and the token's (l th departure from node i, which makes the
previous results by Johnson and Sevcik [4, 5] and by Chen et al [1, 6] become special cases
of their result. But, for the elapse time of any given number of successive token arrivals to
a particular node, their more generalized upper bound is almost the same as (but no tighter
than) that first derived by Chen et al, although they are different in form of the derived upper
bound expressions. The following theorem shows their generalized upper bound expression.
Theorem 4: (Generalised Johnson and Sevcik's Theorem by Han et al [17])
For the timed-token MAC protocol, under the protocol constraint (1), for any l ≥ 1 and c ≥ 1,
where
is subject to the definition of
shown below:
f
A proof of the above generalized result can be found in [17] where it is shown how the
previous results by Johnson and Sevcik and by Chen et al become special cases of their result.
Although the upper bound expression derived by Han et al is more general, it may produce
an upper bound that is not as tight as that produced using the upper bound expression
given by Zhang and Burns (as shown in Theorem 3) on the elapse time between successive
token arrivals to a particular node, due to the fact that their upper-bound is the same as
that first derived by Chen et al on the time elapsed during any certain number of successive
token rotations. In this report a generalized result on the cycle-time properties of the timed
token MAC protocol will be derived. The generalized result is better than any previously
published (i.e., any of Theorems 1, 2, 3 and 4 above) in the sense that the upper bound
expression derived in this report is more general and/or tighter. This new generalized result
is shown by the following theorem whose formal proof is presented in the next section.
Theorem 5: (Generalised Johnson and Sevcik's Theorem)
For any integers l and v (l ≥ 1, v ≥ 0) and any nodes i and k (1 ≤ i ≤ n; 1 ≤ k ≤ n), if
under the protocol constraint (1),
\Delta@ n
where
is subject to the definition 4 of
shown below (where e and f are
f
Before formally proving Theorem 5, we need to define some terms and to show a lemma, as
given in the following subsection.
5.1 Preliminaries
The definitions of all the terms to be used later on are summarized in Table 1. For the
convenience of an easy comparison with and an easy understanding of similar studies, some
of the notations adopted by Chen et al [6] and by Sevcik and Johnson [5] in their proofs on
the related timing properties are retained and quoted. Notes to some of the notations are
given below:
ffl Token visits to nodes are indexed by a pair of subscripts, say, "c; i", where c and i
indicate respectively the token cycle and the node being visited. That is, visit c; i
denotes the token's c th visit to node i. The following natural ordering will be frequently
used in the proofs later on: visit c; i is followed by visit c; i + 1, or by visit c + 1; 1
when i = n. When the subscript pair c; i \Gamma 1 is used to denote the visit
before c; i and i = 1, it should be taken to be c \Gamma 1; n. Similarly, if i = n, the visit c; i + 1
after c; i should be taken to be c + 1; 1. These pairs of visit indices will also be used
in summations. For instance, the sum total of the quantity q for all the visits starting
with the token's j th visit to node k, and ending with the token's w th visit to node z,
4 Note that the definition of this term in Theorem 5 is not exactly the same as that (defined by Han et
al [17]) in Theorem 4.
Table
1: Glossary of Terms
n The number of nodes on the ring.
TTRT The Target Token Rotation Time.
H i The synchronous bandwidth allocated to node i.
i The maximum amount of the time unavailable for message transmission between
the token's arrival at node i and the token's immediately subsequent arrival at
node i's downstream neighbor (i.e., node (i+1) ), i.e.,
The maximum amount of the time unavailable for message transmission (i.e., the
tightest upper bound on the sum total of various overheads possibly involved) in
one complete token rotation.
c; i A pair of subscripts used to index the token's visits to nodes, c indicating the token
cycle and i indicating the node being visited. That is, c; i indexes the token's c th
visit to node i.
h c;i The time spent transmitting synchronous messages on the token's c th visit to node
a c;i The time spent transmitting asynchronous messages on the token's c th visit to
node i. 0 a c;i TTRT \Gamma .
c;i The overheads involved (which is unavailable for message transmission) between
the token's c th arrival at node i and the token's immediately subsequent arrival at
node i's downstream neighbor (i.e., node (i+1) ).
v c;i The duration of the token's c th visit to node i, i.e., the sum of h c;i , a c;i and the overhead incurred on that visit.
B c;i The time spent in one complete token rotation ending with the token's c th visit to
node i, i.e., the sum of the durations of the n successive visits ending with visit c; i.
t c;i The time when the token makes its c th arrival at node i.
can be expressed as the sum of q x;y taken over all visits x; y from visit j; k to visit w; z.
ffl Signs "=", "<", "≤", ">" and "≥" can be used to link two visits. "x; y = c; i" means
that visit x; y is the same as visit c; i (i.e., x = c and y = i). "x; y < c; i" means that
visit x; y is earlier than visit c; i (in this case we also say visit x; y is before visit c; i).
"x; y ≤ c; i" means that visit x; y is no later than visit c; i (i.e., either x; y < c; i or x; y = c; i).
Similarly, "x; y > c; i" means that visit x; y is later than visit c; i; and
"x; y ≥ c; i" means that visit x; y is no earlier than visit c; i.
ffl Let h c;i and a c;i respectively represent the times spent in transmitting synchronous
and asynchronous traffic on the token's c th visit to node i, and let c;i denote the various
overheads possibly involved (which are unavailable for message transmission) between
the token's c th arrival at node i and its immediately subsequent arrival at node i's
downstream neighbor (i.e., its c th arrival at node i + 1). Then the duration of the token's
c th visit to node i, denoted as v c;i , can be expressed as the sum of h c;i , a c;i and c;i , i.e.,
Further, let B c;i be the length of a complete token rotation ending with its c th visit to
node i, we have,
c;i
Note that according to the timed token MAC protocol described in Section 3, each
node i can transmit its synchronous messages for a time interval at most up to its
assigned synchronous bandwidth H i , and can transmit its asynchronous messages only
up to the amount of time by which the token arrived early. So, for c ≥ 1 and 1 ≤ i ≤ n,
a c;i max(0;
Further, with this protocol, any node is not allowed to keep holding the token for more
than TTRT units of time. So, we have
a c;i
Combining (5) and (6) into one, we get
a c;i min f
Note that if no nodes have messages, either synchronous or asynchronous,
to send during the immediately preceding token rotation ending with that visit,
the maximum bandwidth possibly available for transmitting asynchronous messages,
according to (5), is bounded by TTRT minus the total overheads of one complete token
rotation. (A small illustrative sketch of these timing rules is given at the end of this
list of notes.)
ffl Similar to the definition shown in Theorem 5 (which represents the sum
of the allocated synchronous bandwidths of less than n successive nodes), we define
a corresponding term as the sum of the various overheads incurred during less than n
successive token visits to nodes.
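To make the timing rules summarized in these notes concrete, the following is a minimal, hypothetical sketch (in Python, and not part of the original analysis) of a token-rotation simulation. The callables sync_demand and async_demand, the choice of time origin, and the per-visit overheads tau[i] are assumptions introduced purely for illustration.

# Hypothetical sketch of the timed-token timing rules described above:
# synchronous transmission is limited to H[i], asynchronous transmission is
# allowed only up to the token's earliness, and no node holds the token for
# more than TTRT.  sync_demand(c, i) and async_demand(c, i) are assumed
# placeholder callables giving the traffic offered on the token's c-th visit.
def simulate_token(H, tau, TTRT, sync_demand, async_demand, rotations):
    n = len(H)
    last_arrival = [0.0] * n      # assume a rotation completed exactly at time 0
    t = 0.0
    arrivals = []                 # records (c, i, t_{c,i})
    for c in range(1, rotations + 1):
        for i in range(n):
            arrivals.append((c, i + 1, t))
            early = max(0.0, TTRT - (t - last_arrival[i]))   # token earliness at node i
            last_arrival[i] = t
            h = min(H[i], sync_demand(c, i))                 # h_{c,i} never exceeds H_i
            a = min(early, async_demand(c, i), TTRT - h)     # earliness bound and the
                                                             # TTRT token-holding bound
            t += h + a + tau[i]                              # visit duration = h + a + overhead
    return arrivals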
The following lemma is needed for the proof of Theorem 5.
Lemma 1: If the token is early on visit c; i (i.e., early on its c th arrival at node i), then
c;i
Proof: Since the token arrives at node i early on visit c; i, we have
Suppose that the token makes its c th arrival at node i earlier than expected by ffi
according to the timed token protocol, that
With (7) we obtain that
a c;i minf
Thus, we get,
c;i
5.2 Proof of Theorem 5
Theorem 5: (Generalised Johnson and Sevcik's Theorem)
For any integers l and v (l ≥ 1, v ≥ 0) and any two nodes i and k (1 ≤ i ≤ n; 1 ≤ k ≤ n),
under the protocol constraint (1),
\Delta@ n
where the remaining terms are calculated according to the definition given in Theorem 5
(e and f are integers and "mod n" represents the "modulo n" operation).
Proof: The time interval from t l;i to t l+v;k exactly corresponds to the visits from l; i inclusive to l + v; k
exclusive. There are two cases to consider:
Case 1: The token is late on all visits from visit l; i inclusive to visit l + v; k exclusive.
As the token is late on every visit, there is no asynchronous transmission. Thus, the time
elapsed during any complete token rotation, if any, is bounded by
. Since there
are totally (v \Delta n + k \Gamma i) visits between visit l; i inclusive and visit l + v; k exclusive (i.e., from
time t l;i to time t l+v;k ) and each token rotation consists of n successive visits, the number of
complete token rotations incurred in the above interval is given by
The time spent on the remaining visits (which start with node i and end with node (k \Gamma 1)), if any, is bounded
by
should be calculated according to the
definition of
shown in Theorem 5). Based on the above analysis, we have,
because h x;y ≤ H y ,
On the other hand, from the generalized upper-bound given in Theorem 5, we get the
following derivations:
\Delta@ n
\Delta@ n
\Delta@ n
(by the protocol constraint (1))
Clearly, Theorem 5 follows in this case with Inequalities (11) and (12).
Case 2: There is at least one early visit from visit l; i inclusive to visit l + v; k exclusive.
Let p 1 ; q 1 , p 2 ; q 2 , . . . , p m ; q m be the m
early visits in between visit l; i inclusive and visit l + v; k exclusive, subject to the following
conditions:
(a) for all s (1 ≤ s < m),
(b) visit p m ; q m is the last early visit before visit l + v; k;
(c) visit p s ; q s (1 ≤ s < m) is the last early visit before visit p s+1 ; q s+1 , if any.
The following observations can be made from the above definitions:
(A) From (a) above, we see that for 1 ≤ s < m, between visit p s ; q s exclusive and visit
p s+1 ; q s+1 inclusive, there are at least (n + 1) successive visits. It is easy to check
that there are totally (v \Delta n + k \Gamma i) visits between visit l; i inclusive and visit l + v; k
exclusive; therefore, the maximum possible number of m is bounded by
(B) From (b) above, we know that any visit between visit p m ; q m exclusive and visit l + v; k exclusive,
if it exists, is a late visit. That is, any such visit x; y is a late visit. Thus, asynchronous message transmission is
not allowed on visit x; y. So we have
x;y=pm ;q m+1
x;y=pm ;q m+1
(C) Similar to (B) above, from (c) above, we know that any visit between visit p s ; q s exclusive
(1 ≤ s < m) and visit p s+1 ; q s+1 exclusive, if it exists, is a late visit. Thus we
have, for 1 ≤ s < m,
x;y=ps ;q s+1
x;y=ps ;q s+1
(D) From the above definitions we see that if then there are no early visits
between visit l; i and visit inclusive. Hence, no asynchronous messages
are allowed to transmit on all these visits (if exist). So, if
(E) By Lemma 1 we see that whenever there is an early visit, the time possibly elapsed
during the (n + 1) visits ending with the early visit is bounded by TTRT
plus the synchronous bandwidth used and the amount of various overheads incurred
in this early visit. For the convenience of the proof, this upper bound (on the elapse time
of (n + 1) successive visits) can be easily formed by supposing that the time elapsed
during the first n successive visits (that form one complete token rotation) is upper
bounded by TTRT and that there is only synchronous transmission in the (n + 1) th visit.
Note that this is just a supposed equivalent scenario (which therefore may
not match what actually happens), but it leads to the same upper bound
as the one theoretically derived, and it simplifies the proof and the derivations to be
presented later.
It should be stressed here that we are interested in obtaining an upper bound on
the elapse time of (n + 1) visits ending with an early visit, and one such
upper bound can be obtained by using TTRT in place of the first n successive visits
(i.e., one complete token rotation), plus the synchronous bandwidth consumed and the
overhead incurred in the (n + 1) th (early) visit.
Note that replacing (removing) any n successive visits does not break the neighboring
relationship between nodes because any n successive visits make up one complete
token rotation. That is, the node corresponding to the visit immediately before these
visits and the node corresponding to the visit immediately after these n visits neighbor
each other (the latter is the immediately subsequent node of the former) although
these two corresponding visits are one-token-rotation apart. Realizing this is important
for understanding well the later derivations. In particular, according to the above
analysis and with the above definitions (a)-(c), we see that for any early visit
m), the node corresponding to visit and the
node corresponding to visit p s ; q s (i.e., node q s ) neighbor each other (since node q s is
the immediately subsequent node of node (q s \Gamma 1)). Here the removed n visits (that
are replaced by TTRT ) are visits between
(F) Based upon how far the visit p 1 ; q 1 is from the visit l; i, several different cases are
considered in the later derivations. The following analysis helps follow derivations in
different cases:
In this case, all the (n + 1) successive token visits connected with all m early visits
(subject to the above definitions (a)-(c)), i.e., totally m \Theta (n + 1) visits, fall within
the visits from l; i inclusive to l + v; k exclusive. According to (E) above, we see
that each early visit p s ; q s causes
"one replacement of n successive visits (i.e., one rotation)
by TTRT ". So, the final derived upper bound (on the elapse time from visit l; i
inclusive to visit l + v; k exclusive), for this case, should include "m \Delta TTRT ".
Further, since each early visit p s ; q s causes a removal of n visits (i.e., one
rotation) which is replaced by TTRT , the total number of the remaining visits
will be the total number of all visits minus the number of removed visits, i.e., (v \Delta n + k \Gamma i \Gamma m \Delta n). Note
that when seeking an upper bound for all these remaining
visits, we should only consider transmission of synchronous messages in any of
these remaining visits because any of these visits is either a late visit x; y (if x; y 6=
m) or has been assumed/supposed (in the imaginary equivalent
scenario where only synchronous transmission is considered (accounted for) in
the (n th visit of (n visits ending with an early visit (i.e., visit
analyzed in (E) above), for convenience of proof, (if x;
to the unbroken feature of neighboring relationship between nodes (whenever the
removal of n successive nodes happens) as analyzed in (E) above, these remaining
visits, if any, can be treated as q imaginary equivalent token rotations and r
remaining visits (where 0 ≤ r < n), where q and r are the quotient and remainder of
dividing (v \Delta n + k \Gamma i \Gamma m \Delta n) by n.
Thus, the elapse time during the q equivalent token rotations (during either the
first or the last q visits of the remaining visits) is bounded by
because only synchronous transmission happens (or only the transmission of synchronous
messages is accounted) on any of these visits, as indicated in (E) above,
and the synchronous bandwidth actually used in any visit never exceeds the allocated
amount. Clearly, the above bound given in (17) should also be part of the
final derived upper-bound expression.
As for the r remaining visits, with the unbroken feature of neighboring relationship
between nodes, it is easy to check that the elapse time during r remaining
visits is bounded by
should also appear in the final expression
In this case, to simplify derivations of the proof, all the (n (from
inclusive to l are divided into the following two groups:
Group 1: visits from "
Group 2: visits from " l; i" to "
We now discuss the upper-bounds for visits in these two groups respectively.
For visits in Group 1, we can do exactly the same analysis as that adopted in
above. Since there are totally (m \Gamma 1) early visits (i.e., "p
and each of them is related to (n successive visits that fall into the
visits in Group 1, the final upper-bound expression (for Group 1) should include
". Because there are totally [(l visits in
Group 1 and among all these visits, (m\Gamma1) \Delta n visits are replaced by (m\Gamma1) \Delta TTRT ,
the number of remaining visits is
to above, we can now calculate q (the number of (imaginary) equivalent
rotations from the remaining visits) and r as follows:
ng \Gamma
(l
Similarly, the time possibly elapsed during the q equivalent rotations (i.e., q \Delta n
visits) is upper bounded by
(l
\Delta@ n
and r remaining visits can never exceed
Also, both of these
two bounds, together with " should appear in the final upper
bound expression for Group 1.
For visits in Group 2, by Lemma 1, we can easily find an upper bound on
the elapse time during all [(p visits (in this group) between
inclusive, as follows:
We further notice that visit l; i becomes the only visit in Group 2 when
According to the timed-token MAC protocol we see that the band-width
consumed in any single visit (say, visit x; y) for transmission of synchronous
and/or asynchronous messages, can never exceed TTRT (because by (7) we have
a x;y TTRT \Gamma h x;y ). So the time elapsed in visit l; i (when
bounded by TTRT (and therefore by "T TRT
With the above definitions (a)-(c) and observations (A)-(F), we can now formally derive
the upper bound given in the above Theorem 5 as follows:
x;y=pm ;q m+1
x;y=pm ;q m+1
( by (14) and (15) of the observations [B] and [C] above )
since
( by (15) of the observation [C] above )
x;y=pm ;q m+1
m)
(by observation [E] above) if
(by observation [F] above) if l \Gamma
(by observation [F] above) if
because h x;y ≤ H y ,
\Delta@ n
( by (13) of the observation [A] above and the fact that the above
upper bound is an increasing function of m
From the above proof process, we see that the derived upper bound is independent of
the actual synchronous bandwidth used by each node, as long
as the protocol constraint (1) holds. That is, the bound still works even when h x;y < H y for
some visit x; y. Realizing this fact is important for real-time communication with the timed token
protocol.
The generalized upper-bound expression (given in Theorem 5) is useful for determining
the worst-case delivery time of a real-time message (i.e., from its arrival at the source node
till its arrival at the destination node) and is therefore helpful for guaranteeing end-to-
end (application-to-application) deadline constraints (say, a synchronous message produced
by an application at the source node i will be sent to another application running at the
destination node k).
6 Comparison with Previous Results
In this section we shall show how the generalized result given in Theorem 5 generalizes all
the previous findings on the cycle-time properties of the timed token MAC protocol and why
the upper bound derived in this report is more general and/or tighter than any of existing
results (upper bounds) shown in Theorems 1-4.
To achieve an effective comparison of related results, we shall only compare the generalized
upper bound expression with that derived by Zhang and Burns (see Theorem
the case of successive token arrivals to a particular node) and with that derived by Han et
al (see Theorem 4) for a more generalized case of any number of successive visits in between
any two network nodes. This approach is taken because in [18, 19] Zhang and Burns have
demonstrated how the previous findings by Johnson and Sevcik (see Theorem 1) becomes a
special case of their generalized result and why their generalized upper bound is tighter than
that derived by Chen et al (see Theorem 2) when the number of consecutive token rotations
grows large enough under
in [17] Han et al have shown how their
result generalizes the previous results by Johnson and Sevcik and by Chen et al.
(1) Comparing Theorem 5 with Theorem 3
It is easy to show that Theorem 5 becomes Theorem 3 when k = i. From Theorem 5 we
get, when k = i, the following derivations:
\Delta@ n
(since d v \Deltan
c)
That is,
This is the same result as that shown in Theorem 3 and can be re-stated in exactly the same
form as the upper bound expression of Theorem 3 where it is assumed that v 2.
(2) Comparing Theorem 5 with Theorem 4
The following lemma is needed for comparison of Theorems 4 and 5.
Lemma 2 For any two visits "l; i" and "l + v; k" (where l and v are
any integers subject to l ≥ 1 and v ≥ 0, and i and k are any two nodes, 1 ≤ i ≤ n and 1 ≤ k ≤ n),
So we get
0 and
Consider the following two cases:
Case 1: if since in this case, we have 1 k i n)
In this case, Lemma 2 follows under the following facts:
d
Case 2: if since in this case, we have 1
In this case we have:
and
Clearly, Lemma 2 also follows in this case. 2
To show that the generalized upper bound given in Theorem 5 is tighter than that derived
by Han et al (given in Theorem 4), we can relax the generalized upper bound (of Theorem
as follows:
\Delta@ n
(by (18) of Lemma 2 and
That is,
As will be shown below, even the above relaxed upper bound (given by (20)) is still
tighter than that derived by Han et al. To enable the comparison, we need to represent \Delta b;i (l; c)
(exactly according to the definition given by Han et al [17]) using the notation
t c;i defined in this report. For simplicity, let ' be the delay between the departure of the
token from any node i and its immediate arrival at the downstream neighbor of node i (i.e.,
node (i + 1)). So we have
Further, with (21) and from the definition of \Delta b;i (l; c) (see (2) in Section 4), we can do the
following conversions:
From the above converted equivalent result and with the relaxed upper bound (20), we
can go on to obtain an upper bound for \Delta b;i (l; c) as follows:
(b
(b
(b
(b
(b
(by (20))
c
c
c
(by using
defined in Theorem 5)
On the other hand, according to Theorem 4, we have,
Comparing each upper bound (obtained from Theorem 4) with that of
its corresponding case (obtained from the relaxed upper bound (20)), we see clearly that
Theorem 5 can produce a tighter upper bound (for \Delta b;i (l; c)) than that given by Theorem 4.
7 An Example
In this section a simple example is given to show the importance of the new generalized
cycle-time property for distributed real-time applications.
Consider a network (that supports the timed token protocol) with three nodes (numbered
1,2,3). Assume that each node i (1 ≤ i ≤ 3) has a periodic/synchronous message stream S i
characterized by a period P i , a maximum transmission time C i and a relative deadline D i .
Messages from stream S i arrive at regular intervals with period P i and have deadlines D i
by which they must be received by the destination node (i.e., if a message from S i of source
node i arrives at time t, it must be completely received by the destination node k (k ≠ i)
by time t + D i ). Assume that the token circulates around the ring from node 1 to nodes 2,
3, and then back to node 1 again to repeat the order, and that messages from source nodes
are sent respectively to destination nodes 3, 3 and 1. Parameters of synchronous
messages (for all three message streams), together with the synchronous bandwidth (H i )
allocated to each node i on the network, are listed in Table 2. To simplify calculation we
also assume that
Table
2: Message and Network Parameters
Clearly, the given allocation of synchronous bandwidths satisfies the protocol constraint
(1).
We shall check below if
the given setting of network parameters (i.e., H 1 , H 2 , H 3 , TTRT and ) can ensure that
all synchronous messages will arrive at their destination nodes before their deadlines, by
respectively using the new generalized cycle-time property (Theorem 5) and that previously
derived by Han et al (Theorem 4). As will be clear, all message deadlines, which can be
guaranteed when judged with Theorem 5, are wrongly judged with Theorem 4 as failing to
be guaranteed. The reason for this is that the worst-case message response time for a message
in source node i to reach destination node k (defined as the longest possible time from
the instant of the message being available for transmission at source node i till the instant
when the whole message arrives at the destination node k), denoted as R(i; k), could be
much different when calculated with Theorems 4 and 5 respectively. The value of R(i;
calculated with Theorem 5 could be much shorter than that obtained with Theorem 4.
Let T t l;i (v; k) be the time difference between a reference time point t l;i (the token's l th
arrival at node i) and the time when the token arrives at node k the v th time after t l;i . Assume
a synchronous message from S i comes to the transmitting buffer of node i immediately after
some time t l;i . That is, at time t l;i , there is not a synchronous message waiting to be
sent in the transmitting buffer of node i and thus the transmission right (token) is either
internally passed to asynchronous transmission at the same node i (if the token is early on
visit l; i and there is asynchronous traffic waiting to be sent) or externally forwarded to the
subsequent node (i.e., node i + 1). But just at that moment when the transmission right
(token) is internally passed or externally forwarded, the message arrives and becomes ready
for transmission. This is actually the worst case situation for transmission of a message from
because the message misses the first chance of being transmitted on visit l; i. Because C i
units of time are needed for transmission of a whole message from S i and node i can use at
most H i time units for transmitting synchronous messages whenever it receives the token, a
total of dC i =H i e token arrivals is expected in order to finish transmission of the whole
message, which is divided into dC i =H i e frames (to be transmitted separately, one frame per token
arrival). Since the message misses the first chance at time t l;i in the worst case, we can
estimate R(i; k) by calculating the time difference between t l;i and the token's (dC i =H i e+1) th
arrival at the destination node k (because the token is appended to all transmitted and/or
forwarded messages, according to the timed token MAC protocol), as follows:
R(i; k) = T t l;i (dC i =H i e + 1; k) (22)
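As a quick illustration of (22), the following hedged sketch computes R(i, k) once a cycle-time bound is available; cycle_time_bound(v, i, k) is an assumed placeholder for the upper bound on T t l;i (v; k) given by Theorem 5 (or Theorem 4), not a function defined in this report.

from math import ceil

# Hedged sketch of equation (22).
def worst_case_response_time(C_i, H_i, i, k, cycle_time_bound):
    frames = ceil(C_i / H_i)                    # the message is sent in ceil(C_i/H_i) frames
    return cycle_time_bound(frames + 1, i, k)   # time to the (frames+1)-th arrival at node k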
To facilitate the calculation of R(i; k) with Theorems 4 and 5, we convert T t l;i (v; k)
(exactly according to its definition) to the following equivalent forms:
(for use with Theorem 5) (23)
use with Theorem
With (22), (23), (24), and message and network parameters shown in Table 2, we can
now calculate R(1; 3) according to Theorems 4 and 5 respectively as follows:
ffl Based upon Theorem 4,
(d C1
Theorem
Thus, R(1; 3) > D 1 , i.e., the time possibly elapsed in the worst case
before the last frame of a whole message from S 1 reaches the destination node 3 is
larger than the required deadline. That is, the message deadline (of stream S 1 ) cannot be guaranteed
when judged with Theorem 4. However, as will be shown next, this is not the case.
ffl Based upon Theorem 5,
(d
d
d C1
(b
e
e
(by Theorem 5)
d
d 150
Thus, R(1; 3) ≤ D 1 . That is, the message deadline (of stream S 1 ) can be
guaranteed when judged with Theorem 5.
Similar to the calculation of R(1; above, we can calculate R(2; 3) and R(3; 1), and
obtain the following results (interested readers can check this themselves):
ffl Based on Theorem 4: R(2;
ffl Based on Theorem 5: R(2;
From the above analysis we see that message deadlines are misjudged as failing to be
guaranteed (for every synchronous message stream examined) when based upon Theorem
4 although in fact no synchronous messages will miss their deadlines when judged with
Theorem 5.
Because R(i; calculated with Theorem 5 could be much shorter than that calculated
with Theorem 4, using the new generalized result (Theorem 5) instead of the previous one
(Theorem 4) can substantially increase the chance for synchronous message deadlines to be
guaranteed.
8 Conclusion
The key to success in using a distributed system for real-time application is the timely execution
of computational tasks that usually reside on different nodes and communicate with
one another to accomplish a common goal. End-to-end deadline guarantees are impossible
without a communication network that supports the timely delivery of inter-task messages.
Timed token ring networks such as FDDI are suitable for distributed real-time applications
due to their inherent timing property of bounded elapse time between any number of
successive token rotations.
In this report a concise formal proof to a generalized result on the cycle-time properties of
the timed token MAC protocol has been presented for the first time. In particular, an upper
bound on the elapse time from the token's l th arrival at any node i till the token's (l + v) th
arrival at any node k (where v is a non-negative integer), is derived. The generalized upper
bound expression, which is particularly important for studies on real-time communications
in any timed token ring network, is better than any of previous related findings on the cycle-time
properties due to the fact that it is more general and may produce a tighter upper
bound.
--R
"Guaranteeing synchronous message deadlines with the timed token medium access control protocol,"
"A Timed Token Protocol for Local Area Networks,"
"A Timed Token Ring Local Area Network and its Performance Characteristics,"
"Proof that Timing Requirements of the FDDI Token Ring Protocol are Satisfied,"
"Cycle Time Properties of the FDDI Token Ring Protocol,"
"Properties of the Timed Token Protocol,"
"Guaranteeing Synchronous Message Deadlines with the Timed Token Protocol,"
"Optimal synchronous capacity allocation for hard real-time communications with the timed token protocol,"
"Local synchronous capacity allocation schemes for guaranteeing message deadlines with the timed token protocol,"
"Selection of timed token parameters to guarantee message deadlines,"
"Deferring Real-Time Traffic for Improved Non-Real-Time Communication in FDDI Networks,"
"Performance Evaluation of a Bandwidth Allocation Scheme for Guaranteeing Synchronous Messages with Arbitrary Deadlines in an FDDI Net- work,"
"Transmitting time-dependent multimedia data in FDDI net- works,"
"Guaranteeing Synchronous Messages with Arbitrary Deadline Constraints in an FDDI Network,"
"The Timed-Token Protocol for Real-Time Communications,"
"Synchronous Bandwidth Allocation in FDDI Networks,"
"On non-existence of optimal local synchronous bandwidth allocation schemes,"
"Timing Properties of the Timed Token Protocol,"
"An optimal synchronous bandwidth allocation scheme for guaranteeing synchronous message deadlines with the timed-token MAC protocol,"
Fibre Distributed Data Interface Ring Media Access Control (MAC)
FDDI Handbook - High-Speed Networking Using Fiber and Other Media
--TR
--CTR
Sijing Zhang , Alan Burns , Jing Chen , E. Stewart Lee, Hard Real-Time Communication with the Timed Token Protocol: Current State and Challenging Problems, Real-Time Systems, v.27 n.3, p.271-295, September 2004 | timed token medium access control MAC protocol;real-time communications;FDDI networks;protocol timing properties;timed token networks |
627400 | The LDL System Prototype. | The logic data language (LDL) system provides a declarative logic-based language and integrates relational database and logic programming technologies so as to support advanced data and knowledge-based applications. A comprehensive overview of the system and a description of LDL language and the compilation techniques employed to translate LDL queries into target query execution plans on the stored data are presented. The architecture and runtime environment of the system and the optimization techniques employed in order to improve the performance and assure the safety of the compiled queries are given. The experience gained so far with the system and application areas where the LDL approach appears to be particularly effective are discussed. | Introduction
The objective of the Logic Data Language (LDL) System is to develop the
technology for a new generation of database systems that support the rapid
development of sophisticated applications-such as expert systems and advanced
scientific and engineering applications. This objective is not new,
since there has been considerable interest in database languages [BaBu],
which have been proposed as the vehicle for facilitating the development
of complex data intensive applications, and bridging the gap between the
database and the programming language-this gap is often described as
an 'impedance mismatch' [CoMa]. Yet, the approach favored by previous
researchers has been that of interfacing relational DBMSs to traditional
languages [RIGEL, Sch77]. More recently, major efforts have been made to
integrate databases and programming languages under the Object-Oriented
paradigm [KiLo]. These approaches tend to abandon relational databases in
favor of an object-oriented one-often supporting a limited query capability
and the navigational query style of pre-relational systems. In contradistinction
with these approaches, the LDL research has taken the viewpoint that
full programming capabilities can and should be achieved through extensions
of relational query languages, and through technology advances that
provide efficient support for this as an integral part of the database management
system. It is also believed that such a system represents an important
way-station toward future Knowledge Management Systems, which
will have to combine efficient inference mechanisms from Logic with efficient
and secure management of large information banks from Database Systems.
Toward this goal, the LDL project, which began in 1984, has produced a
new language, new techniques for compilation and query optimization and
an efficient and portable prototype. This paper recounts this experience and
various lessons learned in this effort.
1.1 Overview
From the beginning, LDL was designed as a rule-based extension to relational
domain calculus based languages. (In a domain calculus, variables
stand for values, rather than tuples as in tuple-oriented calculus.) This was
largely due to the influence of Prolog, and also to QBE (in-line version).
It was felt that the expressive power of the former and the ease of use of
the latter provided more desirable beacons for our endeavor than a straight
extension of SQL. Yet, domain calculus and tuple calculus are known to be
equivalent [Ull], and the overall techniques used for implementing LDL can
be easily applied to suitable SQL extensions.
The basic research challenge faced was to provide a system that combined
the expressive power of Prolog with the functionality and facilities
of Data Base Management Systems (DBMSs), such as, support for trans-
actions, recovery, schema-based integrity, and efficient management of secondary
storage. It soon became clear that an approach based on coupling
Prolog with relational databases [Boc, CeGW, KuYo, Li, JaCV] would not
support the level of functionality, performance and ease of use that we were
seeking. We realized that a fully integrated system is required, where there
is no distinction between query language and application language, and that
arduous research challenges stood in the way of realizing such a goal.
The first issue that came into focus was that of users' responsibility for
execution control. In the '70s and early '80s, the database field had witnessed
a dramatic evolution from navigational systems into relational ones. In navigational
systems, such as Codasyl-compliant DBMSs, the programmer must
explicitly navigate through the maze of database records, paying careful attention
to the sequential order in which these records are visited-the key
to efficiency. In relational DBMSs, instead, the user is only responsible for
the formulation of a correct query (using logic-based languages of limited
expressive power, such as SQL or QUEL [Ull]). A special system module,
called the query optimizer, then compiles each query into an efficient execution
plan. By contrast, in Prolog, the programmer must carefully order
rules and goals to ensure efficient execution and termination. This basic
mismatch, from which all systems coupling Prolog with relational DBMSs
suffer, also challenged LDL's quest for a harmonious integration, leaving
two alternative paths open [Zan1]. One consisted of adding navigational
database facilities to a Prolog-like language; the other of rejecting the navigational
(procedural) semantics of Prolog, in favor of a purely declarative
one, whereby the order of goals and rules in a program becomes immaterial.
In the fall of 1984, the critical decision was taken to pursue the second
solution, with the expectation that it would provide better usability
and suitability for massive parallelism, and would lead to more exciting
research problems and technology breakthroughs. As described in the following
paragraphs, this early decision had profound repercussions on both
the design of the language and its implementation.
A Prolog programmer must be keenly aware of its sequential execution
model (SLD-resolution where the leftmost goal and the first rule is
selected [vEKo, Llo]), not only because the termination and performance of
the program will depend on it, but also because the very semantics of the
many non-Horn constructs -primarily cuts, and updates, but also negation
and 'set-of' predicates- are based on such an execution model. These non-Horn
constructs were introduced in Prolog to obtain the expressive power
needed for application development. Having decided to divorce execution
from the order of rules and goals in the program, the first technical challenge
facing LDL research was to provide a clean design and a formal declarative
semantics for the non-Horn constructs that were needed in the language for
reasons of expressive power. The result is a language that is very different
from Prolog in terms of the constructs and programming style it entails.
Most design choices regarding the LDL implementation approach were
dictated by the need for supporting database applications efficiently. Thus,
in LDL only rules are compiled. The fact base is described at compile
time by a schema, and can then be updated freely at run time with no
need for program interpretation or recompilation. This is a first difference
from Prolog systems where facts and rules are treated in the same way
(thus requiring interpretation when facts are changed). Furthermore, we
concluded that the implementation technology of Prolog and systems based
on backward-chaining, which is based on efficient implementations of SLD-resolution
and unification [Llo, War], was too dependent on main memory,
and a different approach was needed to obtain maximum performance on
secondary-storage resident data. Thus, a simpler execution model was selected
that is based upon the operations of matching and the computation of
least fixpoints through iterations. A benefit of this approach is that matching
operators on sets of facts can be implemented using simple extensions
to the Relational Algebra [Zan2, Zan3] used by many relational databases.
A second advantage is that since recursion has been replaced by iteration,
we can now use a simpler and more static environment for execution.
Having chosen a simpler target language, the LDL designers were faced
with the challenge of designing a more sophisticated compiler to support the
full functionality of the source language. The approach chosen is built on
two pillars:
ffl the use of global analysis to infer the bindings induced by a specific
query in rules and goals, and
ffl the compilation methods which rewrite recursive programs that, as
such, are not efficient or safe to implement by fixpoint computations
into equivalent programs that are.
The first LDL implementation, completed in 1987, was based on a compiler
using an early version of a language called FAD as the target language,
and on an interpreter for this language [DaKV]. FAD is a language based
on relational algebra that is supported by a massively parallel database machine
designed at MCC. While this experiment produced a fully functional
system, FAD was then dropped as the target language for the following
reasons. The FAD interpreter that was available was not robust and fast
enough to support serious experimentation. Furthermore, the only FAD implementation
which was to be made available was for a large and expensive
parallel system-hardly an affordable and portable vehicle for the release
of LDL. This led to the decision of designing and developing SALAD-an
efficient and portable LDL system for UNIX. This implementation assumed
a single-tuple get-next interface between the compiled LDL program and
the underlying fact manager. The single-tuple framework created an opportunity
for refinements and optimization that was not available in the
framework of relational algebra. The implementation included a fact manager
for a database residing in virtual memory that supported efficient access
to the complex and variable record structures available in LDL.
The completion of the SALAD prototype in 1988 made it possible to
start developing interesting applications in LDL. Various extensions and
improvements were added to the system as a result of this experience. As
the system improved, we have expanded the domain of its applications beyond
traditional database applications. Owing to its open architecture and
its compiling into C, SALAD finds applications as a rule-based system for
rapid prototyping of applications in the C environment. An incipient understanding
of a paradigm for programming in LDL has also emerged from
this experience, along with various ideas for desirable improvements.
1.2 Structure of the Paper
Section 2 summarizes key techniques and concepts implemented in the system-
most of them novel and untried techniques developed by the LDL researchers
or by parallel efforts, such as [Meta]. Thus, Section 2.1 gives a brief survey
of the novel features of the language, while 2.2 summarizes the rule
compilation techniques for constant pushing and efficient implementation of
recursion. Section 2.3 describes the various execution schemes supported by
the system, while 2.4 describes the optimizer that, at compile time, selects
a safe and efficient execution for the given query.
Section 3 describes the architecture and implementation of SALAD, including
a discussion of the main modules (Section 3.1), various techniques
for peephole optimization (Section 3.2) and the fact manager (Section 3.3).
Section 4 recounts our experience with LDL and SALAD and with using
them in novel application areas.
Enabling Technology
2.1 Language Design
The language was designed to combine the declarative style of relational
languages with the expressive power of Prolog. Concretely, that meant
using Horn Clauses as Prolog did, and rejecting all the remaining Prolog
constructs, such as negation, set of, updates, cuts, etc. These constructs
were added to Prolog to obtain the expressive power necessary for writing
general applications. While Horn Clauses have a well-defined declarative
semantics, these additional constructs only had an operational semantics
which is based on Prolog's execution model. Thus, a first challenge in our
work was to design into the language proper constructs for negation, sets,
updates and non-determinism and give them a formal semantics that extends
that of Horn Clauses. This semantics can be formally defined using
the notion of minimal model; an alternative but equivalent definition based
on the notion of least fixpoint is also possible [vEKo, Llo]. A detailed discussion
of the LDL design is outside the scope of this paper which focuses on
implementation issues. Thus, we will only provide a brief discussion of the
main constructs to illustrate the richness of the language and the complexity
of compilation and optimization issues posed by its implementation. The
reader interested in a detailed discussion of LDL and its formal semantics
is referred to [NaTs].
Languages such as DATALOG support rules and recursion. A full Horn
Clause language also supports complex terms through the use of function
symbols. Thus, for instance, the record of an employee could have the
following format:
employee(name(joe, doe), admin,
education(high school, 1967))
Along with the employee name we find the department where he works
(admin) and his education. While admin is a simple term, the other two are
complex terms, entailing an internal structure of unrestricted complexity.
For instance, in the education field, one may want to keep more detailed
information (such as school name, level and major) for people with college
degrees, and, for instance, have a record of the following format:
employee(name(joe, cool), sales,
education(college(harvard, bs, math), 1971))
Each sub-argument can be further refined into a more detailed descrip-
tion, thus enabling the modeling of objects of arbitrarily complex structure-
including recursive structures such as lists and trees. LDL has enhanced
this complex term capability by providing for set terms and nested rela-
tions. Thus, we can now have a complete education record for a person as
follows:
employee(name(joe, smart), mts,
education({(high school, 1967),
(college(harvard, bs, math), 1971),
(college(harvard, ms, engr), 1973)})).
Set terms in LDL are first class citizens, having the well-known properties
of sets, such as commutativity and idempotence- but not associativity
[BNST, ShTZ]. In addition to nested relations, LDL provides simple
constructs for nesting and unnesting these relations.
The problem of negated goals in recursive rules represents one of the
main research challenges in defining a declarative semantics for LDL. This
problem has been resolved with the introduction of the rather natural concept
of stratification [ApBW, Naq, Prz]. Informally speaking, this result
disallows the circular definition of a predicate using the negation of the
same. Similar constraints must also be observed when defining the nesting
of sets [BNST, ShNa].
Updates were defined so as to allow the full use of these constructs
in rules and to support the notion of database transactions [NaKr]. The
difficult problem of formalizing their semantics was solved through the use
of dynamic logic [Har]. The semantics so defined reduces to first order logic
in the absence of updates.
Finally, the notion of functional dependencies was used to support non-determinism
through a construct called choice [KrN1].
2.2 The Compilation Problem
The LDL compiler performs several functions, beginning with the parsing
of the rules into a Predicate Connection Graph (PCG) [KeOT] and ending
with the code generation phase. Some details of this complex process
are discussed in Section 3, others are beyond the scope of this paper. In
this section, we describe the rule rewriting phase which is the conceptual
kernel of the compiler. The objective of this is to specialize and refine the
original program into one that is specialized for the particular constraints
resulting from the query and rules at hand. To a large extent, this process
can be viewed as a generalization of the well-known principle of pushing
selection and projection operations into relational expressions. This compilation
phase begins when a query form is given, i.e., a query with mode
declarations specifying the arguments that will be given (ground) at actual
query time. Then, the constant migration step for non-recursive predicates
is performed. For instance, consider the query form
?grandma($X,Y).
(where $X denotes that a value is to be supplied at actual query time) and
the following set of rules:
The constant migration step will actually insert $X (since this value is known
at run time, it is treated as a constant by the compiler) into the corresponding
arguments and variables in the rules, yielding
?grandma($X,Y).
This set of rules can be further simplified by dropping the first argument in
grandma and parent:
?grandma'(Y).
Thus, the original program has been specialized for the given query form.
Furthermore, since $X has been migrated from the query form into the
database predicates (father and mother), the corresponding selection operation
has been pushed from the root of the relational algebra tree representing
the query to the leaf nodes, where the selection is applied against
the database tuples [Ull]. This 'selection pushing' operation, which is the
linchpin of the query processing strategy of relational systems [KrZa, Ull],
is implemented here by simple rule transformation techniques.
The treatment of recursive predicates is, in general, more complex. The
program specialization approach described above works for some simple
cases of recursive predicates. For instance, the following query
?anc(marc, Z).
anc(X, X) / person(X).
anc(X, Z) / anc(X, Y), parent(Y, Z).
can be supported by specializing the anc rules into
anc(marc, Z) / anc(marc, Y), parent(Y, Z).
anc(marc, marc) / person(marc).
and then dropping the constant argument from anc to yield:
anc'(marc) / person(marc).
anc'(Z) / anc'(Y), parent(Y, Z).
A single fixpoint iteration computes this transitive closure efficiently.
The original query condition is now applied directly to the datum parent
relation and not the derived anc relation, i.e., selection has been pushed
inside recursion. Furthermore, a refinement of fixpoint known as semi-naive
fixpoint is used to solve this problem [Ban, BaR, Ull, SaZ4]. The semi-naive
fixpoint iteration basically begins by computing the parents of marc, then
the parents of those parents, and so on, until no new ancestor is found.
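For illustration only (this is not the code generated by the system), the semi-naive iteration for the specialized ancestor program can be sketched in Python as follows; parent is assumed to be a set of (Y, Z) pairs with the same orientation used in the rules above, i.e., parent(Y, Z) holds when Z is a parent of Y.

# Semi-naive fixpoint for anc'(Z) / anc'(Y), parent(Y, Z), with exit fact anc'(marc).
def ancestors(marc, parent):
    anc = {marc}               # exit rule
    delta = {marc}             # newly derived tuples only (the semi-naive refinement)
    while delta:
        new = {z for (y, z) in parent if y in delta} - anc
        anc |= new
        delta = new            # the next round joins parent only with the new tuples
    return anc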
More complex rewriting is required, however, before the following query
can be mapped into a single fixpoint:
?anc(X, brian).
Here, the recursive rule must be first rewritten in its right-linear form, as
follows:
anc"(X, Z) / parent(X, Y), anc"(Y, Z).
Then, the specialization approach can be applied, resulting in linear transitive
closure kind of rules that are easily mapped into a single seminaive
fixpoint.
Because of the frequency with which simple transitive-closure type of
rules are encountered, the LDL compiler performs some sophisticated analysis
to recognize cases where the recursion can be supported efficiently
through a single fixpoint computation.
However, there are many situations were constants cannot be pushed
into recursion [AhUl], and, therefore, a recursive goal with bound arguments
cannot be computed efficiently or safely by a single fixpoint computation.
(The problem of detecting when constants can be pushed into recursion is
in general undecidable [Bet2]).
Thus, more complex rewriting techniques are used to handle the general
case. Take, for instance, the well-known same generation example (two
individuals are of the same generation if their parents are, and everyone is
of the same generation as him/herself).
sg(X, Y) / parent(X, XP), parent(Y, YP), sg(XP, YP).
sg(X, X).
A query such as,
?sg(marc, X).
cannot be supported by the rules obtained by replacing X by marc. More-
over, a bottom-up computation is impossible since the exit rule, sg(X,X),
could qualify an infinite number of tuples. Similar problems occur in computational
procedures, such as list-append, where taking advantage of bound
arguments is essential for a safe and efficient implementation.
A considerable amount of research has been devoted to this key problem
and the reader is referred to [BaRa] for an overview of these techniques. The
LDL compiler uses the magic set method [BMSU, SaZ2] and the generalized
counting method [SaZ3], which are expressible by rule rewriting scripts and
lead to efficient implementations using fixpoint computations. In a nutshell,
these methods take a recursive clique that, for the given query, cannot be
supported well by means of a fixpoint computation and recast it into a pair
of connected recursive cliques, each amenable to efficient fixpoint implementation
This transformation can be illustrated by the example where people of
the same generation as marc are sought. One alternative way to find these
people consists of
ffl deriving the ancestors of marc and counting the levels as we go up
(marc being a zero level ancestor of himself).
ffl once an ancestor of marc, say X, is found, then the descendants of X
are computed, while levels are counted down. Descendants for which
the level counter is zero are of the same generation as marc.
We can express the previous computations as follows (J+1 and J-1 denote
the respective successor and predecessor of the integer J):
sg.up(0, marc).
sg.up(J+1, XP) / sg.up(J, X), parent(X, XP).
sg.down(J, X) / sg.up(J, X).
sg.down(J-1, Y) / sg.down(J, YP), parent(Y, YP).
?sg.down(0, X).
Thus, the initial recursive clique has been reformulated into a pair of recursive
cliques connected via the index J. Each recursive clique can now be
implemented efficiently and safely using a fixpoint computation (indeed each
is basically a transitive closure operation).
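For illustration only, the two fixpoints can be rendered directly (and naively) in Python as shown below; parent is assumed to be an acyclic set of (X, XP) pairs, with XP a parent of X, and the sketch is not meant to reflect the code actually generated by the compiler.

# Naive rendering of the rewritten sg.up/sg.down program.
def same_generation(marc, parent):
    up = {(0, marc)}                                   # sg.up(0, marc)
    while True:                                        # fixpoint for sg.up
        new = {(j + 1, xp) for (j, x) in up for (c, xp) in parent if c == x} - up
        if not new:
            break
        up |= new
    down = set(up)                                     # sg.down(J, X) / sg.up(J, X)
    while True:                                        # fixpoint for sg.down
        new = {(j - 1, y) for (j, yp) in down for (y, p) in parent if p == yp} - down
        if not new:
            break
        down |= new
    return {x for (j, x) in down if j == 0}            # answers to ?sg.down(0, X)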
The equivalence preserving transformation that we have just introduced
using the intuitive semantics of ancestry, can be performed with full generality
on a purely syntactic basis. Indeed, observe that in the succession
of recursive calls generated by the goal sg(marc, X), X and XP are bound
whereas Y and YP are not. Thus, the recursive sg.down rule is basically constructed
by dropping the bound arguments and retaining the others, while
a new argument is added to perform the count-down. The recursive rule
for sg.up is instead built by retaining the bound arguments and then exchanging
the recursive predicate in the head with that in the tail of the rule
(indeed, we want to simulate a top-down computation by a bottom-up one),
and then adding the count-up indexes. Also observe that the original exit
rule is used to glue together the up and down computations. Finally, the
bound part of the query goal becomes the new exit rule for sg.up, while
the unbound part becomes the new query goal. The generalized and formal
expression of these rule rewriting techniques, known as the generalized
counting method are given in [SaZ3].
The counting method is very efficient for acyclic databases, but will loop
forever, as Prolog does, for cyclic databases, e.g., for the same-generation
example above, if the parent relation has cycles. The magic set method
can be used to solve the cycle problem and also for complex recursive situations
[BMSU, SaZ2].
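The magic-set idea can be illustrated in the same style: first compute the set of bindings that a top-down evaluation of ?sg(marc, X) would generate for the bound argument (marc and its ancestors), and then evaluate sg bottom-up restricted to that set. The sketch below is a simplified illustration, not the rewriting actually produced by the compiler; it again assumes (child, parent) pairs and, unlike the counting sketch, it terminates even if the parent relation contains cycles.

# Simplified magic-sets style evaluation of ?sg(marc, X).
def same_generation_magic(marc, parent):
    magic = {marc}                       # bindings reaching the first argument of sg
    frontier = {marc}
    while frontier:
        frontier = {xp for (x, xp) in parent if x in frontier} - magic
        magic |= frontier
    sg = {(x, x) for x in magic}         # exit rule, restricted to the magic set
    while True:                          # recursive rule, restricted to the magic set
        new = {(x, y) for (x, xp) in parent if x in magic
                      for (y, yp) in parent if (xp, yp) in sg} - sg
        if not new:
            break
        sg |= new
    return {y for (x, y) in sg if x == marc}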
While no function symbols were present in the previous examples, all the
compilation techniques just described apply when these are present. This
entails the manipulation of trees, lists and complex structures.
Another area of considerable innovation in the LDL compiler is the support
for set terms. Set terms are treated as complex terms having the commutativity
and idempotence properties. These properties are supported via
compile time rule transformation techniques, that use sorting and various
optimization techniques to eliminate blind run-time searches for commutative
and idempotent matches [ShTZ].
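A hedged illustration of the underlying idea (not the compiler's actual transformation): for ground set terms, commutativity and idempotence can be handled by reducing each term to a sorted, duplicate-free canonical form, so that matching two ground set terms reduces to a plain comparison.

# Toy canonical form for ground set terms: sorting handles commutativity,
# deduplication handles idempotence.
def canonical(elements):
    return tuple(sorted(set(elements), key=repr))

assert canonical(["b", "a", "b"]) == canonical(["a", "b"])   # {b, a, b} matches {a, b}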
2.3 Modes of Execution
Even though LDL's semantics is defined in a bottom-up fashion (e.g.,
via stratification), the implementor can use any execution that is faithful to
this declarative semantics. In particular, the execution can be bottom-up
and top-down as well as hybrid executions that incorporate memoing [Mi68].
These choices enable the optimizer/compiler to be selective in customizing
the most appropriate mode for the given program.
As a first approximation, it is easy to view the LDL execution as a
bottom up computation using relational algebra. For instance, let p(.)
be the query with the following rule, where p1 and p2 are either database
or derived predicates:
Then, this query can be answered by first computing the relations representing
p1 and p2 and then computing their join followed by a projection.
In actuality, the LDL optimizer and compiler can select and implement the
rule above using four different execution modes, as follows:
ffl Pipelined Execution computes only those tuples in p2 that join
with tuples of p1 in a pipelined fashion. This avoids the computation
of any tuple of p2 that does not join with p1 (i.e., no superfluous
work), whereas, if a tuple in p2 joins with many tuples in p1 then it
is computed many times.
ffl Lazy Pipelined Execution is a pipelined execution in which, as the
tuples are generated for p2, they are stored in a temporary relation,
say rp2, for subsequent use. Therefore, any tuple in p2 is computed
exactly once even if it is used many times (i.e., amortized work as well
as no superfluous work of pipelined execution). Further, as both these
pipelined executions compute p2-tuples one at a time, it is possible to
avoid residual computation in the case of intelligent backtracking-this
will be called backtrackable advantage.
ffl Lazy Materialized Execution proceeds as in the lazy pipelined case
except that, for a given Z-value, all tuples in p2 that join with the tuple
in p1 are computed and stored in a relation before proceeding. The
main advantage of this execution is that the execution is reentrant (a
property that is important in the context of recursion), whereas the
above two pipelined execution are not as they compute tuples of p2
one at a time. On the other hand, this execution does not have the
backtrackable advantage.
ffl Materialized Execution computes all tuples in p2 and stores them
in the relation, say rp2. Then, the computation proceeds using the
tuples from rp2. Note this has the amortized work and reentrant advantages
but lacks the backtrackable and superfluous work advantage.
Note that the above discussion can be generalized to any OR-node with a
(possibly empty) set of bound arguments.
In conclusion, the pipelined execution is useful if the joining column is a key
for p1, whereas the materialized execution is the best if all the Z-values
of p2 are joined with some p1 tuple. Note that in both of these cases, the
respective lazy evaluation incurs more overhead due to the checking that is
needed for each p1 tuple. The reentrant property is especially useful if the
predicate is in the scope of a recursive query that is being computed top-
down. Therefore, in such cases, lazy materialized execution is preferred over
lazy pipelined execution. Otherwise, lazy pipelined execution is preferred to
exploit the backtrackable property.
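To make the four options concrete, here is an illustrative Python sketch (not SALAD code). It assumes, purely for illustration, a rule of the form p(X, W) / p1(X, Z), p2(Z, W), with p1 given as a set of (X, Z) tuples and p2 as a generator function over its bound argument Z; the lazy variant shown corresponds most closely to the lazy materialized mode, while a lazy pipelined execution would fill the cache one tuple at a time instead of exhausting p2(z) before proceeding.

# Illustrative execution modes for the assumed rule p(X, W) / p1(X, Z), p2(Z, W).
def pipelined(p1, p2):
    for (x, z) in p1:
        for w in p2(z):                        # p2 recomputed for every matching p1 tuple
            yield (x, w)

def lazy_materialized(p1, p2):
    rp2 = {}                                   # temporary relation, filled on demand
    for (x, z) in p1:
        if z not in rp2:
            rp2[z] = list(p2(z))               # all p2 tuples for this Z, computed once
        for w in rp2[z]:
            yield (x, w)

def materialized(p1, p2, z_domain):
    rp2 = {z: list(p2(z)) for z in z_domain}   # compute all of p2 up front
    for (x, z) in p1:
        for w in rp2.get(z, ()):
            yield (x, w)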
Even though we have limited our discussion here to a single non-recursive
rule, this can be generalized to include arbitrary rules with recursion. This
is presented in detail in [CGK89b].
2.4 The Optimization Problem
The query optimizer is delegated the responsibility of choosing an optimal
execution -a function similar to that of an optimizer in a relational database
system. The optimizer uses the knowledge of storage structures, information
about database statistics, estimation of cost, etc. to predict the cost of
various execution schemes chosen from a pre-defined search space and select
a minimum cost execution.
As compared to relational queries, LDL queries pose a new set of problems
which stem from the following observations. First, the model of data is
enhanced to include complex objects (e.g., hierarchies, heterogeneous data
allowed for an attribute). Secondly, new operators are needed not only to operate
on complex data, but also to handle new operations such as recursion,
negation, etc. Thus, the complexity of data as well as the set of operations
emphasize the need for new database statistics and new estimations of
cost. Finally, the use of evaluable functions (i.e., external procedures), and
function symbols in conjunction with recursion, provide the ability to state
queries that are unsafe (i.e., do not terminate). As unsafe executions are
a limiting case of poor executions, the optimizer guarantees the choice of a
safe execution.
We formally define the optimization problem as follows: "Given a query
Q, an execution space E and a cost model defined over E, find an execution
in E that is of minimum cost." We discuss the advances in the context of this
formulation of the problem. Any solution to this problem can be described
along three main coordinates: (1) execution space, (2) search strategy, and
(3) cost model.
2.4.1 Search Space and Strategies
The search space for optimal executions is defined by set of all allowable
executions. This in turn is defined by a set of (i) execution graphs and
(ii) for each graph, a set of allowable annotations associated with its nodes.
An execution graph is basically a structure of nested AND/OR graphs.
This representation is similar to the predicate connection graph [KeOT], or
rule graph [Ull], except that we give specific semantics to the internal nodes
as described below. The AND/OR graph corresponding to a nonrecursive
program is the obvious graph whose AND and OR nodes are in one-to-one
correspondence with rule heads and predicate occurrences, respectively. A recursive
predicate occurrence, p, has subtrees whose roots correspond not only to
the rules for this predicate but also to the rules in the recursive clique containing
p. Intuitively, the fixpoint of all the rules below this OR node (i.e., the
predicate occurrence for p) needs to be computed in order to compute p.
The annotation provides all other information that is needed to model
the execution. Intuitively, a parameter or property is modeled as an annotation
if, for a given structure of the execution graph, its optimal value
can be chosen greedily. For example, given the ordering
(i.e., the structure) of the joins for a conjunctive query, the choice of access
methods, creation of indices, and pushing of selection are examples of choices
that can be greedily decided. On the other hand, the pushing of selection
into a recursive clique is not a property that can be greedily chosen.
For instance, annotations define which of the four execution methods
previously described are to be used. Each predicate occurrence (i.e., OR
node) is annotated with an execution method. In addition, annotations
describe which indexes should be used and whether duplicate elimination
should be performed at the particular node.
Much effort has been devoted to devising efficient search strategies and
to enabling the optimizer to use alternative strategies, including exhaustive
search, stochastic search and polynomial algorithms.
The traditional DBMS approach to exhaustive search is to use the
dynamic programming algorithm proposed in [Seta]. It is well known that
even this approach is rendered useless for a join of 15 relations. In [KrZa] we
propose an exhaustive search for optimizing LDL programs over the execution
space. This approach is feasible as long as the number of arguments
and the number of predicate occurrences in the body are reasonably small
(i.e., on the order of 10).
Stochastic approaches provide an effective means to find a near-optimal
solution. Intuitively, near-optimal executions can be found by randomly picking
a "large" subset of executions from the execution space and choosing
the minimum-cost execution among them. Simulated Annealing [IoWo], and variations
thereof [SG88], are very effective in limiting the subset which must be
searched before a reasonable approximation is found.
Polynomial search algorithms can be obtained by making some simplifying
assumptions on the nature of the cost functions. In [KrBZ], we presented a
polynomial-time algorithm that computes the optimal ordering of conjunctive
queries when the query is acyclic and the cost function satisfies a linearity
property called the Adjacent Sequence Interchange (ASI) property. Further,
this algorithm was extended to include cyclic queries and other cost models.
2.4.2 Cost Estimates and Safety
The cost model assigns a cost to each execution, thereby ordering them.
Intuitively, the cost of an execution is the sum of the cost of its individual
operations. Therefore, the cost function must be capable of computing the
cost of each operation based on the descriptors of the operands. Three
major problems are faced in devising such cost functions: 1) computing
the descriptors, 2) estimating the cost of external predicates, and 3) ensuring
the safety of recursive queries.
In the presence of nested views, especially with recursion and complex
objects, estimating the descriptor for a relation corresponding to a predicate
is a very difficult problem. This is further complicated by the fact that logic
based languages allow the union of non-homogeneous sets of objects. The
net effect is that the estimation of the descriptor for any predicate is, in
effect, computing the query in an algebraic fashion. That is, the program
is executed in the abstract domain instead of the concrete domain. For
instance, the age attribute may take on a value such as 16 in the concrete
domain whereas, in the abstract domain, it takes on values such as "an integer
between 16 and 65". Obviously, computation in this domain is very difficult,
and approximations to such computation had to be devised that are not
only efficient but also effective.
In LDL, external procedures (e.g., 'C' programs) are treated interchangeably
with ordinary predicates. Intuitively, an external procedure is
viewed as an infinite relation satisfying some constraints. Therefore, a concise
descriptor of such an infinite relation must be declared in the schema,
and the cost functions for the operations on these infinite relations must be
devised. The abstraction of the approach taken in LDL has been presented
in [CGK89c]; this approach integrates external predicates into the traditional
optimization framework in a seamless fashion.
The cost model must associate an infinite cost with an execution that
computes an infinite answer or that never completes. Such unsafe queries
are to be detected so that the Optimizer can avoid choosing them. For
example, consider the following definition of all integers from zero to a given
integer K.
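(The rules below are an illustrative sketch: the predicate name int, the variable
names, and the guard conditions are assumptions chosen to be consistent with the
safety discussion that follows. An LDL-style syntax is assumed, with "<-" separating
a rule head from its body.)
int(K, 0) <- K >= 0.
int(K, J) <- int(K, I), I < K, J = I + 1.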
As intended, the above program is unsafe when all arguments are free. So
let us discuss the safety of this predicate when the first argument is bound
and the second is free. Note that, for each iteration of the recursive rule, the
value of J increases and is bounded above by
the given value of K. Thus it can be concluded that the number of iterations
is finite and that each iteration produces only finitely many tuples. Consequently, the
rule is safe.
In general, the problem of checking for safety is undecidable. The safety
checking algorithm proposed in [KrRS] finds a well-founded formula that
can be used as a sufficient condition to guarantee safety. It is
an enumerative algorithm that exhausts an exponential number of cases to
ensure the existence of a well-founded formula for each recursive cycle: it
guesses well-founded formulae and checks each one
of them until one is found to be satisfied.
3 System Architecture
Figure 1 shows the conceptual architecture for the current LDL prototype 1 .
There are six basic components or modules in the current prototype: the
User Interface, the Fact Manager, the Schema Manager, the Query Manager,
the Rule Manager and the Query Form Manager. Section 3.1 provides a
brief overview of the functionality of the different modules and section 3.2
discusses a few details pertaining to the system architecture and relevant to
the compilation process.
3.1 Main Modules
The User Interface receives and processes user commands, i.e., it invokes
various procedures in the appropriate manager modules. The commands
available from the User Interface are described in [CG89]. The Fact Manager
is responsible for maintaining the various data structures associated with
the extensional database as well as for providing run-time support for LDL
queries. The Fact Manager data structures are collectively referred to as
the Internal Fact Base. The Schema Manager receives the schema definition
file from the User Interface and records the information in an internal form.
Type, index and key constraints are subsequently used by the Fact Manager
to verify the database. Base relation specifications are used by the Rule
Manager to verify consistency.
1 The current implementation contains approximately 70,000 lines of code, of which half
is in Prolog and half is in C.
Figure 1: Conceptual architecture
The Query Manager receives queries from
the User Interface, determines which compiled query form is appropriate for
the query, and invokes the corresponding C program, passing any constants
from the query as arguments.
The Rule Manager is responsible for processing the intentional database,
i.e., the rule base. During the initial processing, the rules are parsed and various
syntactic consistency checks are performed. Each parsed rule is stored in
the Internal Rule Base and then sent to the Global PCG Generator, which is
responsible for transforming the rule set into a Predicate Connection Graph
(PCG). The Global PCG is a tabular data structure with entries specifying
the rule/goal index for all predicates occurring in the rule base. It provides
an efficient means of accessing the rules during subsequent query form pro-
cessing. After all rules have been processed, the Recursive Clique Analyzer
is invoked to identify maximal recursive cliques, detect cliques with no exit
rules, and create the necessary internal structures to represent the cliques
(RC-Boxes). The strongly connected components (predicates) of the PCG
define its recursive cliques. Additional data structures for representing LDL
modules and externals [CGK89a] are also produced by the Rule Manager.
The Query Form Manager embodies the bulk of the LDL compilation
technology. It receives a query form from the User Interface and is responsible
for producing the compiled version of the query form. Figure 2 shows
the organization of the Query Form Manager.
The Relevant PCG Generator generates a Relevant PCG (RPCG) which
is an AND/OR graph containing only those rules relevant to the query form.
The data structure generated is actually a tree instead of a graph since
common sub-expression elimination is not currently part of the compiler
design. During the RPCG extraction process, constant migration, i.e., the
process of substituting deferred constants from the query form or constants
from the relevant rules for variables wherever possible, is also performed.
Note that constants are not migrated into recursive rules.
The Optimizer transforms the RPCG and its associated recursive cliques
as necessary to choose an optimal execution. It performs safety analysis and
reorders goals (OR nodes) in the PCG appropriately. The nodes of the PCG
are annotated by the Optimizer to reflect, among other things, adornment,
pre-selection, post-selection and execution strategies to be employed. The
transformed RPCG is termed the Controlled PCG (CPCG).
The Pre-Enhancer is responsible for providing the program adornment
when the Optimizer is not used (AS-IS compilation). Miscellaneous rewriting
optimizations, e.g., for choice, are also handled by the Pre-Enhancer.
Figure 2: Query form manager architecture
The Enhancer is responsible for rewriting recursive rules such that the recursive
cliques are recast into a form that guarantees efficient execution via
fixpoint operators. Various recursive query processing strategies are supported
including a stack based implementation of the generalized counting
method, the magic set method, and the semi-naive fixpoint method. The
output of the Enhancer is the Enhanced PCG (EPCG).
The Set Rewriter uses rule transformation techniques to produce a revised
but equivalent PCG where set objects have been mapped into first
order terms in order to avoid set unification at run-time. The set properties
of commutativity and idempotence are supported via this rule rewriting. In
the process, the context of the rule is used to constrain the set of alternatives
that must be explored [ShTZ].
Finally, the Code Generator traverses the PCG, generating C code, ultimately
resulting in a complete C Program which is then compiled and
linked to form the final compiled query form. The Code Generator is actually
quite sophisticated in that it performs various peephole optimizations,
e.g., intelligent backtracking and existential query optimization, provides
default annotations in the case of AS-IS compilation, and supports various
execution strategies, all on the fly as code is generated.
3.2 Compilation Techniques
In addition to the rule transformations described in Section 2.2, the LDL
compiler applies a number of techniques to improve the efficiency of the
run-time code.
3.2.1 Pruning the Execution Graph
Much of the unification required to support complex terms is performed at
compile-time. Consider, for instance, the following rules:
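(The rules below are a plausible sketch rather than an exact listing: the predicates
r, p, b1 and b2, the variables, and the function symbol f are those referred to in the
discussion that follows, while the constant a and the body predicate q are purely
illustrative assumptions.)
r(X,Y) <- b1(X), p(f(X,Y)), b2(X,Z).
p(V) <- V = a.
p(V) <- V = f(U,U), q(U).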
Compile-time rewriting of these rules will result in the function f(X,Y)
being migrated into the rules for p so as to replace all occurrences of V.
Subsequently, the first rule for p will be deemed false and will be thrown
out of the relevant rule set. Furthermore, the second rule for p will result in
the unification of X with Y and the substitution throughout the rule of X for
U. At compile-time it will be determined whether an assignment (value of X
assigned to Y) or a check (value of X is the same as value of Y) is required,
based on whether the given variables are bound or not. Note that the code
generator would choose the entry to the rule for p as the appropriate place for
the check in order to detect early failure, whereas the assignment would be
placed at the success of the rule in order to avoid an unnecessary assignment
should the rule fail. Thus, the run-time effort is reduced by eliminating rules
and performing compile-time unification such that only simple matching and
assignments are necessary at run-time. This same philosophy is employed
for set unification such that set objects are mapped into first order terms at
compile-time so that only ordinary matching is required at run-time.
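For instance, a fact containing a set object might be written as follows (the predicate
name and the set elements are illustrative; the braces denote an LDL set term):
part_kit(k1, {bolt, nut, washer}).
Since sets are commutative and idempotent, such a term can be mapped into a
canonical first-order representation (for example, a sorted structure), so that a goal
such as part_kit(k1, {nut, bolt, washer}) can be answered by ordinary matching
at run-time.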
3.2.2 Static Variables
One of the goals of the rewriting performed by the system is to rename
variables, so that the scope of each variable is global with respect to the
program. The purpose of this rewriting is run-time efficiency. By making
each variable global, space for the variables can be allocated statically, as
opposed to dynamically as offsets from a frame pointer. Moreover, assigning
a variable can be done more efficiently in a global framework, as parameter
passing becomes unnecessary. On the other hand, non-recursive rules that
are invoked from more than one predicate are duplicated, thus resulting in
larger object code.
3.2.3 Adornment
For each query form, the compiler constructs an adorned program using the
notion of sideways information passing (SIP) as defined in [Ull], marking
each argument of each predicate as either bound (instantiated to a particular
constant value at run-time), free (the current predicate occurrence will
instantiate it at run-time) or existential (it does not appear elsewhere in the
rule, except possibly in the head as an existential argument). Note that the
rules, in some cases, are duplicated (and renamed) for different adornments
of the predicate occurrence (referred to as stability transformation [SaZ2]).
Thus, each predicate in the adorned program is associated with a unique
binding pattern and every occurrence of that predicate conforms to that
binding pattern. The program segment generated for a predicate can exploit
the bound/existential arguments to generate efficient code. This approach
of generating code for a particular predicate with respect to a given binding
pattern is an important deviation from the approach taken in Prolog, and it
results in improved performance.
3.2.4 Intelligent Backtracking
The nested-loop join operation which is implied by pipelined execution
presents significant opportunities for avoiding computation that cannot generate
new results. In the literature this is known as the intelligent backtracking
problem. Two types of intelligent backtracking have been addressed in the com-
piler: get-next and get-first. Consider, again, the LDL rules given above.
Let us assume that the rules are compiled for the query ?r(X,Y). After computing
a tuple for r, backtracking to get the next tuple for b2 is unnecessary
since it will not yield any new tuples for r. The compiler will choose the
predicate p as the get-next backtrack point for the rule since the variable Y
is bound there 2 . To illustrate get-first intelligent backtracking, consider the
predicate b2. If the attempt to get the first tuple in b2 fails, it is unnecessary
to backtrack to p since it does not change the bound argument for
b2. Therefore, if no tuples are found for b2, the backtrack point will be b1
since that is where the variable X is bound. Hence, by doing compile-time
analysis, intelligent backtracking is implemented with little (if any) overhead
incurred at run-time, and results in the elimination of unnecessary run-time
processing.
The rule for r also serves to illustrate an additional optimization utilized
by the compiler with respect to existential arguments. In the predicate
b2, the variable Z is a don't care or existential variable. Therefore, the
assignment of a value to Z is unnecessary. While this might seem an inconsequential
optimization, experience has shown that the avoidance of a single
assignment in the innermost loop can have a great influence on execution
time. Again, compile-time analysis has avoided unnecessary overhead at
run-time.
3.2.5 Implementation of Recursion, Updates and Choice
Above, we have discussed backtracking assuming a pipelined execution a la
Prolog. In order to efficiently compile some of the advanced constructs of
LDL, additional execution strategies, i.e. materialized, lazy materialized,
lazy pipelined, snapshot and stack-based executions, must be used. These
different execution methods along with their respective advantages and disadvantages
are described in detail in [CGK89b]. The LDL code generator
is capable of selectively applying any execution strategy (chosen by the optimizer)
to any predicate in the rule set.
2 It is interesting to note that if the query were ?r(X,_) such that the variable Y was
existential with respect to the rule for r, then the get-next backtrack point for that rule
would be the predicate b1.
Moreover, some language features
dictate the appropriate execution strategy that must be applied. Set grouping
and recursion are examples where materialization is essential to a correct
execution. Full materialization, however, does not allow for selection pushing
and is, therefore, very inefficient in the presence of bound arguments.
Therefore, a lazy materialized execution is applied such that the bindings
can be utilized. Additionally, for recursion, rewriting strategies are employed
at compile-time which recast the recursion into a form that guarantees efficient
execution via fixpoint operators. The magic set rewriting method,
which uses the lazy materialized execution strategy, is applied when there is
a possibility of cyclic data. The user can compile with an option which states
that there is no need to detect cycles, in which case the compiler can choose a
stack-based implementation of the counting method for better performance.
With this approach, a pipelined or lazy-pipelined execution strategy can
be employed. Hence, an appropriate execution strategy can be chosen in
context at compile-time to ensure an efficient run-time execution.
Because the semantics of LDL ascribe a dynamic logic interpretation
to updates [NaKr], a snapshot may be required for every update operation.
The compiler does, however, recognize instances where snapshots are not
necessary and, thus, sequences of updates can be collapsed.
The implementation of LDL's nondeterministic choice construct requires
materialization to store the functional dependencies. In the following rules,
a table with X and Y values will be materialized due to the choice construct.
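(The rules below are an illustrative sketch: the base predicates b and c and the
comparison goal in the rule for r are assumptions, while choice((X),(Y)) denotes
the choice construct asserting that Y is functionally dependent on X.)
r(X) <- p(X,Y), X < Y.
p(X,Y) <- b(X), c(Y), choice((X),(Y)).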
The chosen Y value for a particular X will only be committed, however, at
the success of the query. In the rule for r, it is possible that the goal involving X
and Y fails, resulting in backtracking into the rule for p and obtaining
a new choice, i.e., a new value for Y. This may be contrasted with the Prolog
cut with which a bad choice will result in failure for the query 3 . After
values have been committed, the materialized table can be used to avoid
unnecessary recomputation. Thus, after the binding for X is obtained from
the predicate b, a check is performed to determine if a value for Y has already
been committed and, if so, the remainder of the rule need not be executed.
Again, compile-time techniques have been used to reduce the computation
effort at run-time.
3 The Prolog cut also does not provide for functional dependencies to be expressed.
3.3 The Fact Manager
The fact manager provides the run-time environment for an LDL program.
It supports LDL objects, such as atoms, sets and lists, as well as database
objects, such as tuples and base and derived relations. In the current im-
plementation, all objects are kept in virtual memory.
The LDL data types are directly supported by the fact manager, which
implements them as C abstract data types. That is, the fact manager provides
definitions as well as a set of routines that operate on objects
of these types. This is the level of abstraction maintained by the translator.
The fact manager itself, on the other hand, is free to take advantage of the
data representation for the sake of efficiency. For example, complex objects
are stored as one-dimensional arrays, where the first (zeroth in C) component
is the functor name. The function fm_get_functor_arg(object,i) is
used by the translator to select the i-th component of a complex object. The
fact manager implements this in-line (i.e., in the preprocessor) as the array
lookup object[i]. Similarly, the fact manager stores sets as sorted arrays,
so that set operations such as union and intersection can be implemented
efficiently.
Efficient support for base and derived relations is provided at the tuple
level, with calls such as fm_get_first and fm_get_next. A key consideration
in the design of the fact manager was the number of operations performed at
the inner-most loop of an execution (i.e., nested join); for example, getting
the next tuple from a base relation and post-selecting for bound arguments.
Thus, relations are stored so that the call to fm_get_next is reduced to
following a linked list, and is hence suitable for in-line implementation. This
is possible, because the database is kept in-memory, thus it is never necessary
to access the next tuple from disk.
In order to speed up equality comparisons, used in post-selection, for
example, each object in the database is assigned a unique representation
that can be compared using the hardware (integer) compare instruction. In
the case of numeric constants, the unique representation is quite natural.
For strings and complex objects, the memory address of the actual object
is used as the unique representation. Whenever a new complex object is
created, the fact manager guarantees this address is unique by first checking
whether the object already exists, an efficient operation, since all objects
are kept in memory. This unique representation can also be used by the
fact manager to perform other database operations more efficiently. For
example, when an index is used, the hash function operates directly on
the unique representation rather than the LDL object itself - this can
be a substantial savings, since LDL objects can be arbitrarily complex.
Moreover, once a bucket is selected, searching to find the matching tuples
involves an equality comparison, so the unique representation is exploited
here as well. Intuitively, the unique representation allows the fact manager
to reduce the cost of subsequent index lookups by partially hashing an object
when it is created.
Derived relations are used by the translator to support some LDL language
features, such as recursion and grouping. Recursion can be implemented
using a semi-naive fixpoint operation, after rewriting for magic sets,
etc., has taken place. Thus, the efficient execution of recursion depends on
the efficient implementation of the semi-naive operation. Therefore, the fact
manager supports this operation directly by partitioning recursive relations
into delta and cumulative components, and returning tuples only from the
delta component when a semi-naive scan is desired. Since tuples are inserted
sequentially, the delta component is implemented easily by maintaining a
"high-water" mark. Similarly, the fact manager provides efficient support
of grouping by converting a relation into a set, given a pattern describing
the specific "group-by" operation desired.
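For example, a grouping rule of the following form (the predicate names are
illustrative, and <C> is intended as LDL's set-grouping construct in the rule head)
would be supported by converting the parent relation into, for each P, the set of
associated C values:
children(P, <C>) <- parent(P, C).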
4 Experience
4.1 Experience in Using the Language
Since LDL was designed to be both a query language and a rule-based application
language, we need to evaluate its functionality and usability starting
from these two domains.
An independent comparison of LDL as a database query language suggested
that all but the simplest queries are easier to express in LDL than in
SQL. This hardly represents an endorsement of LDL, since the inordinate
difficulty of expressing sophisticated queries in SQL is well-known. Yet, our
experience suggests that even the most complex of queries can be readily
expressed as short LDL programs. This is consistent with our experience
that in LDL, any distinction between complex queries and simple applications
is arbitrary and blurred. We found it easy to rapidly develop complex
database applications, including the "Computer Science Genealogy" [NaTs]
and programs for parts explosion, inventory control, shop scheduling, among others.
The other side of the coin involves comparing LDL with other rule-based
systems. As our reader may have noticed, a coarse description of the LDL
compiler is that it maps the functionality of a backward chaining system
(top-down) into the mechanisms of forward chaining (bottom-up). Indeed,
we felt that the former is conducive to more expressive and powerful languages,
while the second is conducive to more efficient implementations
in the database context. Thus, programming in LDL is more similar to programming
in Prolog than in OPS5. Yet, the differences between LDL and
Prolog are significant, and often baffling to experienced Prolog programmers.
Prolog is more powerful than LDL in many respects, such as built-in
predicates, including metapredicates. Moreover, Prolog variables can be
instantiated in a dynamic fashion - e.g., different goals instantiate variables
in a complex term. LDL is more restrictive since, although goals can be
reordered at compile time, the execution of a goal is assumed to bind all
its variables. Also, the fact that Prolog streams through one answer at a
time provides the programmer with more opportunities for fine control than
in LDL.
On the other hand, LDL provides a more structured programming para-
digm, a cleaner syntax and semantics, and an optimizer that excuses the user
from thinking too hard about an execution sequence. The benefits become
apparent in challenging areas such as non-determinism and recursion. For
instance, a recursive procedure to generate all integers between zero and a
given integer K can be expressed as follows in LDL:
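(The sketch below is illustrative and mirrors the definition sketched for the safety
example of Section 2.4.2; the predicate and variable names are assumptions.)
int(K, 0) <- K >= 0.
int(K, J) <- int(K, I), I < K, J = I + 1.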
This represents a very natural rendering of Peano's inductive definition
of integers, augmented with a condition on K in the second rule to ensure
termination, and one in the first rule to ensure that no answer is returned
for a negative K. A second formulation is also possible in LDL, as follows:
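(Again an illustrative sketch under the same assumptions; this formulation recurs
on K itself rather than on the generated integer.)
int(K, K) <- K >= 0.
int(K, J) <- K > 0, K1 = K - 1, int(K1, J).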
This is a less clear and intuitive definition, but it is the only one that can
be handled by Prolog (the equal signs would also have to be replaced by 'is').
Also, when writing a recursive predicate to traverse a graph with possible
cycles, the Prolog programmer must make provisions for termination (e.g.,
carrying around in a bag all answers produced so far). Cycles can be easily
handled by the LDL compiler through a specific option.
Finally, the ability to efficiently store partial results that can be retrieved
by later computations is a major plus for LDL, and so is the ease of
dealing with externals and modules.
Therefore, a plausible argument can be made for the ease-of-use of LDL
over Prolog; but this is hardly a reason for jubilation. Much more work
is needed to bring the system to a level of usability and ease-of-use that
will entice non-professional programmers to develop complex applications,
in analogy to what many 4GL users now do with simple applications. We
are currently working on two major extensions directed toward enhancing
the ease of use. One is a debugger that, given the nature of the system,
is tantamount to an answer justification capability. A traditional debugger
that retraces the execution of the program would be of little help to the
unsophisticated user, since the compiler and optimizer completely transform
the original program. What is planned instead is an answer justification
facility capable of carrying out a dialogue with a user who asks questions
such as "Why did you (or did you not) return this answer?" and through
this dialogue directing the user to the incorrect rule or missing fact that was
the source of the problem. We also plan to add visual interfaces both for
data entry and display and for program visualization. While in procedural
languages, the focus of visualization is on the changes to the program state,
for a declarative language such as LDL the focus is on displaying the static
relationships defined by the rules.
We now briefly describe some aspects that affect the performance of
the current implementation of LDL. One important feature of LDL is the
elimination of duplicate results in the processing of recursion - that is, an
"all answers" as opposed to "all proofs" approach. The duplicates need
to be eliminated in certain cases to guarantee termination, such as when
traversing a cyclic graph. Moreover, the elimination of duplicates can speed
up the execution in many cases. For example, in the computation of the
same generation query, it was discovered that removing duplicates resulted
in a major performance improvement, since for most siblings in the database,
there are two ways to prove their relation (through the father or mother), and
this becomes even more significant as more distant relations (e.g., through
great-grandparents) are explored. A timing comparison using a database of
500 tuples showed that the system computed same generation in roughly 4
seconds, whereas Quintus Prolog needed over 2 minutes, resulting in a ratio
of over 1:30.
On the other hand, there are also recursive queries where no duplicates
are ever generated, for example, when appending two lists. In these queries,
the overhead of duplicate elimination is wasted, hence the LDL implementation
does not compare favorably with, say, Prolog. In particular, for list
append, we found a ratio of between 6:1 and 10:1 in favor of Prolog. Another
factor contributing to this result is the uniqueness check performed at
the creation of each new object, i.e., temporary list. When this check was
removed, the ratio was reduced to 2:1.
4.2 LDL Applications
In this section we will report on the experience that we have gained with the
LDL system so far. We recognized that the only way in which the utility of
this new technology can be assessed is by application development. In this
process it is useful to distinguish between two classes of applications:
ffl "Old" applications such as database ones that have traditionally been
implemented by a procedural application program with embedded
query calls to the underlying database.
ffl "New" applications. Applications, that thus far were never implemented
at all or, if they were implemented, then this was accomplished
without any use of database technology.
As described in the previous section, the experience with traditional
database applications has been positive. Here we will concentrate on two
promising new application areas; these are data dredging and harnessing
software.
4.2.1 Data Dredging
This is a class of applications in which the source of data is typically (but
not exclusively) a very large set of empirical observations or measurements,
organized into one or more base relations. Additional data may be added
over time but existing data are seldom updated. In fact, they are only
updated when found to be erroneous. Typical sources are measurement
data of empirical processes or data recorded during simulation experiments.
The problem is to interpret this data, i.e., to use it for the verification of
certain hypotheses or for the formulation of new concepts. In both
cases the hypotheses or concepts may be conceptually far removed from the
level of the recorded data, and their crystallization or definition entails an
interactive human/system process as follows:
1. Formulate hypothesis or concept;
2. Translate (1) into an LDL rule-set and query;
3. Execute query against the given data and observe the results;
4. If the results do not verify or deny (1), then reformulate and go to (2);
otherwise exit.
Obviously, the decision to exit the process is entirely subjective and is decided
by the programmer. At this stage he/she may have either decided that
the concept is now properly defined or that the data does not support this
concept and that it should be abandoned or tried out with different data.
While this process could be carried out using any programming language,
the use of LDL has the advantage that the formulation can be done at an
abstract level and hence, the "iteration time" through this process is significantly
shortened as compared to the traditional way, in which each iteration
involves the usual programming/compile/debug cycle.
We experimented with data dredging in two different application do-
mains: computer system performance evaluation and scientific data analysis
in the area of Microbiology. The first application [NaTs] involved the
formulation of the "convoy" concept in a distributed computing system. In-
tuitively, a convoy is a subset of the system entities (processes, tasks) that
move together for some time from one node to another in the network of
processors and queues. The recorded data is low-level and consists of ar-
rival/departure records of individual entities at certain nodes. The concept
was defined in LDL using a small set of rules, and actual instances were
detected in the simulation data that were used. The second instance of data
dredging involves the identification of DNA sequences from (very) low-level,
digitized autoradiographs that record the results of the experiments
performed in the sequencing of the E. coli bacterium [GENE88]. Again,
the task is to extract the definitions for the four DNA bases A,C,G,T from
this low-level, noisy and often imperfect data. A large number of heuristics
need to be applied in this case, and the use of LDL has the additional advantage
that it is simple to add special definitions, to be used within
narrow contexts, to the general definitions. It is thus relatively simple to
add "smarts" to the system as the experience with its use increases.
4.2.2 Harnessing Software
We mentioned that external C procedures can be used in the definition
of LDL programs. In the LDL context, these are regarded as evaluable
predicates. While we normally expect the use of external code to be the
exception rather than the rule, reserved for special purposes, e.g., graphical
routines, we can think of situations that lie at the other extreme: the bulk
of the software is written in standard, procedural code and only a small
fraction of it is rule-based and encoded in LDL. In this situation the rule-set
forms the "harness" around which the bulk of the code is implemented.
The rule portion forms a knowledge base that contains:
1. The definition of each of the C-module types used in the system.
2. A rule set that defines the various ways in which modules can be
combined: export/import relationships between modules, constraints
on their combinations, etc.
The advantage of this organization is that the knowledge base can be
used in decisions that pertain to the reuse of software. Subsets of instances
of the existing module types can now be recombined, subject to the rule-
restrictions, to support different task-specifications. An added advantage is
that each of the individual module-types can be verified using any of the
existing verification methods and their global behavior is controlled by the
rule-set.
We are currently experimenting with this application type in the domain
of Banking software.
5 Conclusion
Perhaps the most significant result of the LDL experience is proving the
technical feasibility of building a logic-based application language as an extension
of relational database technology. The realization of this objective
has required solving technical challenges on many fronts: language design
and formal definition, compilation, optimization and system implementa-
tion. In the five years since the beginning of the project, problems have
been solved through the combined efforts of a group of six to eight people.
Perhaps the most encouraging aspect of the whole experience is that, while
a wide spectrum of interests and backgrounds - from a very theoretical one
to a very applied one - was represented in the group, the effort remained
focused and generated a remarkable degree of synergism. The result is a
system that supports the theoretical declarative semantics of the language
completely and efficiently.
Our experience suggests that it is reasonably easy to develop applications
using the LDL programming paradigm. But this conclusion is based
on a small sample of forward-looking programmers who are leaning toward
declarative languages and logic programming. Whether the language,
incorporating concepts such as recursion, can attract large throngs of mainstream
practitioners is still to be seen. But it is also clear that LDL has much
more to offer than current SQL-based 4GLs that are widely used for rapid
prototyping [DM89]. Thus, LDL shows some real potential as a powerful
rule-based language for the rapid development of data intensive applications
and applications in the C environment. A main thrust of our current efforts
is to improve the usability of the system by supporting interfaces for visual
programming and answer justification.
Acknowledgments
The authors would like to recognize the contribution of the following persons:
Brijesh Agarwal, François Bancilhon, Catriel Beeri, Charles Kellogg, Paris
Kanellakis, Tony O'Hare, Kayliang Ong, Arshad Matin, Raghu Ramakrish-
nan, Domenico Saccà, Oded Shmueli, Leona Slepetis, Peter Song, Emilia
Villarreal, Carolyn West.
--R
" Universality of Data Retrieval Lan- guages,"
"Towards a Theory of Declarative Knowledge,"
"Workshop on Database Programming Languages,"
"Naive Evaluation of Recursively defined Relations"
"A Differential Approach to Query Optimization in Recursive Deductive Databases"
"An Amateur's Introduction to Recursive Query Processing Strategies,"
"Sets and Negation in a Logic Data Language (LDL1)"
"Bound on the Propagation of Selection in Logic Pro- grams"
"Bound on the Propagation of Selection into Logic Programs"
"Set Constructors in a Logic Database Language"
"Magic sets and other strange ways to implement logic programs"
"On the Evaluation Strategy of Educe,"
"Interfacing Relational Databases and Prolog Efficiently,"
"An Overview of the LDL System,"
"The SALAD Cookbook: A User's Guide,"
"Using Modules and Externals in LDL,"
"Abstract Machine for LDL,"
"Towards an Open Architecture for LDL,"
"Making SMALLTALK a Database Sys- tem,"
"FAD-A Database Programming Language. Rev 2"
"The Rapid Prototyping Conundrum"
"Modelling Queries and Updates in Deductive Databases"
"Logic and Databases: a Deductive Approach,"
"MAPPING OUR GENES Genome Projects: How Big, How Fast?"
"First-Order Dynamic Logic,"
"Query Optimization by Simulated Annealing"
"An Optimizing Prolog Front End to a Relational Query System,"
''Optimizing the Rule Data Interface in a KMS
"Prolog and Relational Databases for 5th Generation Computer Systems,"
"Optimization of Non-Recursive Queries,"
"Non-Deterministic Choice in Data- log,"
"Towards a Real Horn Clause Lan- guage,"
"A Frame-work for Testing Safety and Effective Computability,"
"Optimization in a Logic Based language for Knowledge and Data Intensive Applications,"
"A Prolog Database System,"
Foundations of Logic Programming
"Semantics of Updates in logic Pro- gramming"
"A Logic for Negation in Database Systems,"
"A Logical Language for Data and Knowledge Bases,"
"On the Semantics of Stratified Deductive Databases and Logic Programs"
"Optimizing Existential Datalog Queries,"
"Data Abstraction, Views and Updates in RIGEL"
''On the implementation of a simple class of logic queries for databases
''Implementation of Recursive Queries for a Data Language based on Pure Horn Logic
''The Generalized Counting Method for Recursive Logic Queries
''Differential Fixpoint Methods and Stratification of Logic Programs
"Some High Level Language Constructs for Data of Type Relations"
"Access Path Selection in a Relational Database Management System,"
"Set Grouping and Layering in Horn Clause Programs,"
"Rewriting of Rules Containing Set Terms in a Logic Data Language (LDL),"
"Optimization of Large Join Queries"
"LDL: A Logic-Based Data Language,"
Database and Knowledge-Based Systems
"The semantics of Predicate Logic as a Programming Language"
"An Abstract Prolog Instruction Set,"
"Prolog: a database query language for all seasons,"
"The Representation and Deductive Retrieval of Complex Objects,"
"Safety and Compilation of Non-Recursive Horn Clauses,"
--TR
On the implementation of a simple class of logic queries for databases
Naive evaluation of recursively defined relations
Magic sets and other strange ways to implement logic programs (extended abstract)
An amateur''s introduction to recursive query processing strategies
On the evaluation strategy of EDUCE
Prolog: a database query language for all seasons
Bounds on the propagation of selection into logic programs
A generalization of the differential approach to recursive query evaluation
Query optimization by simulated annealing
Foundations of logic programming; (2nd extended ed.)
Optimization of large join queries
A framework for testing safety and effective computability of extended datalog
Towards a theory of declarative knowledge
On the declarative semantics of deductive databases and logic programs
Parallelism in bubba
Object-oriented concepts, databases, and applications
A logical language for data and knowledge bases
Towards an open architecture for LDL
Set constructors in a logic database language
Rewriting of rules containing set terms in a logic data language LDL
Optimizing existential datalog queries
Database updates in logic programming
Some high level language constructs for data of type relation
The Semantics of Predicate Logic as a Programming Language
Logic and Databases: A Deductive Approach
PROLOG Database System
Universality of data retrieval languages
Access path selection in a relational database management system
Data abstraction, views and updates in RIGEL
Making smalltalk a database system
Optimization in a Logic Based Language for Knowledge and Data Intensive Applications
Optimizing the Rule-Data Interface in a KMS
Optimization of Nonrecursive Queries
Towards a Real Horn Clause Language
--CTR
Arie Segev , J. Leon Zhao, A Framework for Join Pattern Indexing in Intelligent Database Systems, IEEE Transactions on Knowledge and Data Engineering, v.7 n.6, p.941-947, December 1995
Qing Zhou , Ligong Long, SEDatalog: a set extension of datalog, Intelligent information processing II, Springer-Verlag, London, 2004
Raghu Ramakrishnan , Divesh Srivastava , S. Sudarshan , Praveen Seshadri, Implementation of the CORAL deductive database system, ACM SIGMOD Record, v.22 n.2, p.167-176, June 1, 1993
Arie Segev , J. Leon Zhao, Efficient maintenance of rule-derived data through join pattern indexing, Proceedings of the second international conference on Information and knowledge management, p.194-205, November 01-05, 1993, Washington, D.C., United States
Raghu Ramakrishnan , Divesh Srivastava , S. Sudarshan, CORAL - Control, Relations and Logic, Proceedings of the 18th International Conference on Very Large Data Bases, p.238-250, August 23-27, 1992
Linda Sirounian , William I. Grosky, A Knowledge Model For Unifying Deductive and Non-Deductive Heterogeneous Databases, IEEE Transactions on Knowledge and Data Engineering, v.7 n.1, p.82-105, February 1995
Jeffrey D. Ullman , Carlo Zaniolo, Deductive databases: achievements and future directions, ACM SIGMOD Record, v.19 n.4, p.75-82, Dec. 1990
Jiawei Han, Chain-Split Evaluation in Deductive Databases, IEEE Transactions on Knowledge and Data Engineering, v.7 n.2, p.261-273, April 1995
Alanoly J. Andrews , Nematollaah Shiri , Laks V. S. Lakshmanan , Iyer N. Subramanian, On implementing SchemaLog
Arie Segev , J. Leon Zhao, Data Management for Large Rule Systems, Proceedings of the 17th International Conference on Very Large Data Bases, p.297-307, September 03-06, 1991
R. G. G. Cattell, Next-generation database systems, Communications of the ACM, v.34 n.10, p.30-33, Oct. 1991
Jiawei Han , Ling Liu , Zhaohui Xie, LogicBase: a deductive database system prototype, Proceedings of the third international conference on Information and knowledge management, p.226-233, November 29-December 02, 1994, Gaithersburg, Maryland, United States
Michael Stonebraker , Jim Frew , Kenn Gardels , Jeff Meredith, The SEQUOIA 2000 storage benchmark, ACM SIGMOD Record, v.22 n.2, p.2-11, June 1, 1993
Haixun Wang , Carlo Zaniolo, Nonmonotonic reasoning in LDL++, Logic-based artificial intelligence, Kluwer Academic Publishers, Norwell, MA, 2000
R. Ramesh , Weidong Chen, Implementation of Tabled Evaluation with Delaying in Prolog, IEEE Transactions on Knowledge and Data Engineering, v.9 n.4, p.559-574, July 1997
Konstantinos Sagonas , Terrance Swift , David S. Warren, XSB as an efficient deductive database engine, ACM SIGMOD Record, v.23 n.2, p.442-453, June 1994
Carlo Zaniolo, Data and knowledge in database systems: deductive databases, Handbook of data mining and knowledge discovery, Oxford University Press, Inc., New York, NY, 2002
M. Stonebraker, The Integration of Rule Systems and Database Systems, IEEE Transactions on Knowledge and Data Engineering, v.4 n.5, p.415-423, October 1992
F. Nihan Kesim , Marek Sergot, A Logic Programming Framework for Modeling Temporal Objects, IEEE Transactions on Knowledge and Data Engineering, v.8 n.5, p.724-741, October 1996
Hasan M. Jamil, Belief reasoning in MLS deductive databases, ACM SIGMOD Record, v.28 n.2, p.109-120, June 1999
Mengchi Liu, Design and Implementation of the ROL Deductive Object-Oriented Database System, Journal of Intelligent Information Systems, v.15 n.2, p.121-146, Sept./Oct. 2000
Vincenzo Ambriola , Giovanni A. Cignoni, A distributed virtual machine to support software process, ACM SIGSOFT Software Engineering Notes, v.20 n.1, p.85-89, Jan. 1995
Antonella Guzzo , Domenico Saccà, Semi-Inflationary DATALOG: A declarative database language with procedural features, AI Communications, v.18 n.2, p.79-92, April 2005
Jess M. Almendros-Jimnez , Antonio Becerra-Tern, Database query languages and functional logic programming, New Generation Computing, v.24 n.2, p.129-184, January 2006
Raghu Ramakrishnan , Divesh Srivastava , S. Sudarshan , Praveen Seshadri, The CORAL deductive system, The VLDB Journal The International Journal on Very Large Data Bases, v.3 n.2, April 1994
Alexandra Poulovassilis , Carol Small, A Domain-theoretic Approach to Integrating Functional and Logic Database Languages, Proceedings of the 19th International Conference on Very Large Data Bases, p.416-428, August 24-27, 1993
Nicola Leone , Pasquale Rullo , Antonella Mecchia , Giuseppe Rossi, A Deductive Environment for Dealing with Objects and Nonmonotonic Reasoning, IEEE Transactions on Knowledge and Data Engineering, v.9 n.4, p.539-558, July 1997
Paolo Ciancarini, Coordinating rule-based software processes with ESP, ACM Transactions on Software Engineering and Methodology (TOSEM), v.2 n.3, p.203-227, July 1993
Alexandra Poulovassilis , Carol Small, A Functional Programming Approach to Deductive Databases, Proceedings of the 17th International Conference on Very Large Data Bases, p.491-500, September 03-06, 1991
Yuh-Ming Shyy , Javier Arroyo , Stanley Y.W. Su , Herman Lam, The design and implementation of K: a high-level knowledge-base programming language of OSAM*.KBMS, The VLDB Journal The International Journal on Very Large Data Bases, v.5 n.3, p.181-195, August 1996
Vincenzo Ambriola , Reidar Conradi , Alfonso Fuggetta, Assessing process-centered software engineering environments, ACM Transactions on Software Engineering and Methodology (TOSEM), v.6 n.3, p.283-328, July 1997
Barbara Catania , Elisa Bertino, Static Analysis of Logical Languages with Deferred Update Semantics, IEEE Transactions on Knowledge and Data Engineering, v.15 n.2, p.386-404, February
Guy M. Lohman , Bruce Lindsay , Hamid Pirahesh , K. Bernhard Schiefer, Extensions to Starburst: objects, types, functions, and rules, Communications of the ACM, v.34 n.10, p.94-109, Oct. 1991
Shalom Tsur, Deductive databases in action, Proceedings of the tenth ACM SIGACT-SIGMOD-SIGART symposium on Principles of database systems, p.142-153, May 29-31, 1991, Denver, Colorado, United States
using deductive object-relational databases in CAD, Software - Practice & Experience, v.33 n.2, p.143-172, 1 February
Mengchi Liu, Deductive database languages: problems and solutions, ACM Computing Surveys (CSUR), v.31 n.1, p.27-62, March 1999
Mihalis Yannakakis, Perspectives on database theory, ACM SIGACT News, v.27 n.3, p.25-49, Sept. 1996 | knowledge based systems;LDL language;runtime environment;compiled queries;relational database;knowledge-based applications;optimization techniques;high level languages;relational databases;compilation techniques;stored data;target query execution plans;logic programming technologies;logic programming;LDL queries;logic data language;LDL approach;declarative logic-based language;program compilers;LDL system prototype |
627416 | A Knowledge-Based Environment for Modeling and Simulating Software Engineering Processes. | The design and representation schemes used in constructing a prototype computational environment for modeling and simulating multiagent software engineering processes are described. This environment is called the articulator. An overview of the articulator's architecture identifying five principal components is provided. Three of the components, the knowledge metamodel, the software process behavior simulator, and a knowledge base querying mechanism, are detailed and examples are included. The conclusion reiterates what is unique to this approach in applying knowledge engineering techniques to the problems of understanding the statics and dynamics of complex software engineering processes. | Introduction
Modeling the process of software engineering represents a promising approach toward understanding
and supporting the development of large-scale software systems. The software
process is the collection of related activities, seen as a coherent process subject to reason-
ing, involved in the production of a software system [Wil86]. A software process model is
a prescriptive representation of software development activities in terms of their order of
execution and resource management. A software process meta-model is a representation formalism
which provides necessary components to create various types of software process
models [Wil86].
A meta-model of the software process should possess the capability to include major properties
of contemporary software development practice. Recent evaluations on software process
models [CKSI87, SFG85] suggest that effective software process models should address
organizational and technical dimensions including 1) detailed descriptions of software pro-
cesses, products and settings; 2) their interactions; 3) management and exception handling
during the performance of software processes; and 4) project-specific
processes. We present a meta-model which uses a knowledge representation
language to specify all these aspects and further provides mechanisms to investigate
the interactions among these dimensions.
An automated modeling environment for software development should be powerful enough
to support model validation and verification. By simulating a specified software process, its
environment and its users can collectively detect faults, inconsistencies, or anomalous behavior
in a process prescription. Emerging conflicts in time schedule and resource allocation,
for example, are some common anomalies in multi-agent process plans [DLC87]. Complex
faults, on the other hand, may concern the configuration of task decomposition and organizational
settings as well. The environment should also assist in determining possible solutions
for contingencies encountered in task execution. These solutions are based on particular resource
and knowledge configurations; hence they should be setting-specific, project-specific,
agent-specific and time-specific. As such, simulating task execution enables a user
to predict development progress on a more realistic basis and to compare different process
models. We describe the design of such an environment which utilizes our software process
meta-model.
In the next section, we provide some background to our approach. In Section 3, we present
the system architecture of the Articulator and discuss important issues covered in the design
and use of the Articulator. Following this, we will discuss some of the subsystems in turn:
Section 4 discusses the knowledge base, which stores our meta-model of software processes;
Section 5 gives an account of the simulation of the Articulator meta-model; and Section 6
presents the query mechanism. We then conclude with a summary of novel contributions of
the Articulator project.
2 Background
As we noted earlier, there has been growing interest focused on the problem of modeling,
analyzing, and automating various aspects of software engineering processes [Sca88]. Wileden
[Wil86] suggested a modeling framework based upon use of a software process meta-model.
Osterweil followed with a paradigmatic approach which cast the software process meta-model
into what he called a process programming language - a language for programming
prescriptive process models into a software development environment [Ost87].
Since then, much research effort has been directed to the design and implementation of
languages for software process automation, and to the construction of more realistic mod-
els. For example, many researchers have introduced process language constructs including
rules and pattern matching [Kai88], behavioral patterns [Wil88], graphic finite-state machines
[HK89] and agent-task-product relations [Gar89]. But none provides a direct means
for querying the status or state of a modeled software process. Others including [SFG85]
and [HL88] use knowledge representation languages and deductive planning mechanisms for
software process modeling. But overall, these efforts lead to closed, single-agent (i.e., globally
controlled) systems. Further, with the exception of [HK89] and [Gar89], most efforts do not
explicitly reference or use empirical sources for their software process models.
Modeling and simulating complex organizational processes performed by people requires
an empirically based, multi-agent open systems framework [GS86, Hew86]. For instance,
[CKI88] and [BS89] are examples of recent empirical studies aimed at providing more realistic
descriptions of multi-agent software processes. But their modeling efforts have not been cast
in the form of a language or computational environment.
We seek to resolve these shortcomings in modeling and automating (i.e., simulating) software
processes. This allows us to identify what is new about our work presented here. Our
approach uses a software process meta-model derived from an established approach for empirical
studies of computing in organizational settings. In addition, it allows us to model
multi-agent software processes in an open systems manner, meaning that process conflicts
can arise that must be resolved locally (i.e., through agent-agent interactions), rather than
through automated global control. The environment supports the simulation of these multi-agent
process models. The meta-model, individual process models, and process simulation
traces can each be queried both directly and deductively. In the sections that follow, we will
describe each.
3 The Architecture and Users of the Articulator
The Articulator is a knowledge-based environment for studying software processes. It provides
a meta-model of software processes, an object-based language to specify models of software
processes and an automated simulation mechanism. The system architecture of the Articulator
consists of five subsystems (Figure 1): the knowledge base, the behavioral simulator,
the query mechanism, the instantiation manager, and the knowledge acquisition manager.
The Articulator has been prototyped over a two-year period using the KnowledgeCraft TM knowledge engineering environment on a TI Explorer II TM [Car86].
The knowledge base implements the Articulator meta-model by an object-based approach.
The meta-model consists of the web of resources and situations, which is a model of software development, and a representation of agents' task performance skills. The Software Process
Specification Language (SPSL) is a user interface enabling users to define customized software
process models based on the meta-model. The knowledge base is defined within an
object-based knowledge cluster. Section 4 provides more details of the Articulator knowledge
schema.
The instantiation manager manages the relationships between the meta-model, the customized
software process models and their instances. It maintains all these relationships
according to creation time and lines of inheritance, and retrieves the correct instance when requested. In contrast to conventional database systems, there is no explicit boundary among the meta-model, the software process models, and their instances. Any of them can be manipulated and modified whenever needed, with the modified version stored as an instance associated with the old one.
The behavioral simulator controls the simulation of a given software process model and
creates a process trajectory 2 over a development period. In this regard, behavioral simulation
is a symbolic execution of a software process model described in SPSL notation.
Mechanisms to perform software process activities are implemented within the behavioral
simulator. In terms of the representation of software processes, three types exist
in the Articulator: the prescriptive process model before execution, the simulated trajectory
of the prescription, and the descriptive recorded development history. Each of them serves
a different purpose in the Articulator, but bears the same form of representation. Section 5
provides a more detailed view of the behavioral simulator.
The query mechanism supports logical rules for various types of deductive queries. It
helps users access information in the Articulator efficiently. The information sources are the
knowledge base, i.e., the meta-model, the software process models and their instances. Using
object inheritance techniques and backward-inference mechanisms, the query mechanism
2 A process trajectory is a sequence of snapshots of software development over a period of time.
Figure 1: The System Architecture of The Articulator. (The figure shows the knowledge base, the instantiation manager, the query mechanism, and the knowledge acquisition manager connecting software process models S.P. MODEL 1 through S.P. MODEL M to an advanced CASE environment and to project managers and software process experts.)
reasons about information and knowledge to determine answers to several types of questions.
See Section 6 for a description of the query mechanism.
The knowledge acquisition manager is an interface for the Articulator to get a software
process model and associated data. The knowledge acquisition manager takes a structured
description of agents, tasks, and resources as inputs, then translates it into a configured
software process model. It also assesses the information gathered from software development
projects controlled by the Articulator and stores it for later use. An automated knowledge
acquisition manager for interactive capturing of model refinements and real-time software
process data has not been implemented yet, but simple mechanisms for model and data
input are currently in use.
Users of the Articulator fall into three categories: process researchers, project managers,
and software developers. Software process researchers study software process models in order
to identify ones that are highly efficient, satisfy different performance requirements, or reveal
subtle software process dynamics requiring further study. These users define various types
of software process models, test them by simulation, compare the simulation results with
observed development histories, and refine them according to certain criteria. This is an
iterative, incremental process; it continues until an acceptable model of software processes
is achieved which fits in a particular infrastructure. These users may potentially modify the
meta-model as well when it is necessary to incorporate new features into it.
Software project managers select or configure an existing software process model which
suits their project needs and essentially provides guidance (a plan) for how to carry it
out successfully. Managers use the Articulator to get access to a software process model,
instantiate it according to their local project situations, simulate and refine it in order to
create a plan for development, and realize it in their own organizational settings. When they
encounter unexpected problems during a planned development project, they can consult the
Articulator to find plausible solutions and to evolve the model based on the solution. This is
similar to the suggested use of the Callisto system in computer manufacturing processes
[SFG85].
Software developers use the Articulator through a CASE environment [Sca88]. In this
way, the Articulator helps to coordinate their development activities according to a prescriptive
model, and serves as both an active agenda mechanism and an information exchange
center. At the same time, all the development activities are recorded into a history of development
and can be fed back to managers in order to monitor development progress. This
history will then be used by software process researchers as an empirical source of observation
on the practical character of the software process model in use. It can also be used in
validating the model or in making modifications to it.
4 The Articulator Meta-model
This section presents the Articulator meta-model stored in the knowledge base. It is an
object-based representation of a software development infrastructure that consists of the
web of resources and situations and the agent's task performance skills.
4.1 The Web of Resources and Situations
The web of resources and situations describes an infrastructure of developers, organizations,
tasks, and other resources, through which software systems are engineered. It is intended
to provide an articulate view of the many aspects of software engineering processes within a
single formalism. The theoretical scheme underlying the Articulator meta-model is the web
model of computing introduced by Kling and Scacchi [KS82]. This web model relies upon
empirical studies to make explicit a variety of connections between computing technologies,
artifacts, and activities, together with their embedding social situations and organizational infrastructure. It also focuses equal attention on how people and their computing systems
interact, cooperate, compete, and conflict with each other in the course of their work.
In computational form, the web consists of clusters of attributed objects and relations
linking them, together with various processing mechanisms. Each of the objects is defined as
a model of a type of the components in the software process and represented as a schematic
class. These objects are further divided into several subclasses. A subclass is divided repeatedly
until an empirically observable level of detail is reached. Furthermore, every schematic
class has a set of attributes specifying its own properties, a set of relations linking to other
classes, and may have many instances to inherit its defined properties and relations with
their defined values. Several high level classes in the web of resources and situations are
shown in Figure 2. We briefly discuss its main components and their relationships here 3 .
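To make the computational form concrete, a minimal sketch of such a schematic-class store is shown below (in Python; the class and attribute names are illustrative assumptions of ours, not the Articulator's KnowledgeCraft schemata):

class SchematicClass:
    """Sketch of an attributed object in the web (illustrative names only)."""
    def __init__(self, name, is_a=None, attributes=None, relations=None):
        self.name = name                              # e.g. "AGENT", "TASK", "Mary"
        self.is_a = is_a                              # parent class along the IS-A relation
        self.attributes = dict(attributes or {})      # own attribute values
        self.relations = dict(relations or {})        # relation name -> list of target names

    def get_attribute(self, key):
        # Instances inherit attribute values from their ancestors along the IS-A chain.
        node = self
        while node is not None:
            if key in node.attributes:
                return node.attributes[key]
            node = node.is_a
        return None

resource = SchematicClass("RESOURCE", attributes={"current-status": "idle"})
agent = SchematicClass("AGENT", is_a=resource)
mary = SchematicClass("Mary", is_a=agent,
                      relations={"individual-in-collective-agent": ["Company-F"]})
print(mary.get_attribute("current-status"))           # inherited from RESOURCE: "idle"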
The top level abstraction of the Articulator meta-model consists of three major objects:
resources, agents, and tasks. They are linked together through two relations: agents perform
tasks and tasks use resources. This abstraction captures our fundamental understanding of
software development processes, and in a larger sense, complex organizational activities.
A resource, as a model of general objects and products, portrays the general properties
of organizational objects and is the root of the Articulator meta-model in terms of the IS-A
relation. Accordingly, resources are objects used in tasks by agents. In the Articulator
meta-model, tasks consume and produce resources, which in turn alter the values of resource
3 The current implementation of the web represents more than 500 object classes and nearly 2000 relations.
Most object classes include 10 or so attributes. In addition, there are over 200 rules and procedures which
support behavioral simulation and query processing.
Figure 2: Part of The Web of Resources and Situations along the IS-A Relation. (Classes shown include AGENT, INDIVIDUAL AGENT, COLLECTIVE AGENT, ORGANIZATION, ROLE, INDIVIDUAL PRIMARY TASK, COLLECTIVE PRIMARY TASK, ARTICULATION TASK, ACCOMMODATION (INDIVIDUAL), and NEGOTIATION (COLLECTIVE).)
attributes. These attributes include, among others, name, current status, function description,
location, ownership, and usage in tasks and by agents.
An agent represents a collection of behaviors and associated attributes. An agent's behavior
emerges during the performance of tasks (including communications, accommodation,
and negotiation) given the agent's set of skills, available resources, affiliated agents, and organizational
constraints or incentives - that is, given the agent's circumstantial situation.
We use agents as a general model of developers, development teams, and organizations. We
also include development tools such as computers or software programs as a subclass of
agents.
An agent's ability to perform tasks is defined by its working load, its agenda, its selected
work style, and its working tasks together with its behavior controller (its "self"). An agent
may also have skill, experience, and knowledge of task performance. In order to perform a
task, an agent must possess the necessary resources and rights of information access. Living
in an organizational infrastructure, an agent may have affiliations with other organizations
and play different roles in different organizational situations [Gar89, KS82]. Besides these,
agents have a knowledge representation which specifies their potential behaviors, and this
behavior can also be dynamically simulated. These aspects will be discussed later.
There are several types of agents in the Articulator meta-model. Individual agents are
single entities, such as a single developer or a single machine. Collective agents, such as
teams and organizations, have infrastructures defined for a group of agents to work together,
thereby enlarging their efforts. In a collective agent, individual agents work cooperatively
or competitively to achieve their collective and individual goals. However, collective agents
can also be in conflict over how to achieve their goals, as well as over which goals are worth
achieving in what order.
A task models organizational work and development processes. Tasks represent situations
for work and processes in terms of a network of actions (i.e., operators) that agents perform to manipulate the web of resources and situations. A task, defined as a structural hierar-
chy, is used to represent both a semi-formal plan of the actual task before it is carried out (a
prescription) and the actual execution trajectory of the task after it has been done (a descrip-
tion). Both of these include a hierarchy of task decomposition and a non-linear performance
sequence. We model two types of organizational work: primary tasks and articulation tasks
[BS87], which distinguish development-oriented tasks from coordination-oriented tasks. The
hierarchy of task decomposition may include multi-level nested decomposition, iteration and
multiple selection. Levels of specification depend on user requirements and can be modified
as requested. At the bottom level of this hierarchy are actions. Actions are basic processing
units within the processing mechanism. Further, an action links to a procedural specifica-
tion, such as a LISP function or a forward-chaining mechanism, which propagates updates
through the current state (or instance) of the web of resources and situations.
Interesting properties of a task include: assigned and authorized performers; task hierarchy
and execution ordering; schedule; duration, deadline, start time and finish time; and
resources planned to be consumed or produced by the task.
The Articulator meta-model is an open system [Hew86]. It has the following special
characteristics:
- The boundary of the meta-model and its interface with the outside world are determinable, though not necessarily static. The meta-model, besides manipulating its own resources, communicates with the outside world. Such communication of the Articulator meta-model with the outside world is made possible through acquiring or providing resources.
- All the resources in the web have their own life cycle. Every instance of a resource is either created by some task or introduced by the outside world, persists for a period of time, and is then consumed by other tasks or exported to the outside.
- An agent's power to manipulate the web of resources and situations is limited and configurable. This manipulation power includes possession or control of resources, rights of information access, and rights of task performance. This power can be restricted by constraints over a period of time. On the other hand, it may be reconfigured at any time by authorized agents. In this way, centralized control, distributed control, or something in between can be modeled in the Articulator meta-model. Also, differences in relative power among interacting agents can give rise to conflicts in task performance. These conflicts can thus shift the task situation from a focus on performance to the resolution of conflict.
- The web of resources and situations is a densely interrelated infrastructure. By definition, any entity in the web is associated with many other objects through relations. With this kind of infrastructure, execution of a task can cause many implicit side effects besides its intended behavior. For example, in a development task, a manager agent may assign a task to a developer agent without allocating the necessary resources for task completion. This will not cause any problem in task assignment, but it will surely delay the task execution, since the developer agent will have to spend time finding the resources required for task execution. The consequences and implications of side effects are of interest because they resemble real situations in many ways.
Establishing a model of a software development process is made possible by using the
Articulator meta-model. Different types of software process models can be defined. For
example, a software production-process model, such as the Waterfall model [Gar89, HK89,
Ost87, Wil88] or the Automation model [Gar89], can be specified by the Articulator as
a hierarchy of software development activities and their suggested prescriptive execution
sequence. A software production-setting model can be viewed as a mixed task representation
of primary tasks and articulation tasks. As an example to be used later, we define a simple
working team here. A development team, called Team-A, belongs to Company F (which has three members: Mary, Joe, and Peter); the team itself consists of Mary and Joe. The team is responsible for the task of designing the
FOO system, which consists of two component tasks: architecture design and detail design.
This small example essentially shows a setting, a development team and a task assigned to
the team. Figure 3 gives a specification of the example in SPSL, while other details, such as the resource specification and the detailed process prescription, will be provided later.
;; Company-F is an organization in the model.
(define-object Company-F
  (is-a ORGANIZATION))

;; Team-A is a team in Company-F and has two members, Mary and Joe.
(define-object Team-A
  (is-a TEAM)
  (team-belong-to-organization Company-F))

(define-object Mary
  (is-a PEOPLE)
  (individual-in-collective-agent Company-F)
  (task-execution-strategy Finish-one-FIFS)
  (accommodation-strategy Switching))

(define-object Joe
  (is-a PEOPLE)
  (individual-in-collective-agent Company-F)
  (task-execution-strategy Finish-one-FIFS)
  (accommodation-strategy Waiting))

;; Task Design-FOO has two subtasks and is assigned to Mary and Joe.
(define-object Design-FOO
  (is-a TASK-CHAIN)
  (task-force-assigned-to-agent Joe Mary)
  (production-task-has-component Architecture-design Detail-design))

;; Subtask Architecture-design is assigned to Mary.
(define-object Architecture-design
  (is-a TASK-CHAIN)
  (task-force-assigned-to-agent Mary))

;; Subtask Detail-design is assigned to Mary and Joe.
(define-object Detail-design
  (is-a TASK-CHAIN)
  (task-force-assigned-to-agent Mary Joe)
  (task-force-has-predecessors Architecture-design))

LEGEND: lower-case words are reserved key terms; UPPER-CASE words are reserved object types; words starting with an upper-case letter are defined objects.

Figure 3: An SPSL specification of Team-A. Five classes of objects are defined in terms of their associated attributes. Other details are omitted for simplicity. The task specification performed by this team is given in Fig. 7.
4.2 Model of Agent's Task Performance Skill
An agent's behavior during the software process is the way it performs tasks, given its plans
and emerging circumstances. In other words, it is the trajectory of task execution that
constitutes the agent's behavior. The behavioral specification is a knowledge representation
of task performance skill [Str88]. An agent's task performance skill is represented according
to a three-level paradigm, where each level is a space specifying a particular type of knowledge
and operators to manipulate it. This three-level paradigm is similar in concept to other
multi-level problem-solving architectures, such as those in [Gen83, Ste81].
The domain space stores information and knowledge of an application domain. This is
an agent's personalized domain knowledge and is generally a subset image of the web. It is
limited by the agent's manipulation power, i.e. its possession of resources, its information
access and its rights of task performance. Operators in the domain space are actions that a
designated agent can perform within the application domain.
The task space stores operational knowledge of the manipulation and reasoning of domain
information and knowledge, i.e. the specification of tasks. In other words, operators in the
domain space are also objects in the task space. They are associated and configured to
create meaningful tasks. Other objects in the task space are entities used for the evolution
of these tasks. Operators in the task space are meta-actions that manipulate tasks in the
domain space. Also, meta-tasks (e.g. how to organize, staff and plan primary tasks) in the
task space are combinations of meta-actions.
The strategy space stores strategic knowledge which directs tasks of organizational work
and task performance, such as control structures. Objects in the strategy space, just as
in the task space, are meta-tasks defined in the task space and other associated entities.
Operators in the strategy space are super-meta-actions 4 , which manipulate meta-tasks in
order to determine their control structures.
In this representation, when an operator is applied to a state 5 of its space, it creates a new
state. The application of an operator is a step in the application of a task which consists
of a set of ordered operators. In terms of the three spaces, task execution is a reasoning
process of applying an operational specification (an object in the task space), which is a task in the domain space, to (the state of) the domain space to infer a new state through continuous application of operators according to specified plans. Meta-reasoning is a process of applying a strategic specification (an object in the strategy space) to a state of the task space to infer a new state through continuous application of operators according
4 Super as in super-class, class, sub-class hierarchies.
5 A state refers to a snapshot of interrelated object-attribute values in the web. Thus a new state represents
updates of object-attribute values or relations in the current state.
to the strategic specification. Two types of task performance skill are modeled: individual
task performance and collective task performance.
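As an illustration of this representation, the following minimal Python sketch treats a space as a dictionary of named operators and a task as an ordered list of operator names; the operator names and state fields are invented for illustration only:

def apply_task(task, operators, state):
    # A task is an ordered list of operator names; applying each operator to the
    # current state yields a new state, and the final state is returned.
    for op_name in task:
        state = operators[op_name](state)
    return state

# Operators of a (toy) domain space; each maps a state to a new state.
domain_ops = {
    "decompose-system": lambda s: {**s, "modules": ["ui", "db"]},
    "document-design":  lambda s: {**s, "design-doc": "written"},
}
design_task = ["decompose-system", "document-design"]   # an object in the task space
print(apply_task(design_task, domain_ops, {"modules": []}))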
Individual task performance models the agent's ability to perform tasks individually. It
is conceptualized as a combination of reasoning and meta-reasoning processes in all three
spaces. When a problem is presented, the agent first chooses a strategy to deal with it.
Meta-reasoning is then performed to create a meta-task for the problem, which in turn can
be used to produce a copying action on a problem-solving method, i.e. a task in the task
space. Next, the task is performed in the domain space to produce a resolution. When a
solution or a task for a problem is known and available, reasoning is the only process invoked
to create the solution.
Collective task performance skill refers to an agent's ability to work with other agents
through interactions to get things done jointly. This collective intelligence, based on individual
task performance, supports three basic kinds of interaction: communication, synchronization
and articulation. Communication among agents is a way to exchange information.
In communication, agents exchange their knowledge about the web of resources and situations
by sending and receiving messages. Through message exchange, they can transfer their
manipulation power. They can also integrate individual efforts by combining exchanged
individual products together. Synchronization among agents arranges schedules for a group of agents to come together in order to perform collective tasks. For a collective task to be executed, all its performers have to be present. On the other hand, these agents are
normally executing their own individual tasks while some of them may initialize collective
tasks at any moment. Synchronization arranges schedules for collective tasks and is responsible
for the follow-up actions when this fails. Articulation, finally, handles unexpected events
which stop normal task performance. Articulation is the way to amplify individual skill and
intelligence in a workplace [BS87, GS86, Hew86, MS89, Str88] and is discussed in [MS89].
5 Behavioral Simulation
With the specification of a software development process model and a set of agents, behavioral
simulation is defined as the agents' symbolic execution of task specification using available
resources over a period of time. The trajectory of this symbolic execution is recorded as
the predicted development history of the process and the behavior of the agents is exhibited
through their simulated task performance.
The behavioral simulation generally begins with a set of agents with their behavioral
specification defined according to their current task performance knowledge and skill. These
agents are given a set of tasks as their assignment. A set of resources is also provided along with the tasks.

Figure 4: The Behavioral Simulation of Agents. (States Time N, Time N+1, ..., Time N+M are linked by actions A1, A2, ..., AN, ..., AN+M; circles denote resources used by the actions.)

Some of these resources will be consumed during the task performance.
Others will only be used and returned (i.e. reusable resources). All these objects are specified
within a single state, the initial instance of the process model, as the starting position. A state is used to model the trajectory at a particular instant in time. Each state is created by actions, and actions are linked as tasks. Figure 4 suggests a description of this behavioral simulation. In the picture, Time N and Time N+1 are states and lines represent actions. Actions
use some resources which are represented as circles. Overlapped circles indicate resource
requirement conflicts, which can be resolved either through synchronization or articulation
[MS89].
There are several requirements to be observed during the behavioral simulation:
1. The task assignments are stable, if not otherwise explicitly changed. This is to say
that all the agents must finish their assigned tasks before the simulation is done.
2. At any time instance, an agent cannot perform two actions simultaneously. But, an
agent may perform several tasks concurrently during a period.
3. At any time instance, an agent cannot work on more tasks than its available supply of resources allows.
4. The preconditions associated with an action must be satisfied before it can be
executed. These preconditions include resource requirements, partial ordering of exe-
cution, and execution authorization.
;; Run the current action. The rule selects the agent, its agenda, and
;; its current action as the conditions. It then starts the execution.
(p Run-action :context t
   (AGENT
      -instance !? ()
      -schema-name !agent?
      -controller-goal Controller-action-checked
      -agent-has-agenda !agenda?)
   (AGENDA
      -instance !? ()
      -schema-name !agenda?
      -current-slot !c-slot?
      -time-slot-allocation !slots?)
   (ACTION
      -instance !? ()
      -schema-name !action?
      -schema-name (select-current-action !? !slots? !c-slot?))
   -->
   (format t "Start to execute action ~A~&" $!action?)
   (new-value $!agent? 'controller-parameter1 (current-p-l $!slots? $!c-slot?))
   (new-value $!agent? 'controller-goal 'run-action))

Figure 5: An SPSL rule for an individual-agent action
The behavioral simulation starts from the initial state and simulates the agents' activities
performing their assigned tasks. Simulation of task performance is accomplished by
Figure 6: Types of Task Performance by Agents. (A table classifying task performance by the number of agents, tasks, and actions involved, the agent-task and agent-action relationships (1 - 1, 1 - N, M - 1, M - N), and the communication required: scheduling for one agent, communication among the agents, synchronous combination of results, synchronization of performance, or all of the above.)
execution of the task specification in top-down fashion. The higher levels of a task hierarchy
provide information about work assignment and resource allocation. The lower levels
of the task hierarchy provide links to procedural definitions which are executable in order
to create new states. At each simulation step, symbolic execution is done by first propagating
necessary information from high levels to lower ones, checking preconditions of an
action, and then invoking the associated procedural definition to propagate changes. All the
changes from different agents are then combined to form a new state of the software process
model. Two things condition the new state and the task execution. First, each agent selects the action to execute by its own choice; the selected action is among its current work assignments, and the choice can be influenced but not determined by outsiders. Second, there may be conflicts in resource requirements and in the resulting changes, which need to be resolved
through articulation [MS89]. Part of the symbolic execution of an action is shown in Figure
5 where the rule gets an agent and its agenda, finds its current action, and starts to execute
the action symbolically.
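A highly simplified sketch of one such simulation step is given below (in Python; the state fields and the precondition and effect functions are illustrative assumptions, and accommodation and articulation are omitted):

def simulation_step(state, agents, current_actions):
    """state: dict of attribute values; agents: list of agent names;
       current_actions: agent -> (precondition, effect) for its chosen action."""
    changes = []
    for agent in agents:
        precondition, effect = current_actions[agent]
        if precondition(state):            # resource requirements, ordering, authorization
            changes.append(effect(state))  # invoke the action's procedural definition
        # otherwise the agent must accommodate (wait or switch), which is omitted here
    new_state = dict(state)
    for change in changes:                 # combine all agents' changes into the new state
        new_state.update(change)
    return new_state

state = {"Valid-document-spec": False}
current_actions = {
    "Mary": (lambda s: True, lambda s: {"architecture-documented": True}),
    "Joe":  (lambda s: s["Valid-document-spec"], lambda s: {"detail-design-started": True}),
}
print(simulation_step(state, ["Mary", "Joe"], current_actions))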
The final result of the behavioral simulation is a trajectory over a period of time, in
which every action of the tasks is performed once by a subset of the agents at a time
instant. These trajectories can be made persistent and evolvable. Such trajectories can be
subsequently studied according to different criteria in order to analyze information about
task performance, such as the agents' behavior during execution, their productivity, resource
utilization, alternative "what-if" scenarios, and other interesting properties. However, the
task execution is not guaranteed to finish successfully. Problems may arise due to unexpected
events that need to be articulated. Articulation of task performance also affects the above
criteria and their consequences can be tracked as well [MS89].
Based on the Articulator meta-model, many types of agents and tasks can be assigned
to a software development process. This means that behavioral simulation can be divided
into several types according to the number of involved agents, tasks, and the communication
patterns between the agents, as shown in Figure 6.
Let us consider our earlier example of Team-A (Figure 3) again. The example is input
into the Articulator and then simulated by the behavior simulator. In the example, there
are two agents performing task "Design FOO". The task specification and work assignment
are shown in Figure 7. The resource requirements of Design-FOO are also given, but we can
only show a single requirement here as in Figure 8. In addition, Mary has another task, and
thus sends a message to Peter, another member in Company F, for assistance.
Due to space limits, we present only a summary report of the simulation in Figure 9,
which is obtained from the trajectory history and provides condensed information about the
simulation. Then we discuss the types of behavior demonstrated in this example.
This behavioral simulation involves multiple agents performing a single task that requires
the combination of each agent's task results. Initially, it has three agents and two tasks.
However, our example focuses on one which is performed by two agents in combination.
During the performance, a task-action ordering emerges. When either of two actions can be
executed at the same time, an agent selects one randomly.
The agents communicate twice in the simulation. At time 3, Mary sends a work assignment
to Joe, who reads the message at time 5 and begins to perform the task at time 7.
At time 9, Mary sends a file to Peter, who reads it at time 12.
Lack of resources occurs twice, and both occurrences are resolved through accommodation. At time 7,
when Joe tries to start his task execution, the Valid-document-spec, a document created by
action Validating-architecture-design, does not exist at the moment. Since Joe chooses
a waiting strategy to accommodate (Figure 3), he simply waits for the resource. Fortunately,
the resource becomes available at time 8, so he continues. At time 8, Mary encounters the
same problem. She prefers to switch to another task as her accommodation strategy, so she
selects another task to perform, Send-file-to-peter, and resumes the original task at time 10, when the resource is available.
The task execution completes in eleven time steps by the two agents. In total, there are 22 agent time steps, of which 12 are used to perform the task, 1 for waiting, and 1 for switching. The remaining time steps ("slack time") could be utilized for other task performance if needed.
6 The Query Mechanism
The query mechanism accepts user queries to retrieve information from the Articulator
meta-model, the software process models and their instances.
The query functions are built in bottom-up fashion. A set of atomic functions, implemented
as forward-chaining rules, are used to get very basic information about attributes
Figure 7: Task Decomposition of Design-FOO. (Design-FOO has the components Architectural Design and Detailed Design, linked by task-has-component and task-has-successor relations. The actions are: A1: Establish system structure; A2: Decompose system; A3: Establish subsystem interface; A4: Inform Joe about the task; A5: Document architecture design; A6: Validate architecture design; A7: Design module structure; A8: Develop data representation; A9: Detail subsystem interface; A10: Design system interface. The unshaded actions (indicated by circles) are assigned to Mary and the shaded actions are assigned to Joe.)
;; System-data-structure-spec is a resource manipulated in Design-FOO.
;; It is created by Developing-data-representation, and used in
;; Defining-algorithm and Designing-system-interface.
(define-object System-data-structure-spec
  (is-a DOCUMENT))

Figure 8: A Resource Specification by SPSL
and relations. They are atomic because they only retrieve attributes and relations. Such a
set of rules is used, for example, to find whether a designated relation exists between two objects. Higher level
functions involve the knowledge representation and provide information about the represen-
tation. Users are encouraged to develop their own queries using the facilities we provide.
The main concern in the query mechanism is to provide a set of functionally-complete basic
facilities.
There are four types of queries supported in the query mechanism: meta-knowledge
queries, information queries, history queries and what-if queries.
A meta-knowledge query provides the definition of an entity and its related terminology
in the Articulator. It is based on the information given when an object is defined, which
is stored as meta knowledge. An example definition of meta knowledge appears in Figure
10. This function is intended to help new users to understand the Articulator meta-model.
Users can also provide their own meta-knowledge for models they defined, in order to help
guide other users' interactions with a given model. A meta-knowledge query is in the form of q-WHAT. For example, Figure 10 also lists a meta-knowledge query about Company-F; the question and its answer are shown in the figure.
An information query provides information about either a state of a software process
model or the model itself. It is generally concerned with resource values and configurations.
Typical questions answered include "Is Peter a member of Team-A at time 1?", "What are
the relations linking Peter and Mary now?", etc. An information query has several basic
functions for this kind of deductive retrieval. For example, q-is checks the existence of
an entity in the status, and q-relation finds the relations which link two given entities.
Figure 9: Summary of Simulation Result. (A table listing, for each time step, the action performed by each agent; for example, at time 6 the actions are DOCUMENT-ARCHITECTURE-DESIGN, IDLE, IDLE, and at time 9 they are SEND-FILE-TO-PETER, DEVELOP-DATA-STRUCTURE, IDLE.)
Through the use of an information query, every value and every relation within a state can
be retrieved without difficulty.
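The following Python sketch suggests how such atomic queries could be realized over a store of attributed objects and relations; the relation names follow the earlier Team-A example, but the data structures are illustrative assumptions rather than the Articulator's implementation:

RELATIONS = {   # (entity, relation) -> list of target entities (toy data)
    ("Mary", "individual-in-collective-agent"): ["Company-F"],
    ("Company-F", "collective-agent-has-member"): ["Mary", "Joe", "Peter"],
}
ENTITIES = {"Mary", "Joe", "Peter", "Company-F"}

def q_is(entity):
    # Atomic query: does the entity exist in the current state?
    return entity in ENTITIES

def q_relation(a, b, depth=2):
    # Backward-chaining style search for a chain of relations linking a to b.
    if depth == 0:
        return None
    for (src, rel), targets in RELATIONS.items():
        if src != a:
            continue
        if b in targets:
            return [(a, rel, b)]
        for t in targets:
            rest = q_relation(t, b, depth - 1)
            if rest:
                return [(a, rel, t)] + rest
    return None

print(q_is("Mary"))               # True
print(q_relation("Mary", "Joe"))  # path through Company-F, as in Figure 11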
More complicated queries have been implemented as examples of query building using
these basic functions. For example, q-follower and q-predecessor are used to find follower
actions and predecessor actions of a given action along the relation
task-force-has-follower. These two queries are useful for users to check the configuration
of the tasks they perform. They are implemented by the q-relation query using
task-force-has-follower as the given relation. Another example is to get all component
modules of a given software project, implemented as q-soft-configuration. Many such
queries with specific requirements can be built in the same manner. In Figure 11, we provide
some information queries about our simple model of Company F.
A history query traverses a trajectory of states created in a simulation, collects a record of
changes on specified entities, then summarizes them to give clear and condensed information
about these changes. Typical information provided in history queries includes the activities
performed by the agents in the simulation period and the resources consumed or produced
by agents or teams. Other specific queries may ask about the consequence of a particular
action, a value change, or an inserted relation.
Implementation of the history query is based on information queries and the instantiation
manager. The former provides facilities to retrieve information within a state, while the latter
gives the capability to traverse within the state trajectory. Also a history query has a facility
to sum up gathered information. The simulation result presented in Figure 9 comes from a
;; Definition of the meta-knowledge schema in SPSL, with definitions for explanation.
(define-object Meta-knowledge
  (is-a SCHEMA)
  (methods-and-procedures))

;; Definition of meta-knowledge for RESOURCE
(define-object Meta-resource
  (is-a META-KNOWLEDGE)
  (definition "RESOURCE is the basic entity in KB. It provides basic
               descriptions about entities in the meta-model. Every object
               must be a class of RESOURCE or an instance of RESOURCE.")
  (methods-and-procedures "A resource can be created, used, and consumed")
  (reason-or-explanation NA)
  (literature-available "mi"))

(attach-meta-schema 'RESOURCE 'Meta-resource)

;; Meta-knowledge query - the WHAT question (part of the backward-chaining rule is shown):
(:RELATED ?ENTITY IS-A RESOURCE)
(BIND ?DEFINITION (GET-VALUE (GET-META-SCHEMA ?ENTITY) 'DEFINITION)) !)
(= ?DEFINITION "There is no such an entity in KB") ! fail)

;; Example of use of the WHAT question: what is Company-F?
(q-WHAT Company-F ?DEFINITION)
"Company-F is a software vendor. It develops software on SUN systems.
Currently it has three members: Mary, Joe and Peter."

Figure 10: Meta-knowledge in SPSL and Its Queries
;; IS question: Is Mary in the meta-model?
(q-is Mary)
true

;; RELATION question: How are Mary and Joe related?
(q-RELATION Mary Joe)
Mary individual-in-collective-agent Company-F collective-agent-has-member Joe.
;; It means Mary and Joe are both in Company-F.

;; FOLLOWER question: What are the followers of Develop-data-representation?
(q-follower Develop-data-representation ?what)
(Design-system-interface ...)

Figure 11: Examples of Information Query
history query.
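A minimal sketch of a history query over a recorded trajectory might look like the following (in Python, with invented trajectory data loosely based on the Team-A simulation):

trajectory = [   # toy trajectory: one snapshot per time step
    {"time": 6, "Mary": "DOCUMENT-ARCHITECTURE-DESIGN", "Joe": "IDLE"},
    {"time": 7, "Mary": "VALIDATE-ARCHITECTURE-DESIGN", "Joe": "IDLE"},
    {"time": 9, "Mary": "SEND-FILE-TO-PETER", "Joe": "DEVELOP-DATA-STRUCTURE"},
]

def q_history(trajectory, agent):
    # Traverse the trajectory and collect the agent's non-idle activities.
    return [(s["time"], s[agent]) for s in trajectory if s[agent] != "IDLE"]

print(q_history(trajectory, "Joe"))   # [(9, 'DEVELOP-DATA-STRUCTURE')]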
A what-if query includes a combination of simulation and history queries. It starts from
a given state, or a modification of a state in the middle of a state trajectory, and calls
the behavior simulator to simulate the given updated scenario. When the simulation is done,
a history query is activated to gather the required information. What-if queries are designed to facilitate the testing of hypothetical scenarios and the handling of unexpected events.
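Conceptually, a what-if query composes the behavioral simulator with a history query, as in the following Python sketch; simulate and summarize stand in for the behavioral simulator and a history query, and both are assumptions of this sketch:

def q_what_if(state, modification, simulate, summarize, steps=10):
    # Perturb the given state, re-run the behavioral simulator on the hypothetical
    # scenario, and summarize the resulting trajectory with a history query.
    modified = {**state, **modification}
    new_trajectory = simulate(modified, steps)
    return summarize(new_trajectory)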
7 Conclusion
Within the Articulator project, we make several novel contributions to the study of software engineering processes using a knowledge engineering environment. We create a tractable
open-system model of software processes and resource infrastructures that are articulated by
agents working in development settings. We explore relationships among the components,
such as software processes, development resources and developers, within the model and
their impact on the software development products, processes, and workplace settings under
study. We also provide formalisms to represent task performance skill. We present a meta-model
of software processes which is suitable for describing software process models. All
these contributions are further enhanced through the simulation of the dynamics of software
process models as a basis for querying the state of values in the represented model, the
simulated trajectory and the recorded process history. As the Articulator becomes more
complete, we hope to provide a framework to further assist the interactive empirical study
of large scale software development projects.
--R
Understanding Software Maintenance Work.
Work Structures and Shifts: An Empirical Analysis of Software Specification Teamwork.
Carnegie Group Inc.
A Field Study of the Software Design Process for Large Systems.
On Building Software Process Model Under the Lamppost.
Cooperation Through Communication in a Distributed Problem Solving Network.
Information Management in Software Engineering
An Overview of Meta-level Architecture
Analyzing Due Process in the Workplace.
Offices Are Open Systems.
Software Process Modeling: Principles of Entity Process Models.
The Web of Computing: Computer Technology as Social Organization.
Negotiation: A Collective Problem-Solving Approach
Software Processes are Software Too.
The USC System Factory Project.
Representation of activity knowledge for project management.
MOLGEN Part 2: Planning and Meta-Planning
The Articulation of Project Work: An Organizational Process.
This is IT: A Meta-Model of the Software Process
Software Process Modeling: A Behavioral Approach.
--TR
This is IT: a metamodel of the software process
Understanding software maintenance work
Software processes are software too
On building software process models under the lamppost
A field study of the software design process for large systems
Breakdowns and processes during the early activities of software design by professionals
A methodology for studying software design teams: an investigation of conflict behaviors in the requirements definition phase
Software process modeling: a behavioral approach
A plan-based intelligent assistant that supports the software development
Work structures and shifts
Software process modeling
The integration of computing and routine work
Analyzing due process in the workplace
Offices are open systems
Intelligent Assistance for Software Development and Maintenance
ISHYS
| knowledge based systems;knowledge-based environment;statics;software process behavior simulator;articulator;knowledge base querying mechanism;modelling;prototype computational environment;simulating software engineering processes;design;dynamics;representation schemes;modeling;programming environments;knowledge metamodel;software engineering
627506 | Generalization by Neural Networks. | The authors discuss the requirements of learning for generalization, where the traditional methods based on gradient descent have limited success. A stochastic learning algorithm based on simulated annealing in weight space is presented. The authors verify the convergence properties and feasibility of the algorithm. An implementation of the algorithm and validation experiments are described. | Introduction
Neural networks are being applied to a wide variety of applications, from speech generation [1] to handwriting recognition [2]. The last decade has seen great advances in the design of neural networks for a class of problems called recognition problems, and in the design of learning algorithms [3-5, 5-7]. The learning of weights in a neural network for many recognition problems is no longer a difficult task. However, designing a neural network for a generalization problem is not well understood.
Domains of neural network applications can be classified into two broad categories: recognition and generalization [1, 8]. For both classes, we first train the neural network on a set of input-output pairs (I 1 , O 1 ), (I 2 , O 2 ), ..., (I n , O n ). In recognition problems, the trained network is tested with a previously seen input I j (1 <= j <= n)
corrupted by noise as shown in Fig.1. The trained network is expected to reproduce the output O j corresponding to
I j , in spite of the noise. Shape recognition [9, 10], and handwriting recognition[2] are examples of recognition prob-
lems. On the other hand, in generalization problems, the trained neural network is tested with input I n+1 , which is
distinct from the inputs I 1 , I 2 , ..., I n used for training the network, as shown in Fig. 1. The network is expected to
correctly predict the output O n+1 for the input I n+1 from the model it has learned through training. Typical examples
of generalization problems are Bond Rating[11] and Robotics[12].
Neural networks for generalization problems are important, since there are many applications, of enormous
importance in the real world[13-15] which would benefit from this work. In many of these applications it is difficult
to successfully apply either conventional mathematical techniques (e.g., statistical regression) or standard AI
approaches (e.g., rule based systems). A neural network with generalization ability will be useful for such
domains[11], because it does not require an a priori specification of a functional domain model; rather it attempts to
learn the underlying domain model from the training input-output examples.
Figure 1: Classes of Problems. (Two copies of a trained network: for recognition, the network is tested with the training inputs I 1 , I 2 , ..., I n and reproduces O 1 , O 2 , ..., O n ; for generalization, it is tested with unseen inputs I n+1 , I n+2 and predicts O n+1 , O n+2 .)
The learning algorithm for generalization problems should be different from the learning algorithm for recognition problems. In recognition problems, the network is expected to reproduce one of the previously seen outputs. The network may remember the outputs and inputs by fitting a curve through the (I i , O i ) pairs used for training. To remember the outputs, one often uses large networks with many nodes and weights. However, memorization
of learning samples is not suited for generalization problems, since it can lead to worse performance during
prediction of outputs on unseen inputs. Furthermore, generalization problems allow a small amount of error in the
output predicted by the network and hence the fitted curve need not pass through any (I i , O i ) pair used for training.
Networks addressing generalization problems may instead fit a simple curve (e.g., a low-degree polynomial, or basic analytical functions like log(x), sine(x), tangent(x), etc.) through the input-output pairs rather than fitting a crooked curve. The neural networks used in generalization problems tend to be simpler, with a small number of hidden nodes, layers, and interconnection edges and weights, enabling one to use computationally sophisticated algorithms.
Most of the earlier work in neural networks [4, 9] is related to recognition problems. There has been little
research towards developing neural network models for generalization problems [16, 17].
We present a new learning algorithm, stochastic backpropagation, for generalization problems. We verify the convergence of the algorithm and provide theoretical arguments for the capability of the proposed algorithm to discover optimal weights. We also describe an implementation of the algorithm and our experience with the algorithm
in solving generalization problems.
2. Problem Formulation
Generalization problems for neural networks have been formulated in three different ways: (a) analytical, (b) constructive function learning [25-27], and (c) symbolic semantic network [8]. The analytical formalism focuses on the existence of networks with a capability to generalize. It also provides worst-case time complexity bounds for discovering such networks to solve arbitrary generalization problems, but it does not provide a way to discover the networks. The constructive function learning formalism approaches generalization problems in a complementary fashion. It studies algorithms to discover networks which can solve a class of generalization problems. It aims at discovering a function f mapping the input domain to the output domain from a set of learning examples. The
function to be discovered may be defined over boolean numbers or over real numbers. The inputs and outputs are
assumed to be numbers with no symbolic meaning. The function and network do not represent symbolic meaning
beyond the numeric computation. The third approach of symbolic semantic network associates symbolic meaning to
the network. Generalization occurs by attaching a new node to an appropriate parent node in the network to inherit
the properties of the parent.
This classification of formulations of generalization problems does not include some special cases. For example,
the signal detection problem can be considered as a special case. The task in a signal detection problem is to learn to
recognize a parameter of the function f: I -> O, rather than learn the function itself. Recognizing the frequency of a given sinusoidal function is an example of a signal detection problem. Backpropagation neural networks have been applied successfully to this problem [28].
We focus on the constructive function learning formulation in terms of learning a numeric function. A simple
neural network can be described as a directed graph G = (V, E). The vertex set V has three kinds of nodes: (a) input nodes at leaves, (b) hidden nodes as internal nodes, and (c) output nodes at roots. Each edge e i,j in E is associated with a weight w i,j , as shown in Fig. 2.
The network is used to compute an output from a set of inputs. Each node i computes a function of the weighted sum of its input signals, g(Σ j w i,j x j ), where x j denotes the signal on the j-th incoming edge of node i, and passes the result on as its output. The function g maps from (-∞, ∞) to [-1, 1]. Given any input,
the network would compute an output = f(input). The inverse problem of discovering the function f (i.e. the set of
edge weights) from a given set of input-output pairs is referred to as the learning problem, which can be stated as
follows. Given a set of example input-output pairs {(I 1 , O 1 ), ..., (I n , O n )}, find the weights on each edge e i,j in E of the neural network such that the network maps I j to O j , for j = 1, 2, ..., n, as closely as possible.
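For illustration, a minimal Python sketch of this feed-forward computation for a small two-input network is shown below; the choice of g as the hyperbolic tangent and the particular topology and weights are assumptions made only for the example:

import math

def node_output(weights, inputs):
    # Each node applies g to the weighted sum of its inputs; tanh maps (-inf, inf) to [-1, 1].
    return math.tanh(sum(w * x for w, x in zip(weights, inputs)))

def network_output(i1, i2, w_hidden, w_out):
    # A two-input network with one hidden layer and a single output node.
    h = [node_output(w, [i1, i2]) for w in w_hidden]
    return node_output(w_out, h)

print(network_output(0.5, -0.2,
                     w_hidden=[[0.1, 0.4], [-0.3, 0.2]],
                     w_out=[0.7, -0.5]))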
Let I represent the possibly infinite domain of inputs, and O the possibly infinite range of outputs. There is a k-dimensional feature space, F 1 , ..., F k , describing each of the inputs. Each input I j can be considered a k-tuple in the Cartesian space F 1 × F 2 × ... × F k .
Given a learning sample (S I , S O ) and a per-sample error function E: I × O -> Real, with S I = { I 1 , I 2 , ..., I n } and S O = { O 1 , O 2 , ..., O n }, generalization involves finding the mapping function f: I -> O
Figure 2: Feed-forward neural network. (An example network with input nodes for inputs I1 and I2, hidden nodes, an output node producing O, and labeled edge weights such as w36; the legend identifies the symbols used for nodes, edges, and weights.)
to minimize the error function E over the entire domain I. In particular, f should map each learning sample input I j as closely as possible to O j .
Reducing the error function over the entire domain I by looking at a small subset S I is difficult for arbitrary learning samples S I and an arbitrary input domain I. The generalization problem is often simplified by making assumptions about the domain I and the learning sample S I . We assume S I represents the entire domain I adequately, so that the value of E over I can be estimated from the value of E over S I . The function f is assumed to be smooth, continuous, and well behaved.
The domain and range of function f can be boolean sets or the set of real numbers. Usually more than one
function can fit the given set of learning samples of input-output pairs. It makes the generalization problem harder.
For example, learning a specific boolean function from a subset of domain is difficult [25] since several boolean
functions over the domain can fit the learning samples. There is little consensus on a criterion to prefer one of the
candidate boolean functions over the rest to break the tie. We restrict our attention to the functions over the set of
real numbers. We draw upon the notion of simplicity of functions over real numbers to choose one function among
the set of possible functions which fit the learning samples. Simplicity is intuitively defined in terms of the number of maxima and minima of the function; for polynomial functions, simplicity reduces to the notion of the degree of the polynomial.
3. Stochastic Backpropagation
The general idea behind our algorithm is to use simulated annealing in the weight space. The weight space is
defined by a collection of configurations W i , each of which is an assignment of values to the connection weights in the neural network. The simulated annealing procedure searches the weight space for the configuration W opt that minimizes the error-of-fit function E(W i ). The search procedure is based on the Monte Carlo method [29]. Given
the current configuration W i of the network, characterized by the values of its weights, a small, randomly generated,
perturbation is applied by a small change in a randomly chosen weight. If the difference in error DE between the
current configuration W i and the slightly perturbed one is negative, i.e. if the perturbation results in a lower error of
fit, then the process is continued with the new state. If DE >= 0, then the probability of acceptance of the new configuration is given by exp (-DE/k B T). These occasional transitions to higher-error configurations help the search process get out of local minima. This acceptance rule for new configurations is referred to as the Metropolis criterion. Following this criterion, the probability distribution of the configurations approaches the Boltzmann distribution given by Eq. A.1:

P{ W = W i } = exp (-E(W i )/k B T) / Z(T)     (A.1)

where Z(T) is a normalization factor, known as the partition function, depending on the temperature T and the Boltzmann constant k B . The factor exp (-E/k B T) is known as the Boltzmann factor.
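A minimal Python sketch of the Metropolis acceptance rule is given below; for simplicity the Boltzmann constant k B is absorbed into the temperature, which is an assumption of the sketch:

import math, random

def accept(delta_e, temperature):
    # Metropolis criterion: always accept improvements; accept a worse configuration
    # with probability exp(-delta_e / temperature).
    if delta_e < 0:
        return True
    return random.random() < math.exp(-delta_e / temperature)

print(accept(0.3, temperature=1.0))   # True with probability exp(-0.3)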
T denotes a control parameter, which is called temperature for historical reasons. Starting off at a high value, the temperature is decreased slowly during the execution of the algorithm. As the temperature decreases, the Boltzmann distribution concentrates on the configurations with lower error and, finally, when the temperature approaches zero, only the minimum-error configurations have a non-zero probability of occurring. In this way we obtain the globally optimal weights for the network, minimizing the error of fit with the training examples, provided the maximum temperature is sufficiently high and the cooling is carried out sufficiently slowly.
Algorithm Description: One has to define configurations, a cost function, and a generation mechanism (or, equivalently, a neighborhood structure) before describing the algorithm. We assume that each weight takes discrete values from the set y = {-sd, ..., -d, 0, d, 2d, ..., sd}. The restriction of the weights to discrete values does not limit # the learning ability of neural networks for most generalization problems. The configurations can now be defined as n-tuples of weights, where n is the number of weights in the network. The configuration space is constructed by allowing each weight to take values from y. The cost function is defined by the error between the desired outputs and the network outputs for the learning examples, as shown below:

E(W) = (1/2) Σ c Σ j (y j,c - d j,c )^2

Here y j,c refers to the j-th network output for the input from the c-th training example, and d j,c refers to the j-th (desired) output from the c-th training example. The indices j and c refer to the different outputs of the network and the various training examples, respectively.
To generate the neighboring configurations, we change one randomly chosen weight element in the configuration by d. We use a uniform probability distribution to choose the weight to be changed, and thus the probability of generating any neighboring configuration from the current configuration is uniformly distributed over the neighbors.
# One can always choose a large value for s and scale the input-output data values to a small range to achieve better accuracy.
Procedure STOCHASTIC_BACKPROP;
begin
  INITIALIZE(configuration i, control parameter c 0 , outer_iteration_count M := 0);
  while (not stop_criterion) do              {system is not "frozen"}
    begin
      repeat
        accept := false;
        BACKPROP(configuration i);           {compute the error derivatives ∂E/∂w lm }
        PERTURB(configuration i, configuration j);
        DE := estimated change E(j) - E(i);  {where w lm was changed by PERTURB}
        if DE < 0 then accept := true
        else if exp (-DE/c M ) > random(0,1) then accept := true;
        if accept# then UPDATE(configuration j);
      until equilibrium_is_approached_sufficiently_closely;
      c M+1 := f(c M );
      M := M + 1
    end
end;

Fig. 3: Stochastic backpropagation algorithm in pseudo-Pascal
______________________________________________________________________________
A pseudo-Pascal description of the stochastic backpropagation algorithm is shown in Fig. 3. The INITIALIZE routine
assigns default values to all the variables, in particular to current configuration, temperature, and
outer_iteration_count M. The loops correspond to simulated annealing in the weight space. The inner loop
represents simulated annealing at a fixed value of the control parameter T. It executes until the probability distribution
of current configuration being any of the possible configuration, becomes stable. This helps us to achieve boltzman
distribution. The outer loop changes the control parameter slowly to the final value near 0. This corresponds to
slow cooling to achieve configurations with globally minimum error. The steps inside the innermost loop combine
backpropagation with transitions for simulated annealing. We use the BACKPROP, the backpropagation algo-
rithm, [4] as a subroutine to compute the error derivatives E /w lm with respect to various weights. These derivatives
help us to estimate the change in error function, when a particular weight is changed by d. PERTURB produces
a neighboring configuration by changing a randomly chosen weight by d. The change in error function due to
the perturbation is estimated using the error derivatives obtained from the backpropagation algorithm. The new configuration
is accepted unconditionally, iff it has lower error than the current one. Otherwise the new configuration
is accepted with probability Finally the program variables are updated by UPDATE to state of the
newly chosen configuration.
# Note that the acceptance criterion is implemented by drawing a random number from a uniform distribution on (0,1) and comparing it with exp(-DE/T).
The basic algorithm shown in Fig. 3 can be made more efficient if one changes the function of the BACKPROP procedure. We notice that backpropagation produces the partial derivatives of E with respect to all the weights, whereas we use only one of these derivatives in the subsequent computation. It is better to modify the backpropagation algorithm to compute only the one derivative that is required.
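A sketch of one such transition follows: a single partial derivative supplied by the (modified) backpropagation routine is used to estimate the change in error when one weight is moved by ±d, and the move is accepted by the Metropolis rule. The helper backprop_derivative is an assumed stand-in, not a routine from the original package.

    import math
    import random

    def stochastic_step(weights, backprop_derivative, delta, temperature):
        """One PERTURB/accept transition of stochastic backpropagation (sketch)."""
        lm = random.randrange(len(weights))           # randomly chosen weight index
        step = random.choice((-delta, delta))
        # Estimate DE by a first-order expansion using only the one derivative
        # dE/dw_lm returned by the (modified) backpropagation routine.
        de_estimate = backprop_derivative(weights, lm) * step
        accept = de_estimate <= 0 or math.exp(-de_estimate / temperature) > random.random()
        if accept:
            weights[lm] += step                       # UPDATE: move to the neighboring configuration
        return accept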
Comparison with Existing Algorithms: Some of the existing learning algorithms, e.g. those for Hopfield networks [30, 31], are based on memorizing the learning examples accurately. These cannot be used for prediction in generalization problems. Two of the more flexible learning methods are backpropagation [9, 32] and Boltzmann machine learning [33]. Both are iterative algorithms based on gradient descent on the error surface.
In backpropagation, the error of fit for a given set of weights is defined by Eq. B.1, where y_{j,c} is the actual state of unit j in input-output training example c and d_{j,c} is the desired state. The backpropagation algorithm computes the gradient of the error with respect to each weight. A hidden unit j in layer J affects the error via its effects on the units k in the next layer K. So the derivative of the error, ∂E/∂y_j, is given by Eq. B.2, where the index c has been suppressed for clarity:
∂E/∂y_j = Σ_k (∂E/∂y_k) (dy_k/dx_k) w_{jk}     (B.2)
The weights are then changed in a direction to reduce the error in the output. One may change the weights simultaneously
to avoid conflicting local weight adjustments.
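For concreteness, the chain rule of Eq. B.2 for one hidden unit can be written as a small sketch; the logistic activation, for which dy_k/dx_k = y_k(1 - y_k), is an assumption carried over from the standard backpropagation formulation rather than something fixed by this paper.

    def hidden_error_derivative(dE_dy_K, y_K, w_jK):
        """Chain rule for a hidden unit j (Eq. B.2, index c suppressed):
        dE/dy_j = sum_k dE/dy_k * dy_k/dx_k * w_jk, assuming logistic units
        so that dy_k/dx_k = y_k * (1 - y_k)."""
        return sum(dE_dy_k * y_k * (1.0 - y_k) * w_jk
                   for dE_dy_k, y_k, w_jk in zip(dE_dy_K, y_K, w_jK))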
The Boltzmann machine is stochastic in nature, and the aim of learning the weights is to achieve a probability distribution of the input-output mapping. Boltzmann machine learning [5] is based on a series of simulated annealing runs on the state space of the network. The state space of the network can be characterized by defining the state of the network: the state of a node is described by its output, and the state of the network is an n-tuple vector with one component for each node. The learning algorithm aims to achieve a certain probability distribution over the states, and performs a gradient descent to minimize the error in the probability distribution. Our use of simulated annealing in the weight space is quite different from the Boltzmann machine [33], where simulated annealing is carried out in the state space of the network.
These learning methods work reliably only when the cost/error surface is convex. Since these algorithms are based on a simple heuristic of gradient descent, they can get stuck in local minima. Furthermore, they can get stuck at plateaus, where the gradient is very small, as shown in Fig. 4. These algorithms cannot guarantee the optimality of the discovered weights. The learning problem is NP-complete in general [34] and remains NP-complete under several restrictions. It is therefore not surprising that heuristic learning algorithms like backpropagation do not always work, and cannot be trusted to find the globally optimal weights for generalization problems.
There are two ways of approaching NP-complete problems: (a) approximation methods [35], and (b) stochastic enumeration methods such as simulated annealing [36]. Since it is difficult to formulate a general approximation method for neural network learning, we use the simulated annealing method. We extend the backpropagation algorithm with stochastic weight changes for learning the weights. The algorithm has provable convergence properties and can achieve globally optimal weights for a simple network for generalization. The implementation and performance studies show that the algorithm performs well for many generalization problems.
Figure 4: Two Bad Cases for Gradient Descent (an error surface with a local minimum and a plateau, with the global minima marked)
4. Modeling and Analysis of Stochastic Backpropagation
Given a neighborhood structure, stochastic backpropagation can be viewed as an algorithm that continuously attempts to transform the current configuration into one of its neighbors. This mechanism can be described mathematically by means of a Markov chain: a sequence of trials, where the outcome of each trial depends only on the outcome of the previous one [37]. In the case of stochastic backpropagation, the trials correspond to transitions, and it is clear that the outcome of a transition depends only on the outcome of the previous one (i.e. the current configuration).
A Markov chain is described by means of a set of conditional probabilities P_{ij}(k-1, k) for each pair of outcomes (i, j); P_{ij}(k-1, k) is the probability that the outcome of the k-th trial is j, given that the outcome of the (k-1)-th trial is i. Let a_i(k) denote the probability of outcome i at the k-th trial; then a_i(k) is obtained by solving the recursion
a_i(k) = Σ_l a_l(k-1) P_{li}(k-1, k),   k = 1, 2, ...,   (C.1)
where the sum is taken over all possible outcomes. Hereinafter, X(k) denotes the outcome of the k-th trial. Hence P_{ij}(k-1, k) = Pr{X(k) = j | X(k-1) = i} and a_i(k) = Pr{X(k) = i}.
If the conditional probabilities do not depend on k, the corresponding Markov chain is called homogeneous; otherwise it is called inhomogeneous.
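For a homogeneous chain, the recursion C.1 amounts to repeated multiplication of the outcome distribution by the transition matrix, as in the following sketch (numpy is used only for the matrix product; the matrix P and the initial distribution a0 are assumed inputs).

    import numpy as np

    def evolve_distribution(a0, P, k):
        """Apply the recursion a_i(k) = sum_l a_l(k-1) * P_li for a homogeneous
        Markov chain with transition matrix P (rows sum to 1)."""
        a = np.asarray(a0, dtype=float)
        for _ in range(k):
            a = a @ P          # one trial / transition
        return a               # a[i] = Pr{X(k) = i}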
In the case of stochastic backpropagation, the conditional probability P_{ij}(k-1, k) denotes the probability that the k-th transition is a transition from configuration i to configuration j. Thus X(k) is the configuration obtained after k transitions. In view of this, P_{ij}(k-1, k) is called the transition probability and the |R| x |R| matrix P(k-1, k) the transition matrix. Here |R| denotes the size of the configuration space.
The transition probabilities depend on the value of the control parameter T (temperature). Thus if T is kept constant, the corresponding Markov chain is homogeneous and its transition matrix can be defined as
P_{ij}(T) = G_{ij}(T) A_{ij}(T)                           for all j ≠ i,
P_{ii}(T) = 1 - Σ_{l=1, l≠i}^{|R|} G_{il}(T) A_{il}(T),   (C.4)
i.e., each transition probability is defined as the product of the following two conditional probabilities: the generation probability G_{ij}(T) of generating configuration j from configuration i, and the acceptance probability A_{ij}(T) of accepting configuration j, once it has been generated from configuration i. The corresponding matrices G(T) and A(T) are called the generation and acceptance matrices, respectively. As a result of the definition in Eq. C.4, P(T) is a stochastic matrix, i.e. Σ_j P_{ij}(T) = 1 for all i. G(T) is represented by a uniform distribution over the neighborhoods, since transitions are implemented by choosing at random a neighboring configuration j from the current configuration i. A(T) is computed from the Metropolis criterion, i.e. A_{ij}(T) = min(1, exp(-DE/k_B T)), where DE = E(j) - E(i).
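A sketch of how P(T) of Eq. C.4 can be assembled from a uniform generation matrix over an assumed neighborhood structure and the Metropolis acceptance probabilities (k_B is absorbed into T here; the energies and neighbor lists are illustrative inputs, not data from the paper).

    import math
    import numpy as np

    def transition_matrix(energies, neighbors, T):
        """Build P(T) with P_ij = G_ij(T) * A_ij(T) for j != i and
        P_ii = 1 - sum_{l != i} G_il(T) * A_il(T)   (Eq. C.4).
        G is uniform over the neighborhood of i; A is the Metropolis criterion."""
        n = len(energies)
        P = np.zeros((n, n))
        for i in range(n):
            nbrs = neighbors[i]
            g = 1.0 / len(nbrs)                       # uniform generation probability
            for j in nbrs:
                a = min(1.0, math.exp(-(energies[j] - energies[i]) / T))
                P[i, j] = g * a
            P[i, i] = 1.0 - P[i].sum()                # remaining probability mass stays at i
        return P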
The stochastic backpropagation algorithm attains a global minimum if, after a (possibly large) number of transitions, say K, the following relation holds:
Pr{X(K) ∈ R_opt} = 1,   (C.5)
where R_opt is the set of globally optimal configurations with minimum error of fit. It can be shown that Eq. C.5 holds asymptotically, i.e.
lim_{k→∞} Pr{X(k) ∈ R_opt} = 1,
provided that:
1. certain conditions on the matrices A(T_l) and G(T_l) are satisfied;
2. the control parameter satisfies lim_{l→∞} T_l = 0;
3. under certain additional conditions on the matrix A(T_k), the rate of convergence of the sequence {T_k} to 0 is not faster than O(|log k|^{-1}).
The proof is carried out in three steps: (a) showing the existence of a stationary distribution for homogeneous Markov chains, (b) showing the convergence of the inner loop of stochastic backpropagation to the stationary distribution, and (c) showing that the stationary distribution for the final "frozen" system has non-zero probabilities on the optimal configurations only. The proof of the last step is contingent on the cooling rate and provides us with a bound on that rate.
4.1. Existence of Stationary Distribution
The following theorem establishes the existence of the stationary distribution.
Theorem 1 (Feller [37]): The stationary distribution q of a finite homogeneous Markov chain exists if the Markov chain is irreducible and aperiodic. Furthermore, the vector q is uniquely determined by the following equation: q_j = Σ_i q_i P_{ij} for all j.
We note that q is the left eigenvector of the matrix P with eigenvalue 1.
A Markov chain is irreducible if and only if for all pairs of configurations (i, j) there is a positive probability of reaching j from i in a finite number of transitions, i.e. for each pair (i, j) there is an n >= 1 such that (P^n)_{ij} > 0.
A Markov chain is aperiodic if and only if, for all configurations i ∈ R, the greatest common divisor of all integers n >= 1 such that (P^n)_{ii} > 0 is equal to 1.
In the case of stochastic backpropagation, the matrix P is defined by Eq. C.4. Since the definition of A guarantees that A_{ij}(T) > 0 for all i, j and T, it is sufficient for irreducibility to check that the Markov chain induced by G(T) is irreducible [38], i.e. that for each pair of configurations (i, j) there is a finite sequence of configurations l_0 = i, l_1, ..., l_p = j such that
G_{l_k l_{k+1}}(T) > 0,   k = 0, 1, ..., p-1.   (C.10)
To establish aperiodicity, one uses the fact that an irreducible Markov chain is aperiodic if the following condition is satisfied [38]: there is a configuration i_T ∈ R with P_{i_T i_T}(T) > 0.   (C.11)
Thus for aperiodicity it is sufficient to assume that there are configurations i_T, j_T ∈ R such that G_{i_T j_T}(T) > 0 and A_{i_T j_T}(T) < 1.   (C.12)
Using the inequality of Eq. C.12 and the facts that A_{il}(T) <= 1 and Σ_l G_{il}(T) = 1 for all i, we can prove the following:
P_{i_T i_T}(T) = 1 - Σ_{l=1, l≠i_T}^{|R|} A_{i_T l}(T) G_{i_T l}(T)
             = 1 - Σ_{l=1, l≠i_T, j_T}^{|R|} A_{i_T l}(T) G_{i_T l}(T) - A_{i_T j_T}(T) G_{i_T j_T}(T)
             > 1 - Σ_{l=1, l≠i_T, j_T}^{|R|} G_{i_T l}(T) - G_{i_T j_T}(T)
             = 1 - Σ_{l=1, l≠i_T}^{|R|} G_{i_T l}(T)
             >= 1 - Σ_{l=1}^{|R|} G_{i_T l}(T) = 0.   (C.13)
Thus P_{i_T i_T}(T) = 1 - Σ_{l=1, l≠i_T}^{|R|} A_{i_T l}(T) G_{i_T l}(T) > 0,   (C.14)
and thus Eq. C.11 holds for i = i_T.
Summarizing, we have the following result: the homogeneous Markov chain with conditional probabilities given by Eq. C.4 has a stationary distribution if the matrices A(T) and G(T) satisfy Eqs. C.10 and C.12, respectively. Note that in stochastic backpropagation the acceptance probabilities are defined by the Metropolis criterion, A_{ij}(T) = min(1, exp(-(E(j) - E(i))/T)), and hence Eq. C.12 is always satisfied by choosing, for all T > 0, i_T ∈ R_opt and j_T ∉ R_opt.
4.2. Convergence of the Stationary Distribution
We now impose further conditions on the matrices A(T) and G(T) to ensure convergence of q(T) to the distribution p given by Eq. C.5.
Theorem 2 [38]: If the acceptance probability A_{i_0,i}(T) can be written as a two-argument function ψ(E(i) - E_opt, T) of the cost difference and the temperature (for an arbitrary configuration i_0 ∈ R_opt), and the generation matrix G does not depend on T, then the stationary distribution q(T) is given by
q_i(T) = A_{i_0,i}(T) / Σ_{j∈R} A_{i_0,j}(T),
provided the matrices A(T) and G satisfy a number of additional conditions on the generation and acceptance probabilities. The proof of this theorem is discussed elsewhere [39].
It is implicitly assumed that the acceptance probabilities depend only on the cost values of the configurations and not on the configurations themselves. Hence, A_{i_0,i}(T) does not depend on the particular choice of i_0, since E(i_0) = E_opt for every i_0 ∈ R_opt. To ensure that q(T) converges to the distribution p of Eq. C.5, the following condition is sufficient [38]: lim_{T→0} A_{i_0,i}(T) = 1 if i ∈ R_opt, and 0 otherwise.
Thus the conditions (C.19)-(C.23) guarantee the convergence. It can be easily checked that the matrices G(T) and A(T) for stochastic backpropagation meet all these conditions.
4.3. Cooling rate
Under certain conditions on the matrices A(T) and G(T), the stochastic backpropagation algorithm converges to a global minimum with probability 1 if, for each value T_l of the control parameter (l = 0, 1, 2, ...), the corresponding Markov chain is of infinite length and if T_l eventually converges to 0 as l → ∞, i.e. the validity of the following equation is shown:
lim_{l→∞} q_i(T_l) = 1/|R_opt| if i ∈ R_opt, and 0 otherwise.
However, the cooling rate, i.e. the constraint on the sequence T_l of the control parameter (l = 0, 1, 2, ...), has to satisfy certain properties to assure convergence to the globally optimal configurations. In particular, if T_k is of the form
T_k = Γ / log(k + 1),
then one can guarantee+ the convergence to the globally optimal configurations [40].
4.4. Convergence Results
We have listed the conditions under which simulated annealing converges to globally optimal configurations. Our formulation of stochastic backpropagation uses acceptance probabilities, generation probabilities and a cooling schedule similar to those of the simulated annealing algorithm [38]. That is, we use the Metropolis criterion as the acceptance criterion. We have a configuration-generating mechanism with a uniform distribution over neighbors. The generation mechanism between two neighboring configurations is symmetric. One can generate any arbitrary configuration from a given configuration in a finite number of steps. Thus we satisfy the conditions of Theorems 1 and 2 and are guaranteed convergence to the global minima. We follow a cooling schedule of T_n = Γ / log n to satisfy the conditions on the cooling rate. This guarantees convergence to the global minima provided Γ is greater than the depth of any local minimum.
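The two schedules that appear in the analysis and in the prototype can be sketched as follows; the shift inside the logarithm and the parameter names are choices made for this illustration, not values from the paper.

    import math

    def logarithmic_schedule(gamma, k):
        """Theoretical schedule T_k = gamma / log(k); convergence to a global minimum
        is guaranteed when gamma is at least the depth of the deepest local minimum
        of the error surface (index shifted so the denominator is positive)."""
        return gamma / math.log(k + 2)   # k = 0, 1, 2, ...

    def exponential_schedule(t0, alpha, k):
        """Simpler, heuristic schedule T_{k+1} = alpha * T_k (0 < alpha < 1),
        of the kind used in the prototype for simplicity and efficiency."""
        return t0 * alpha ** k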
5. Implementation
We have carried out a complete implementation of stochastic backpropagation and conducted validation studies. We implemented the algorithm on a UNIX platform on a sequential machine (a SUN 3/60). Since generalization often requires large amounts of computation, we plan to reimplement the algorithm on a vector processor (e.g. a Cray X-MP) for speed-up.
The current prototype is based on the source code of a public-domain software package implementing the backpropagation algorithm [41]. We studied the software to reuse pertinent modules in implementing stochastic backpropagation. The implementation comprises 5000 lines of C code. It required approximately seven man-months to design, code and debug. A large fraction of the effort was directed towards reading and understanding the backpropagation software for reuse. The effort was rewarded in reduced time for designing, coding and debugging. The code was verified by the walkthrough method and by extensive usage. The prototype has been used in seminar courses and research.
+ Provided Γ >= D, the maximal depth of any local minimum in the error surface.
The backpropagation package [41] has three modules: a user interface, a learning module and a testing module. The user interface implements the commands by associating them with internal functions via a table. The commands allow users to examine and modify the state of the software, and to choose options that specify the type and speed of computation and displays. The learning module implements the backpropagation algorithm by computing error derivatives and weight adjustments to tune the weights iteratively during training. It repeats the weight adjustments for a fixed number of times or until the total squared error reaches a value set by the user. The learning algorithm provides the option of adjusting the weights after examining each pattern or after examining all the patterns. The testing module computes the outputs of the neural network for given inputs. It also computes the total squared error between the output produced by the network and the desired output specified for the input pattern.
We augmented the user interface module by adding commands to examine and modify the parameters of stochastic backpropagation. The routines to process the commands were installed in the table associating commands with processing routines. A command to enable the user to choose between the alternative learning algorithms was also added.
We implemented stochastic backpropagation in C with a simplified cooling schedule. The cooling schedule is based on exponential cooling, T_{k+1} = α T_k with a fixed α < 1, chosen for simplicity and efficiency. This cooling schedule has been used in many applications [42]. The main routine in the learning module, namely trial(), was modified to adjust the weights by randomly choosing a neighbor and accepting it by the Metropolis criterion. The testing module was not altered for the implementation.
6. Validation
We evaluated the stochastic backpropagation learning algorithm for generalization on two types of functions: (a) monotonic functions, and (b) non-monotonic functions.
The experimental setup consisted of four modules: data set generator, neural network simulator, data collection and data analysis, as shown in Fig. 5. The data set generator module uses four functions: linear, quadratic, logarithmic and trigonometric, as shown in Table 1. The neural network simulator implements the alternative learning algorithms of backpropagation and stochastic backpropagation. It takes the network configuration and the data sets as input. The module simulates the learning algorithm and produces the outputs as well as the weights. The data collection module comprises a set of routines to sample the state of the neural network simulator. It can periodically (say, every 100 epochs of learning) sample the weights, or collect them at the termination of learning. The data analysis module produces graphs and statistics.
The algorithms are monitored during the learning phase as well as during the testing phase. The performance of an algorithm during the learning phase is measured by the total squared error on the learning set of input-output pairs. The performance during the testing phase is measured by the per-pattern error on a new set of input-output pairs, distinct from the learning set. The behavior of the alternative algorithms during learning is shown in Figures 6 and 7. Figure 6 shows the change in total squared error with learning steps for the monotonic functions, i.e. the linear, logarithmic and quadratic functions. Stochastic backpropagation and backpropagation yield comparable total squared error. We tested the trained networks with an independent set of samples. The network trained with stochastic backpropagation yielded 1.7% error per pattern. The network trained with backpropagation yielded 0.9% error per pattern. Both networks predict the outputs for all samples within 5% of the desired output. Figure 7 shows the change in total squared error with epochs of learning for the non-monotonic function, i.e. the trigonometric function.
Fig. 5: Experiment Design for the Performance Comparison Study (the data set generator supplies monotonic and non-monotonic problem data sets; the neural network simulator takes the network configuration and data sets and runs stochastic backpropagation or backpropagation; the data collection module samples periodic intermediate weights, final weights, inputs and outputs per experiment into data files; the analysis module produces plots and statistics)
Table 1: Functions for controlled study
Stochastic backpropagation
yields better total squared error during learning as well as during testing. The network trained with backpropagation yields a per-pattern error of 15%, predicting the output for 14 samples out of 50 within 5% of the desired output. The network trained with stochastic backpropagation yields a per-pattern error of 11%, and predicts the output within 5% of the desired output for more of the 50 samples.
Figure 6: Learning monotonic functions (total squared error vs. learning epochs for the initial network, stochastic backpropagation and backpropagation)
Figure 7: Learning non-monotonic functions (total squared error vs. learning epochs for the initial network, stochastic backpropagation and backpropagation)
7. Conclusions
Stochastic backpropagation provides a feasible learning algorithm for generalization problems. It reduces the error of fit with the training examples, which is critical for generalization problems. Stochastic backpropagation performs well on monotonic functions using simple networks with fewer hidden nodes. It performs better than the backpropagation algorithm on non-monotonic functions. The stochastic backpropagation learning algorithm has a theoretical convergence property: it provides a stochastic guarantee of finding the optimal weights. However, our experiments did not confirm this. One needs to further tune the parameters of the implementation of stochastic backpropagation to get better results.
8. Acknowledgements
We acknowledge useful help from the CS 8199 class of spring 1991, from UROP at the University of Minnesota, and from the backpropagation package [41].
9. References
--R
Handwritten numeral recognition by multi-layered neural network with improved learning algorithm
Learning in
Learning Internal Representations by Back Propagating Errors
A Learning Algorithm for Boltzmann Machines
Linear Function Neurons: Structure and Training
The ART of adaptive pattern recognition by a self-organizing neural network
Brain Style Computation: Learning and Generalization
Learning to Recognize Shapes in a Parallel Network
Learning Translation Invariant Recognition in Massively Parallel Network
The design of intelligent robots as a federation of geometric machines
Neural Computers
Using Artificial Neural Nets for Statistical Discovery: Observations after using Back-Propagation
Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences
Connectionist Expert Systems
Material Handling: A Conservative Domain for Neural Connectivity and Propagation
On the representation of continuous functions of several variables by superposition of continuous functions of one variable and addition
Meaning of Generalization (ch.
Mapping Abilities of Three Layered
Training a 3-node neural net is NP-complete
On Complexity of Loading Shallow Networks
Neural Network Design and the Complexity of Learning
Scaling and Generalization in
A Learning Algorithm for Generalization Problems
Memorization and Generalization (Ch.
Creating Artificial
Equation of State Calculations by Fast Computing Machines
Neural networks and physical systems with emergent collective computing abilities
Neural computation of decisions in optimization problems
Parallel Distributed Processing: Explorations in the Microstructure of Cognition
Constraint Satisfaction Machines That Learn
Complexity of Connectionist Learning with Various Node Functions
Computers and Intractability: A Guide to the Theory of NP-Completeness
Optimization by simulated annealing
An Introduction to Probability Theory and Its Applications
Probabilistic Hill Climbing Algorithms: Properties and Applications
Theory and Applications
Cooling Schedules for Optimal Annealing
Explorations in Parallel Distributed Processing - A Handbook of Models
John Wiley
--TR
Linear function neurons: Structure and training
Connectionist expert systems
The ART of Adaptive Pattern Recognition by a Self-Organizing Neural Network
Cooling schedules for optimal annealing
On the complexity of loading shallow neural networks
Simulated annealing and Boltzmann machines: a stochastic approach to combinatorial optimization and neural computing
Neurocomputer applications
Explorations in parallel distributed processing
Neural network design and the complexity of learning
Training a 3-node neural network is NP-complete
Parallel distributed processing: explorations in the microstructure of cognition, vol. 1
Creating artificial neural networks that generalize
The design of intelligent robots as a federation of geometric machines
Brain style computation
Computers and Intractability
Learning Translation Invariant Recognition in Massively Parallel Networks
Complexity of Connectionist Learning with Various Node Functions
--CTR
V. Kumar , S. Shekhar , M. B. Amin, A Scalable Parallel Formulation of the Backpropagation Algorithm for Hypercubes and Related Architectures, IEEE Transactions on Parallel and Distributed Systems, v.5 n.10, p.1073-1090, October 1994 | simulated annealing;weight space;learning;convergence properties;learning systems;stochastic learning algorithm;gradient descent;neural networks;neural nets;generalization |
627540 | Incremental Recovery in Main Memory Database Systems. | Recovery activities, like checkpointing and restart, in traditional database management systems are performed in a quiescent state where no transactions are active. This approach impairs the performance of online transaction processing systems, especially when a large volatile memory is used. An incremental scheme for performing recovery in main memory database systems (MMDBs), in parallel with transaction execution, is presented. A page-based incremental restart algorithm that enables the resumption of transaction processing as soon as the system is up is proposed. Pages are recovered individually and according to the demands of the post-crash transactions. A method for propagating updates from main memory to the backup database on disk is also provided. The emphasis is on decoupling the I/O activities related to the propagation to disk from the forward transaction execution in memory. The authors also construct a high-level recovery manager based on operation logging on top of the page-based algorithms. The proposed algorithms are motivated by the characteristics of large MMDBs, and exploit the technology of nonvolatile RAM. | Introduction
The task of a recovery manager in a transaction processing system is to ensure that, despite system and trans-action
failures, the consistency of the database is maintained. To perform this task, book-keeping activities are
performed during the normal operation of the system and restoration activities take place following the failure.
Traditionally, the recovery activities are performed in a quiescent state where no transactions are being processed.
For instance, following a crash, transaction processing is resumed only once the database is brought up-to-date
and its consistency is restored by a restart procedure. Essentially, restart processing is accounted as part of the
down-time of the system, since no transactions are processed until it terminates. A similar effect of halting, or
interfering with, transaction processing in order to perform a recovery-related activity is observed in connection
with certain checkpointing techniques. To checkpoint a consistent snapshot of the database, transaction processing
has to halt. The appealing alternative is to perform these activities incrementally and in parallel with
transaction execution.
This fundamental trade-off between recovery activities and forward transaction processing is underlined in a
database system incorporating very large semiconductor memory (in the order of Gbytes). Such Main Memory
Database systems are subsequently referred to as MMDBs (see [Bit86] and the references there for an overview
of different aspects of MMDBs). The potential for substantial performance improvement in an MMDB is promis-
ing, since I/O activity is kept to a minimum. On the other hand, because of the volatility of main memory, the
issue of failure recovery becomes more complex in this setting than in traditional, disk-resident database systems.
Moreover, since recovery processing is the only component in a MMDB that must deal with I/O, this component
must be designed with care so that it would not impede the overall performance.
Another advancement in semiconductor memory technology is that of non-volatile RAM, which is referred
to, hereafter, as stable memory. An example of stable memory technology is battery-backup CMOS memories
that are widely available [CKKS89]. In case of a power failure, the contents of this memory are not lost. Stable
memories are available in sizes on the order of tens of megabytes and have read/write performances two to four
times slower than regular RAMs, depending on the hardware. The reader is referred to [CKKS89] for more details
on this technology.
The traditional approach to recovery has to be revisited in light of the availability of large main memories
and stable memories. On the one hand, traditional recovery techniques fall short of meeting the requirements of
high-performance database systems that incorporate very large volatile buffers. In such systems, the trade-off
between recovery and forward processing is sharpened and made more critical. On the other hand, by their
nature, stable memory devices are bound to advance the design of a recovery management subsystem.
The following points explain the impact of large main memories and stable memories on the approach to
recovery:
• The larger the database buffer, the less page replacement occurs. Therefore, in database systems where the
database buffer is huge, paging cannot be relied upon as the primary mechanism for propagating updates
to backup database on disk, since paging is expected to be a relatively rare activity. Many recent research
efforts go to the extreme with this trend, arguing that there are cases where the entire database can fit in memory, thus eliminating paging entirely (e.g., [DKO+84]). With few or no page replacements, checkpointing and keeping a stable copy of the database may become a very disruptive function.
• Typically, persistence and atomicity of transactions are guaranteed by performing disk I/O at certain critical points (e.g., flushing a commit log record at the end of a transaction). Stable memory enables divorcing atomicity and persistence concerns from slow disk I/O. This simple, yet promising, approach was explored in [CKKS89, DKO+84].
• Traditionally, a sequential I/O method, namely logging, is used to accommodate efficiently the book-keeping
needs of the recovery management system. Consequently, this information is a sequence of log records
lacking any helpful structure or organization. The availability of a stable memory provides the means for
maintaining some of the recovery book-keeping information in randomly accessible and fast memory.
In light of the above factors, we propose an alternative to the traditional approach to recovery management
in database systems. Our approach is based upon the principle that recovery activities should be performed
in an incremental fashion, concurrently with, and without impeding, transaction processing. The algorithms
we propose are motivated by the characteristics of an MMDB and exploit the technology of stable memory in a
genuine manner that differs from the numerous proposals for using these devices in transaction processing systems
(e.g., [Eic87, DKO+84]).
The techniques we propose concentrate on an incremental approach to restart processing and checkpointing in
MMDBs. We devise a scheme in which transaction processing resumes at once after a crash. Restoring data
objects is done incrementally and is guided by the demand of the new transactions. Our checkpointing scheme
capitalizes on the performance advantages of MMDBs without precluding the possibility of having some portions
of the database on secondary storage. The scheme's main feature is decoupling of recovery processing and
transaction execution, thereby almost eliminating the common effect of the former delaying the latter. The work
reported in this paper is a continuation of our earlier work in this area [Lev91, LS90].
Our intention in this paper is to emphasize the principles of an incremental approach to recovery processing
rather than present an involved implementation. We first develop incremental recovery techniques that are
based on physical entry logging for a simple page-based model. Then we use this algorithm as a module in the
construction of an incremental restart algorithm based on operation logging and multi-level transactions.
The paper is organized as follows. In Section 2 we briefly survey why conventional recovery techniques are
not suitable for MMDBs. Section 3 outlines a page-based recovery model that is used in the construction of the
lower layer of our architecture. The model and terminology established in this section are used in the rest of the
paper. The principles that should underlie a sound design are presented in Section 4. The incremental techniques
we propose are described in Section 5, and proved correct in Section 6. Several improvements to the architecture
are proposed in Section 7. The applicability of our methods for high-level recovery management, which is not
page-based, is elaborated in Section 8. Related work is reviewed in Section 9. We sum up with conclusions in
section 10.
2 The Deficiencies of Conventional Approaches
We concentrate on the subjects of checkpointing a large buffer, and restart processing. Later, we propose an
integrated solution for these problems that does not possess the deficiencies outlined in this section and thus is
more suitable for MMDBs.
2.1 Checkpointing a Large Buffer
To illustrate the problem of checkpointing large buffers, consider the direct checkpointing technique, variants of
which are offered as the checkpoint mechanism for MMDBs [Eic87, SGM87a]. A direct checkpoint is a periodic
dump of the main memory database to disk, and is essential for the purposes of recovering from a system crash.
Consider a naive checkpointing algorithm which simply halts transaction processing and dumps the main memory
database to disk. For a database size in the order of Gbytes, execution of this algorithm takes hundreds of seconds
during which no transactions are processed! Moreover, as sizes of databases and memory chips are increasing
rapidly, the problem will become more severe. Indeed, contemporary direct checkpointing algorithms are much
more sophisticated and efficient than this naive algorithm, but still the periodic sweep of the main memory that
guides the dumping to the disk is the basis to all of them. Therefore, any variation of direct checkpointing is
bound to delay transaction processing to a considerable extent.
Many of the proposed algorithms and schemes for MMDBs rely on the explicit assumption that the entire
database is memory-resident [GMS90, LN88, SGM87b, DKO+84]. Although other proposals acknowledge that
this assumption is not valid for practical reasons, the issue is not addressed directly in their designs [LC87, Hag86,
Eic86]. Even though the size of main memory is increasing very rapidly, the size of future databases is expected to increase even more rapidly. Indeed, there are a number of commercial database management systems in existence with a terabyte or more of active data. We stress that the assumption that the database is only partially memory-resident must underlie the design of a practical database system.
2.2 Restart Processing
The notion of a restart procedure is common to a variety of transaction processing systems that rely on logging
as a recovery mechanism. After a system crash, the restart procedure is invoked in order to restore the database
to its most recent consistent state. Restart has to undo the effects of all incomplete transactions, and to redo the
committed transactions, whose effects are not reflected in the database. Restart performs its task by scanning
a suffix of the log. In some cases there are up to three sweeps of the suffix of the log (analysis, forward, and
backward sweeps [Lin80, MHL+90]).
There are two major activities that contribute to the delay associated with restart processing. First, the
log suffix must be read from disk to facilitate the undoing and redoing of transactions. Second, bringing the
database up-to-date triggers a significant amount of updates that translate to substantial I/O activity. The
interval between consecutive checkpoints largely determines how long performing these two activities would take
[Reu84, CBDU75]. The longer the interval, the more log records are generated and accordingly more transactions are
to be undone and redone by restart. The key point is that normal transaction processing is resumed only after
restart's termination. That is, standard restart processing is accounted as part of the down-time of the system.
The maximum tolerable down-time is a very important parameter, and in certain cases the delay caused by
executing restart is intolerable. In systems featuring high transaction rates, for instance, restart has to be fast
since even a short outage can cause a severe disruption in the service the system provides [Moh87]. We argue
that the standard approach to restart is not appropriate in advanced database management systems featuring
huge storage capacity and high transaction rates, since recovering the entire database by replaying the execution
would contribute significantly (in the order of minutes) to the down-time of the system.
3 A Page-Based Recovery Model
In the sequel we use the following terms and assumptions to define our model. The model is simplified for ease
of exposition.
On the lowest level, a database can be viewed as a collection of data pages that are accessed by transactions
issuing read and write operations. Pages are stored in secondary storage and are transferred to main memory
buffer to accommodate reading and writing. A buffer manager controls the transfer of individual pages between
secondary and main memory by issuing flush and fetch operations to satisfy the reading and writing requests of
executing transactions. Flush transfers and writes a page from the buffer to secondary storage. Flushing a page
to secondary storage is made atomic by stable storage techniques (e.g, [Lam81]). A page is brought to the buffer
from secondary storage by issuing the fetch operation. If the buffer is full, a page is selected and flushed, thereby
making room for the fetched page. It is assumed that executing a read or a write is not interrupted by page
flushes.
Abstractly, a log is an infinite sequence (in one direction) of log records which document changes in the database
state. A suffix of the sequence of log records is stored in a log buffer in memory, and is occasionally forced to
secondary storage, where the rest of the log is safely stored. We refer to the portion of the log in main memory
as the log's tail. Whenever a page is updated by an active transaction, a record that describes this update is
appended to the log tail. In order to save log space, each update log record includes only the old and new state
(also called the before and after images) of the affected portion of the updated page, along with an indication of
that portion (e.g., an offset and length of affected portion) [Lin80]. Such a logging method is called entry logging
(or partial physical logging).
Concurrency control is achieved through the use of a locking protocol. Appropriate locks must be acquired
prior to any access to the database pages. We emphasize that (at this stage) locking granularity is entire pages,
and that the protocol produces strict schedules with respect to pages [BHG87]. Granularity of locking is refined
in Section 8. Strict locking means that only one active transaction can update a page at any given time.
In order to present our algorithms formally and precisely we introduce the following terminology and notation.
At any given instant there are three images (or states) associated with each page x:
• Current image. If x is currently in main memory then its image there is its current image; otherwise its current image is found in secondary memory. The current image of x is denoted current[x].
• Backup image. The image of x as found in secondary memory at this particular instant, regardless of relevant log information. The backup image of x is denoted backup[x].
• Committed image. The image that reflects the updates performed by the last committed transaction so far. The committed image of x is denoted committed[x].
The committed image of a page may not be realized directly in either secondary or main memory. However, it should always be possible to restore the committed image by applying log records to the backup image. Following a crash, the backup image of the database pages is available on secondary storage. It may not reflect updates of committed transactions (depending on the buffer management policy) and may reflect updates of aborted ones. That is, it differs from the committed image.
We use the term Primary Database (PDB) to denote the set of database pages that reside in main memory.
The set of backup pages stored on secondary storage is referred to as the Backup Database (BDB). The BDB is
an instance of the entire database, and the PDB is just a subset of the database pages.
Following a crash, the restart procedure brings the database up-to-date based on the BDB and the log. During
normal operation, updates to the PDB are propagated to the BDB keeping it close to being up-to-date (an activity
we refer to as checkpointing).
We use the following terminology to denote the properties of a page x. We say that:
• page x is dirty iff backup[x] ≠ current[x]
• page x is stale iff backup[x] ≠ committed[x]
• page x is up-to-date iff current[x] = committed[x]
Conversely, when x is not dirty, we say that x is clean, and similarly we say that x is fresh when it is not stale. It follows from our definitions that a page that does not reside in main memory is clean. These three notions (dirty, stale, up-to-date) are central to recovery management.
We use the variable x:dirty to denote the clean/dirty status of page x. Whenever a PDB page is updated this variable is set. Conversely, once a page x is flushed, x:dirty is cleared and we say that x:clean holds.
In the sequel, x:dirty is interchangeable with the phrase "x is dirty", and similarly for x:clean and "x is clean" (x:clean is simply the negation of x:dirty). Notice that, using our terminology, a page may be dirty and up-to-date. Such a situation arises when the committed image of the page has not been propagated to the BDB.
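The three images and the derived predicates can be stated compactly in a sketch; reading the (truncated) up-to-date definition as current[x] = committed[x] is the interpretation adopted above.

    def is_dirty(current, backup):
        return backup != current          # page changed in memory since it was last flushed

    def is_stale(backup, committed):
        return backup != committed        # BDB misses some committed updates

    def is_up_to_date(current, committed):
        return current == committed       # current image reflects the last committed state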
4 Principles Underlying the Architecture
First, we list the principles that should constitute a good design of a recovery component for a MMDB.
• Large memory and larger database. The database systems for which we target our study are characterized by
having a very large database buffer, and an even larger physical database. It is assumed that by exploiting
the size of the buffer, the disk-resident portion of the database is accessed infrequently. By adhering to this
principle, we guarantee that the approach capitalizes on the performance advantages offered by MMDBS,
without precluding the possibility of having some portions of the database on secondary storage.
• Redo-only BDB. Having a very large buffer, it is anticipated that page replacements are not going to be very
frequent or very urgent. Therefore, there is no need to complicate recovery by propagating uncommitted
updates to the BDB (i.e., the steal policy [HR83] should not be used). By enforcing this principle, a stale
page is brought up-to-date by only redoing missing updates; there are no updates to undo. This principle
will contribute to fast and simple recovery management.
• Decoupling of transaction and recovery processing. Transaction processing should be interrupted as little
as possible by recovery-related overhead. Otherwise, as noted earlier, the performance opportunities in
MMDBs would remain unexploited. This principle can be satisfied only by virtually separating recovery
and transaction processing.
We incorporate the above principles in the proposed architecture. We do not assume an entirely-resident
MMDB, in the spirit of the first principle. Consequently, we deal with buffer management issues. The second
principle is enforced by insisting on using the no-steal buffer management policy. Namely, only updates of
committed transactions are propagated to the BDB. This is an explicit assumption of our design.
The preservation of the third principle is the crux of the problem. Fortunately, stable memory is the technology
that enables promoting this principle. In the architecture we propose, the log tail is stored in stable memory.
Committing a transaction, thereby making its updates persistent, is guaranteed by writing the commit log record
to the log tail in stable memory. Any further recovery activity is totally separated from transaction processing.
We emphasize that in the architecture we propose the log tail is kept in stable memory (i.e., non-volatile RAM).
By making the fast stable memory the only point of friction between transaction and recovery processing we
achieve the goal of decoupling the two as much as possible.
5 The Incremental Techniques
There are two techniques that are integrated in our architecture:
• Log-driven backups: The key idea is to use log records as the means for propagating updates to the BDB rather than relying on page flushes.
• Fresh/Stale Marking: Maintaining in stable memory a "freshness" status of each database page. Consequently, restart processing is simplified and made very fast.
We first review each of these techniques separately.
5.1 Log-Driven Backups
The flow of log records in our architecture is a central element to the understanding the log-driven technique.
The abstraction we are using here is that of a stream of log records that continuously flows from a component
to its successor in a pipelined fashion. These components manipulate the log records and pass them along to the
next component down the pipeline. The flow of log records is depicted schematically in Figure 1.
Log records are produced by active transactions as they access the PDB, and are appended to the log tail.
There, a component referred to as the accumulator processes the stream of log records as follows before it
forwards them to the next stage in the pipeline. Log records of active transactions are queued and delayed until
the transaction either commits or aborts. If a transaction aborts, its log records are used for the undoing of the
corresponding updates on the relevant PDB pages and then discarded. Log records of committed transactions
are grouped together on a page-basis and then transferred to the next stage in the pipeline. That is, all records
documenting updates to a certain database page are grouped together. Thus, the accumulator filters out log
records of active and aborted transactions and forwards only log records of committed transactions grouped on a
database page basis. The accumulator operates entirely within the non-volatile stable memory. Observe that log
records that pass the accumulator are Redo-Only log records, and have no before-image information since they
document only committed updates.
Next in the pipeline are two parallel components: the logger and the propagator. The logger flushes log records
to the log disk in order to make room in the (limited-size) stable memory.
The task of the propagator is to update the BDB pages to reflect the modifications specified by the log records.
In order to amortize page I/O, the accumulator groups log records that belongs to the same page together, so that
the propagator will apply them all in a single I/O. Since the updates of the BDB are driven by the log records,
we coin the name log-driven backups accordingly.
Notice that the propagator applies to the BDB updates of only committed transactions. In effect, following the
accumulator, there are only Redo log records. These log records are grouped on a database page basis. They are
written to the log on disk by the logger, and are used to guide a continuous update of the BDB by the propagator.
When rearranging the log records, the accumulator can also reorder the records to minimize seek-time when the
propagator applies the corresponding updates to the BDB.
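The accumulator's behavior, as described above, can be sketched as follows; the log-record fields (transaction id, page id, offset, after-image) are assumptions made for the illustration, not the record layout of the actual system.

    from collections import defaultdict

    class Accumulator:
        """Sketch of the accumulator: holds log records of active transactions in
        stable memory and, at commit, releases redo-only records grouped by page."""

        def __init__(self):
            self.pending = defaultdict(list)          # txn id -> its queued log records

        def append(self, txn, page, offset, after_image):
            self.pending[txn].append((page, offset, after_image))

        def abort(self, txn):
            return self.pending.pop(txn, [])          # records used for in-memory undo, then dropped

        def commit(self, txn):
            groups = defaultdict(list)                # page id -> redo records for that page
            for page, offset, after in self.pending.pop(txn, []):
                groups[page].append((offset, after))
            return groups                             # forwarded to the logger and the propagator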
Figure 1: A Schematic View of the Architecture (transaction processing appends log records to the log tail in stable memory; the accumulator forwards redo-only log records to the logger and the propagator; the buffer manager exchanges pages with the BDB via fetch and flush; the marker monitors page flushes and updates to the BDB and maintains the stale/fresh marking in stable memory)
The pipeline of log records can be efficiently mapped onto a multi-processor shared-memory architecture. In
particular, the propagator and the logger tasks can be carried out by dedicated processors. This way, recovery-
related I/O is divorced from the main processor that executes the transactions processing activity.
The timing of discarding log records from the stable memory presents a trade-off. A log record may be
discarded only after it is written to the log disk by the logger. However, such an early discarding implies that
if the record has not yet been processed by the propagator, then its update will not be reflected in the BDB
(since it skipped the propagator processing stage). The propagator can fetch the missing records from the disk
log but this would really delay the propagation. Alternatively, the pages whose updates where skipped by the
propagator can be marked stale (see below on how the marking is managed), thereby postponing handling of the
missing updates to a later time. These difficulties can be avoided when log records are not discarded from stable
memory before they have been processed by the propagator. However, the trade-off arises as it is anticipated
that the propagator would lag behind the logger because the former performs random access I/O whereas the
latter performs sequential I/O. In [LS90] we analyze this trade-off and propose to use a RAID I/O architecture
[PGK88] for the propagator in order to balance the I/O load between the logger and the propagator.
Independently from the log-driven activity, database pages are exchanged between the buffer and the BDB,
as dictated by the demands of the executing transactions. The Buffer Manager is in charge of this exchange. We
emphasize that the buffer manager flushes only pages that reflect updates of already committed transactions-
the no-steal policy. Observe that the principle of Redo-Only BDB is implemented by both sources of updates to
the BDB; the buffer manager as well as the propagator.
Conceptually, the scheme could have been designed without flushing database pages at all. That is, propagating
updates by the propagator would have been the sole mechanism for keeping the BDB up-to-date. The problem
with such an approach is that page fetching must be delayed until the most recent committed values are applied
by the propagator. Such a delay of transaction processing is intolerable. Since only committed database pages are
flushed (no-steal buffer management), flushing can serve as a very effective means for keeping the BDB up-to-date.
The fact that the BDB is updated by both the propagator and by flushing buffered pages must be considered
with care. First, one should wonder whether these double updates do not interfere with the correctness of this
scheme. Second, since two identical updates are redundant, one of them should be avoided for performance reasons.
Regarding correctness, a problem arises when the propagator writes an older image of a page, overwriting the
most up-to-date image that was written when the page was previously flushed by the buffer manager. If the page
is fetched before the up-to-date images are written to the BDB by the propagator, transactions read inconsistent
data. The problem can be solved by imposing the following Safe-Fetch rule:
The propagator applies updates only to database pages that are in the PDB. Updates pertaining to
pages that are not in the PDB are ignored by the propagator.
Notice that because of this rule, a page that is fetched from the BDB was last modified when it was flushed by
the buffer manager. Therefore, the page is up-to-date when it is fetched to the PDB. The rule is referred to as
Safe-Fetch since it ensures that a page fetched from the BDB is always up-to-date (except for following a crash).
Implementing Safe-Fetch implies that the propagator should know which pages are in the PDB. We assume
that the propagator initially knows which pages are in the PDB, and it is notified about each page replacement by
the buffer manager. We assume that the propagator and the buffer manager share some memory for this purpose.
Alternatively, since a single I/O controller serves I/O requests of both the propagator and the buffer manager,
enforcing Safe-Fetch can be implemented by a smart controller. In any case, since page flushes are assumed to be
infrequent, implementing this rule should not incur too much of an overhead.
Besides the correctness aspect, Safe-Fetch enables the heavily loaded propagator to avoid processing some
log records. Safe-Fetch deals with cases where a page was flushed to the BDB before the corresponding updates
were applied to the BDB by the propagator. I/O activity can be reduced considering the opposite case too, by
imposing the following Single-Propagation rule:
When all of the log records corresponding to a page have been applied to the BDB by the propagator,
flushing that page to the BDB is useless. In such a case, the buffer manager can simply discard the
page without issuing a flush to the BDB.
This rule can be easily implemented using the log-sequence-number (LSN) mechanism [MHL+90]. Flushing
of the page can be avoided if the page's LSN is at most the LSN of the page that was last written by the propagator.
In this case, we are assured that all the updates have been applied to the BDB already (by the propagator) and
there is no need to flush the page.
Implementing Single-Propagation can be very effective in large memory systems, where we assume that paging
activity is quite rare. By the time a page needs to be flushed to the BDB, it is quite possible that all the
relevant updates have been propagated to the BDB by the propagator. We emphasize that incorporating Single-
Propagation is only for performance reasons, and has nothing to do with correctness. By enforcing Safe-Fetch and
Single-Propagation, the combination of propagator updates and page flushes as means for update propagation is
made optimal.
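The two rules can be reduced to simple checks, as in the following sketch; tracking the propagator's progress by a per-page LSN is one possible reading of the text and an assumption of this illustration.

    def propagator_should_apply(page_id, pdb_resident):
        """Safe-Fetch rule: the propagator applies a group of redo records only
        to pages currently resident in the PDB; updates to non-resident pages
        are ignored, since those pages were already up-to-date when flushed."""
        return page_id in pdb_resident

    def flush_needed(page_lsn, propagated_lsn):
        """Single-Propagation rule: flushing a page is redundant if every update
        up to the page's LSN has already been applied to the BDB by the
        propagator (propagated_lsn is the LSN of the last record it applied
        for this page)."""
        return page_lsn > propagated_lsn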
The log-driven backups technique ensures that the gap between the committed and backup images of the
database is not too wide. The technique is well-suited to MMDBs where most of the time all the accesses are
satisfied by the PDB.
5.2 Stale/Fresh Marking
The goal of the marking technique is to enable very fast restart after a crash. The key observation is that
transaction processing can be resumed as soon as the system is up, provided that access to stale pages is
denied until these pages are recovered and brought up-to-date. An attempt by a transaction T to access a page
x triggers the following algorithm:
if x is stale then begin
   fetch the backup image of x;
   retrieve all the relevant log records for x from the log;
   apply these log records to x's image in order to make x up-to-date
end;
let T access x
To support this approach to restart, a stale/fresh marking that indicates which pages are (potentially) stale
needs to be implemented. The updates needed to bring a stale page up-to-date are always Redo updates because
of our assumptions. The log records with the missing updates can be found either in the log tail or on the log disk
according to the trade-off presented earlier regarding the timing of discarding a log record from stable memory.
In [Lev91] we elaborate on how to support efficient retrieval of the needed log records from a disk.
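A sketch of this on-demand recovery step follows; the helpers for fetching the backup image, retrieving the relevant redo records and applying them are assumptions, and clearing the marking is deliberately left to the Flush-Fresh rule introduced below.

    def access_page(x, stale, fetch_backup, redo_records_for, apply_redo):
        """On-demand restart (sketch): a stale page is brought up-to-date the
        first time a post-crash transaction touches it."""
        image = fetch_backup(x)                      # backup image from the BDB
        if stale.get(x, False):
            for rec in redo_records_for(x):          # redo-only records from the log tail / log disk
                image = apply_redo(image, rec)
            # x:stale is cleared only once the recovered image reaches the BDB
            # again (Flush-Fresh), not here.
        return image                                 # the transaction may now access x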
The stale/fresh marking of data pages is the crux of the algorithm. The marking enables resuming transaction
processing immediately after a crash, while preserving the consistency of the database. Typically, the log
stores enough information to deduce the stale/fresh status of pages. However, this information is not available
immediately. The marking also controls the recovery of data pages one by one according to the transactions'
demands. In order for the algorithm to be practical, it is critical both to maintain the stale/fresh marking in main memory and to have it survive a crash. Therefore, we underline the decision to maintain the stale/fresh
marking in stable memory. We do not elaborate on how to manage the marking efficiently. However, in light of
the scale of current databases, an appropriate data structure holding page IDs that supports efficiently inserts,
deletes, and searches is deemed crucial. Observe that the functions of the analysis pass [MHL+90] in standard
restart procedures are captured by the stale/fresh marking, and are ready for use by restart without the need to
analyze the log first.
The partition of the set of the backup pages into a set of stale pages and a set of fresh ones varies dynamically
as transaction processing progresses. There are two events that trigger transitions in that partition:
• the commit event of an updating transaction, and
• the updates to BDB pages by either the buffer manager or the propagator.
When a transaction commits, its dirty pages become stale since they were not written to the BDB (see rule
Dirty-Stale below). When flushing occurs, the transitions depend on whether the page is committed or not.
Since we enforce the no-steal policy, we consider only flushing a committed page - an event that makes the page
fresh (see rule Flush-Fresh below).
Based on the above transitions we present a reactive algorithm that manages a stale/fresh marking of pages
to indicate whether they are stale or fresh. In order to present the algorithm formally, we introduce the following
variables and conventions:
1. Each page x is assigned a boolean variable x:stale that is used for the stale/fresh marking. This set of
variables is the only data structure that is maintained in stable memory. All other data structures are kept
in volatile memory and are lost in a crash. We stress that the boolean variables are introduced only to
present the algorithm, and we do not intend to implement them directly.
2. Each transaction T is associated with a set, T:writeset, that accumulates the IDs of the pages it modifies.
The algorithm is given by the following two rules, each of which includes an assignment that is coupled with the
temporal event that triggers it:
Dirty-Stale: Prior to the commit point of an updating transaction T : for every page x in T:writeset, x:stale := true
Flush-Fresh: Upon flushing a dirty page x: x:stale := false
An assignment and its triggering event need not be executed as an atomic action. All that is required is that
no events that affect the variables we have introduced occur between the triggering event and the corresponding
assignment. The key idea in the algorithm is to always set x:stale to true just prior to the event that actually
causes x to become stale. As a consequence, a situation where x:stale holds but x is still fresh is possible.
Likewise, falsifying x:stale is always done just following the event that causes x to become fresh.
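The two rules can be read as event handlers, sketched below under the assumption that the stable-memory booleans are represented by a set of page IDs; the hook names are invented for the sketch.

    # Sketch of the two marking rules as event handlers.
    stale_marking = set()     # page IDs x for which x:stale currently holds (stable memory)

    def before_commit(writeset):
        """Dirty-Stale: runs just prior to the commit point of an updating transaction."""
        for x in writeset:
            stale_marking.add(x)          # x:stale := true

    def after_flush(x):
        """Flush-Fresh: runs just after a (committed) dirty page x is flushed to the BDB."""
        stale_marking.discard(x)          # x:stale := false

Because before_commit runs strictly before the commit point and after_flush strictly after the flush, a crash between an event and its handler can only leave a page marked stale while it is in fact fresh, which is the safe direction (Lemma 1 below).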
We illustrate the marking scheme with the following example.
Example 1. Consider the following three transactions 1 that read and write (R/W) the pages a, b and c.
The following sequence lists the write operations of T ij on page x (w ij (x)), the commit points of T ij (c ij ), and
page flushes (flush(x)), in their order of occurrence in a certain execution that is interrupted by a crash.
After the crash, a:stale holds (by Dirty-Stale prior to c 21 ), b:stale does not hold (by Flush-Fresh after flush(b)),
and c:stale also does not hold (by Flush-Fresh after flush(c)). Note that T 11 and T 21 are committed whereas T 12
has to be aborted. We say that T 11 and T 21 are winner transactions, whereas T 12 is a loser transaction. Using
the marking, only the updates of the winner transactions to page a need to be redone, since only a is marked
stale. 3
5.3 The Integrated Architecture
To summarize the integrated architecture we list the five components we have introduced and their corresponding
functionality. We refer the reader to Figure 1 for a schematic description of this architecture.
• Buffer manager: Enforces the no-steal policy.
• Accumulator: Operates entirely within the stable memory. Accumulates log records as they are produced
by transactions and forwards log records of committed transactions. In order to amortize page I/O, the
accumulator groups log records that belong to the same page together, so that the propagator will apply
them all in a single I/O.
• Propagator: Applies page updates to the BDB based on Redo log records.
• Logger: Writes Redo log records to the log on disk.
• Marker: Reacts to page flushes by the buffer manager and BDB updates by the propagator and maintains
the fresh/stale marking in stable memory.
1 We use double subscripts for transactions since the same example is used again in the context of subtransactions in Section 8.
6 Correctness Aspects
We prove two claims that underlie the correctness of our integrated architecture. The correctness of the marking
algorithm is stated concisely by the hypothesis of Lemma 1 below:
Lemma 1. At all times, in particular following a crash, if a page x is stale then x:stale holds. Formally: for
every page x, (x is stale) ⇒ x:stale.
Proof. Consider the state space formed by the variables we have introduced. We model the execution
of transactions and the fetching and flushing of pages as transitions over that state space. We prove the claim by
showing that the invariant holds initially and that it is preserved by each of these transitions.
Assuming that initially all pages are fresh, the invariant holds vacuously when the algorithm starts. Flushing
a page is modeled as an assignment to backup[x], and committing a page is modeled as an assignment to
committed[x]. There are four state transitions that may affect the validity of the invariant: an execution of
the assignment statement specified in one of the rules Dirty-Stale or Flush-Fresh, the commitment of an updating
transaction, and the flushing of a page. We prove that the invariant holds by showing that each of these state
transitions preserves the invariant:
• Rule Dirty-Stale: Under no circumstances can setting x:stale to true violate the invariant.
• Commit of T : Consider an arbitrary page x updated by the just-committed transaction T (i.e., x ∈
T:writeset). Since a strict concurrency protocol is employed at the page level, we are assured that no other
transaction has updated x subsequently to T 's update and before T 's commitment. If x is dirty, then
T 's commitment renders it stale. However, since the assignment in Dirty-Stale is executed prior to the
commitment of T , x:stale holds, and the invariant still holds.
• Flushing x: According to our assumptions regarding the buffer management policy, flushing a page x always
renders it fresh (since only committed pages are flushed). Therefore, the invariant holds vacuously.
• Rule Flush-Fresh: Since this rule's execution immediately follows the flushing of x, x is fresh after the
flush, and hence falsifying x:stale preserves the invariant.
Thus, the invariant holds. 2
It should be realized that if x:stale holds it does not necessarily mean that x is indeed stale; however, the
converse implication does hold, as stated in Lemma 1. Hence, notice that x:stale and "x is stale" are not
interchangeable.
Lemma 2. For all pages x, if x is not in the PDB, then x is fresh.
Proof. A backup page can be updated by either the buffer manager or the propagator. If a page is not
in the PDB the propagator does not update it because of the Safe-Fetch rule. Regarding the buffer manager,
flushing a page is allowed only if the page is committed. Therefore, all pages that are not in the PDB are fresh.
7 Improvements
In this section we present several possible enhancements and refinements to the techniques we have presented.
7.1 Improving Restart Processing
Using the fresh/stale marking, post-crash transactions can access fresh pages as soon as the system is up. An
attempt to access a stale page triggers the recovery of that individual page. The transaction that requested this
access is delayed until the page is recovered. Interestingly, aided by the marking, post-crash transactions can be
allowed even greater flexibility. Indeed, stale pages cannot be read by post-crash transactions; however, writing
data items in a stale page is possible.
Figure 2: Lock compatibility matrix (requested locks R, W against held locks R, W, RS).
One way to view this improvement is to consider a new type of locks, called restart locks, that lock all stale
pages, and no other pages, after a crash. An imaginary restart transaction acquires these locks as soon as the
system is rebooted and before post-crash transactions are processed. In Figure 2, we present the lock compatibility
matrix for the three lock modes read (R), write (W), and restart (RS). Since restart locks are not requested,
but rather are held by convention by the restart transaction, the compatibility matrix lacks the request column
for the new lock type. An entry with "X" in this table means that the corresponding locks are incompatible.
Observe that restart locking does not interfere with the normal concurrency control. This can be shown by
observing that the imaginary restart transaction is a two-phased transaction that is serialized before any post-
crash transaction that attempts to access a stale page. Also, restart locking cannot introduce deadlocks, since
the restart transaction is granted the RS locks on all the stale-marked pages unconditionally at reboot time.
An RS lock held on a stale page x is released when the page is brought up-to-date. This happens only when
x is explicitly brought up-to-date by the incremental restart procedure, by applying log records to the backup
image.
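The compatibility rules implied by Figure 2 can be written down directly; the concrete entries below are inferred from the surrounding text (RS blocks read requests on stale pages but allows writes) rather than copied from the figure, so treat them as an illustration.

    # Lock compatibility inferred from the text: rows are held locks, columns are
    # requested locks; True means compatible.
    COMPATIBLE = {
        #          requested: R      W
        "R":  {"R": True,  "W": False},
        "W":  {"R": False, "W": False},
        "RS": {"R": False, "W": True},   # stale pages: reads denied, writes allowed
    }

    def may_grant(requested, held_modes):
        """Grant 'requested' (R or W) only if it is compatible with every held lock."""
        return all(COMPATIBLE[h][requested] for h in held_modes)

    # Example: a post-crash write on a stale page is allowed, a read is not.
    assert may_grant("W", ["RS"]) is True
    assert may_grant("R", ["RS"]) is False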
A write of a stale page results in an update log record containing only the after image of the update, since
the page has not been recovered yet. Such a log record will actually affect the relevant page once the page is
recovered and brought up-to-date (unless the transaction that generated the record aborts).
In summary, the above protocol allows post-crash transactions to be processed concurrently with the incremental
restart processing. Some transactions are scheduled without being delayed by the recovery activity at all,
and some are delayed only as a result of recovering data items they need.
7.2 Further Improvements
In this subsection, we briefly mention several points that can further improve an implementation of the incremental
restart algorithm.
• RS-locking can be used to combine incremental and standard restart for different sets of pages, thereby
avoiding the need to maintain stale/fresh marking for too many pages. The set of pages that are recovered
using standard restart should be RS-locked until they are made consistent. Only predicted 'hot spot' data
can be supported by incremental restart (and the stale/fresh marking). This improvement allows a very
attractive and flexible use of incremental restart even in very large databases.
• Background process(es) can recover the remaining portions of the database, while priority process(es) recover
pages demanded by executing transactions. Once a page is recovered and made consistent, the RS lock can
be released. This technique provides even greater concurrency between restart and transaction processing.
• It is not necessary to log restart activities in order to guarantee the idempotence of restart. It is advised, though, to
flush previously stale pages that are made up-to-date, thereby marking them fresh. Doing this will save
recovery efforts in case of repeated failures.
• Assuming a very large number of pages for which stale/fresh marking is managed using a sophisticated
data structure, updating the marking data structure can become a bottleneck. A queue in stable memory
that records recent updates to the marking can prevent this undesirable phenomenon. Applying the queued
updates to the actual marking data structure can take place whenever the CPU is not heavily loaded, as sketched below.
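A sketch of such a staging queue follows; the deque stands in for a queue kept in stable memory, and both routine names are assumptions.

    from collections import deque

    pending = deque()         # queued updates: (page_id, make_stale)
    stale_marking = set()     # the actual (possibly tree- or hash-based) marking structure

    def enqueue_marking_update(page_id, make_stale):
        """Cheap append executed by the marker on the critical path."""
        pending.append((page_id, make_stale))

    def drain_pending():
        """Apply queued updates to the marking structure when the CPU is lightly
        loaded; after a crash the same routine runs before the marking is consulted."""
        while pending:
            page_id, make_stale = pending.popleft()
            if make_stale:
                stale_marking.add(page_id)
            else:
                stale_marking.discard(page_id)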
8 Incremental Recovery for High-Level Recovery Management
A common way of enhancing concurrency is the use of semantically-rich operations instead of the more primitive
read and write operations. Having semantically-rich operations allows refining the notion of conflicting versus
commutative operations [BR87, Wei88]. It is possible to examine whether two operations commute (i.e., do
not conflict); such operations have the nice property that they can be executed concurrently. Semantics-based
concurrency control is often cited as a very attractive method for handling high contention to data (i.e., 'hot
spots') [Wei88]. The problem, however, is that the simple state-based (i.e., physical) recovery
methods no longer work correctly in conjunction with these operations. Only operation logging, referred to also
as logical-transition logging [HR83], can support this type of enhanced concurrency. For instance, consider the
increment and decrement operations which commute with each other and among themselves. A data item can be
incremented concurrently by two uncommitted transactions. If one of the transactions aborts, its effect can be
undone by decrementing the item appropriately. However, reverting to the before image may erase the effects of
the second transaction also, resulting in an inconsistent state.
One of the problems of using operation logging is that the logged high-level operations may be implemented
as a set of lower-level operations, and hence their atomicity is not guaranteed. Therefore, when logged operations
are undone or redone after a crash, they should not be applied to a backup database that reflects partial effects
of operations. Therefore, a key assumption in any operation logging scheme is that operations must appear as
though they were executed atomically. This requirement is a prerequisite to any correct application of operation
log records to the BDB at restart time, and is referred to as high-level action atomicity in [WHBM90]. As an
illustration, we mention System R [G+81], which employs operation logging. There, at all times, the BDB is in an
operation-consistent state - a state that reflects the effects of only completed operations, and no partial effects
of operations. This property is obtained by updating the BDB atomically, and only at checkpoint time, using
a shadowing technique [Lor77]. At restart, the operation log is applied to the consistent shadow version of the
database.
The problem of implementing operation logging is best viewed as a multi-level recovery problem. A very
elegant and simple model of (standard, non-incremental) multi-level recovery is introduced in [WHBM90]. In
what follows, we make use of that model to construct an incremental multi-level recovery scheme.
A transaction consists of several high-level operations. A high-level operation is defined over fine-granularity
items (e.g, tuples, records), and is implemented by several base-level primitives that collectively may affect more
than a single page. The base-level primitives are read and write, which affect single pages - primitives that are
consistent with our page-level model.
In other words, transactions are nested in two levels. Serializability of transactions is enforced by a multi-level
concurrency control that uses strict two-phase locking at each level [BSW88].
Recovery is also structured in two levels. Our page-based incremental method constitutes the base recovery.
It ensures persistence and atomicity of higher-level operations and not of complete transactions. That is, the
high-level operations are regarded as transactions as far as the base recovery module is concerned. Persistence of
a committed transaction is obtained as a by-product of the persistence of its operations (i.e., if all operations of a
transaction have committed, then the transaction itself has committed). Observe that both the log-driven backups
and marking algorithms refer to operations rather than transactions in the current context. Any occurrence of a
transaction there should be substituted with an operation.
We still require that dirty pages are not flushed unless the operation that updated them is committed (i.e.,
no-steal policy with respect to operations is enforced). This is not a major restriction since operations update a
small number of pages. Imposing this restriction also helps avoid the extra overhead due to the hierarchical
layering. Consequently, the log of the base recovery, called the base log, is a redo log and there is no need to
perform base-level undo at restart.
The high-level recovery is based on operation logging and it guarantees atomicity of complete transactions.
The high-level log is separate from the base log and it holds only high-level undo information. The high-level
Undo log does not participate in the log-driven backups flow, and may, in fact, be implemented as a traditional log
on disk. The overall plan is to use the base recovery to redo committed transactions and committed operations,
thereby bringing the BDB to an operation-consistent state, and then apply high-level undo in order to undo the
operations of loser transactions.
Since the high-level log deals with Undo log records, it should obey the Write-Ahead-Log (WAL) rule.
In our case, since updates are not propagated before the commit of an operation, the WAL
rule means that the high-level undo record should be written to the high-level log prior to the commit point of
the corresponding operation.
By structuring the high-level recovery on top of our incremental restart method, we intend to give the overall
recovery scheme incremental flavor. The major challenge in making this multi-level recovery scheme incremental
is the fact that we can no longer treat single pages as the individual unit for recovery, since operations affect
several pages. Had we used single pages, we would have violated the high-level action atomicity requirement
mentioned above. For this reason we devise the notion of a recovery unit (RU). An RU is a set of pages, such that
it is not possible for any high-level operation to affect more than one RU. For instance, if an INSERT operation
is used for updating both index and data files, then the index and the corresponding data file constitute an RU.
It is the responsibility of the base recovery to bring an RU to an operation-consistent state before any high-level
undo can be applied to it.
When a post-crash transaction requests to access an RU, the incremental restart algorithm is applied to all the
pages of that RU. Once this phase is completed, the RU is in an operation-consistent state. Then, the high-level
recovery brings the RU to its committed state by applying the high-level undo operations for loser transactions
in the reverse order of the appearance of the corresponding log records. To facilitate fast restoration of individual
RUs, high-level log records should be grouped on an RU basis on the high-level log (see [Lev91] for techniques for
grouping log records). A high-level undo operation is treated as a regular operation, keeping both base and high-level
logging in effect. Care should be taken to undo only operations whose effect actually appears in the backup
database (the high-level action idempotence requirement of [WHBM90]). Therefore, the base recovery passes to
the high-level recovery an indication which of the operations of loser transactions were winner operations, and
hence were redone, in the base recovery phase.
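Putting the two levels together for a single RU gives roughly the following shape; every parameter here is an assumed stand-in for structures the paper leaves abstract.

    # Illustrative two-level recovery of one RU on demand.
    def recover_ru(ru_pages, base_recover_page, high_level_log, losers, redone_ops):
        """ru_pages: pages of the RU; base_recover_page(x): base-level Redo of page x;
        high_level_log: (txn, op_id, undo_action) records for this RU, oldest first;
        losers: loser transactions; redone_ops: operations actually redone at the base level."""
        # Phase 1: base recovery brings the RU to an operation-consistent state.
        for x in ru_pages:
            base_recover_page(x)
        # Phase 2: high-level undo of loser transactions, in reverse log order, but
        # only for operations whose effects actually reached the backup database.
        for txn, op_id, undo_action in reversed(high_level_log):
            if txn in losers and op_id in redone_ops:
                undo_action()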
By partitioning the database into RUs, the incremental effect is obtained. RUs can be of coarse granularity,
thereby diminishing the benefits of incremental restart. For example, an entire relation and the corresponding
index structure must be recovered before a post-crash transaction may read any of the tuples. This observation
calls for as small RUs as possible.
Example 2. Consider again the three transactions of Example 1. This time, however, T 11 and T 12 are
high-level operations (subtransactions of T 1 ), and T 21 is the sole operation of T 2 . The same sequence of events is
used. The stale/fresh marking of a, b and c, and the winner/loser status of the operations remain as in Example
1 in this execution. Pages a, b and c constitute an RU, and the high-level log for that RU is as follows (we
represent the logged undo information for operation T ij
In terms of transactions, T 1 is a loser, whereas T 2 is a winner. Base recovery for the three pages takes place
exactly as in Example 1 (i.e., only page a is recovered). In the high-level recovery phase, only T 11 is undone, since
T 12 was a loser in the base-level. 3
The presented scheme is not efficient, mainly because it performs excessive log I/O while committing the high-level
operations. A more efficient version of the scheme would probably employ the improvements outlined in
the second approach in [WHBM90]. The goal of presenting the above scheme was only to demonstrate how
incremental restart can be used as the base for a more complex and higher-level recovery, using the modular
multi-level model of [WHBM90].
9 Related Work
The work reported in this paper is a continuation of our earlier work in this area [Lev91, LS90]. A general
stale/fresh marking algorithm that is not based on no-steal buffer management is presented in [Lev91].
A proposal for incremental restart is presented in [LC87] in the context of a main-memory database (MMDB).
Stable memory is used extensively to implement this approach. There are several aspects that distinguish our
work from the work in [LC87]. Some aspects there are peculiar to entirely-resident MMDBs. Namely, there is no
consideration of paging activity. Integrating full-fledged operation logging is not discussed in [LC87] at all. Also,
the stale/fresh partition and the improvements it entails are lacking from the work in [LC87].
Delaying restart activities was first described in [Rap75]. There, restart does not perform any recovery activity.
Instead, reading a data item triggers a validity check that finds the committed version of the data item that should
be read. The incremental restart procedure we propose resembles this early work in that data items are recovered
only once they are read.
A more conventional approach to speeding up restart is proposed in [MP91] in the context of the ARIES
transaction processing method. The idea there is to shorten the redo pass of conventional restart by performing
selective redo. Instead of repeating the history by redoing all the actions specified in the log, only those actions
specified in winner log records are redone. It is also mentioned there that undo of loser transactions can be
interleaved with the processing of new transactions if locks (similar to RS-locks) protect the uncommitted data
items updated by the loser transactions. During the analysis pass of restart, the identity of these data items is
discovered, whereas in our scheme such data items are already marked as stale.
The concept of deferred restart (which is similar to incremental restart) is discussed in [MHL+90] also in the
context of ARIES. It is mentioned that in IBM's DB2 redo/undo for objects that are off-line can be deferred.
The system remembers the LSN ranges for those objects and makes sure that they are recovered once they are
brought on-line and before they are made accessible to other transactions. DB2 employs physical, page-level
logging. Problems related to logical undoing and deferred restart are also discussed in [MHL+90]. Our work
differs from the ARIES work in exploiting stable memory and in presenting a simple algorithmic description of
the fundamentals of incremental restart in the context of both physical and operation logging.
Another noteworthy approach to fast restart is the Database Cache [EB84]. There, dirty pages of active
transactions are never flushed to the backup database. At restart, the committed state is constructed immediately
by loading the recently committed pages from a log device (called safe there). The main disadvantages of
this approach are that locking is supported only at the granularity of pages, full-page physical logging is used
in contrast to our entry logging, update-intensive transactions need to be treated specially, and that commit
processing includes a synchronous I/O. The DB cache idea is refined to accommodate finer granularity locking
in [MLC87], however this extension does not deal with operation logging and concurrency among semantically
compatible operations.
Work on improving restart processing is reported in [Moh91]. The approach there is to adapt the passes
of traditional restart and admit new transactions during these passes. Also, associating freshness status with
uncommitted pages is discussed there and in [Moh90].
A thorough survey of different MMDBS checkpointing policies, their impact on overall recovery issues, and
their performance can be found in [SGM87b].
Next, we compare our log-driven backups scheme with several variations of MMDBS checkpointing:
• Checkpointing interferes in one way or another with transaction processing, since both activities compete
for the PDB and the main CPU. Taking a consistent checkpoint requires bringing transaction activity to
a quiescent state, since a transaction-consistent checkpoint reflects a state of the database as produced by
completed transactions. In the extreme case, transactions have to be aborted to guarantee the consistency
of the checkpoint [Pu86]. Even in fuzzy algorithms, which do not produce consistent checkpoints [Hag86],
memory contention is inevitable since both normal transactions and the checkpointer must access the very
same memory. By contrast, in the log-driven backups scheme, transaction processing and propagation to the
BDB do not use the same memory and may use different processors. This separation is the key advantage
of the scheme.
• It has been observed in [SGM87b] that consistent checkpoints must be supported by two copies of the
database on secondary storage, since there is no guarantee that the entire checkpoint will be atomic. More
precisely, there is always one consistent checkpoint of the entire database on secondary storage that was
created by the penultimate checkpoint run, while the current run creates a new checkpoint. This problem
does not arise in the log-driven backups technique since the propagation to the BDB is continuous and not
periodic.
• It is not clear how checkpointing algorithms can be adjusted to support our assumption of a partially resident
database. The correctness of these algorithms may be jeopardized by arbitrary fetching and flushing of
database pages. It seems that fuzzy checkpointing, which is the simplest type of checkpointing algorithm,
can be adapted for such purposes, but this deserves separate attention. On the other hand, since the
log-driven design is predicated on a partial-residence assumption, it can accommodate partially-resident
databases efficiently by enforcing the Safe-Fetch and Single-Propagation rules.
The above comparison favors the log-driven approach. Among the rest, fuzzy algorithms seem to be close com-
petitors. We note that fuzzy algorithms stand out (considering CPU overhead during normal operation) according
to the performance evaluation studies of Salem and Garcia-Molina [SGM87b].
We should note that other methods that are log-driven in spirit can be found in [Eic86] and [LN88]. It is
interesting to note that in [Eic86], log records of a transaction are marked after the transaction has committed,
so that only log records of committed transactions would affect the BDB. It should also be mentioned that a log-
driven approach is often used to manage remote backups for disaster recovery purposes (e.g., [KGMHP88, Tan87]).
10 Conclusions
The increasing size of contemporary databases, and the availability of stable memory and very large physical
memories are bound to impact the requirements from, and the design of, recovery components. In particular, for
checkpointing and restart processing, the traditional approach becomes inappropriate for high rates of transactions
and very large databases. An incremental approach, that exploits the new technological advances, is a natural
solution. In this paper we described in a high-level manner such a solution.
The main thrust of this paper is the design of recovery techniques in a manner that would allow their interleaving
with normal transaction processing. The techniques exploit stable memory and are geared to meet
the demands of systems that incorporate large main memories. We have proposed both a restart algorithm (called
incremental restart) and a checkpointing-like technique (called log-driven backups) that operate in an incremental
manner, in parallel with transaction processing. The prominent original concepts motivating our design are as
follows:
• Associating restoration activities with individual data objects, and assigning priorities to these activities
according to the demand for these objects. Consequently, recovery processing is interleaved with normal
transaction processing. By contrast, the conventional restart procedure for example, treats the database as
a single monolithic data object, and enables resumed transaction processing only after its termination.
• A direct consequence of the previous point is the grouping of recovery-related information (e.g., log records)
on a per-data-object basis. This structuring is aimed at facilitating the efficient restoration of individual data
objects.
• Carrying out recovery processing and transaction execution in parallel implies decoupling the respective
resources to reduce contention as much as possible. In the log-driven backups technique both data and
processing resources for checkpointing are separate from the resources required for forward transaction
processing.
--R
Concurrency Control and Recovery in Database Systems.
The effect of large main memory on database systems.
Analytic models for rollback and recovery strategies in database systems.
The case for safe RAM.
Implementation techniques for main memory database systems.
A database cache for high performance and fast restart in database systems.
Main memory database recovery.
A classification and comparison of main memory database recovery techniques.
The recovery manager of the system R database manager.
System M: A transaction processing testbed for memory resident data.
Notes on database operating systems.
A crash recovery scheme for memory-resident database system
Principles of transaction oriented database recovery - a taxonomy
Management of a remote backup copy for disaster recovery.
Atomic transactions.
A recovery algorithm for a high-performance memory-resident database system
Incremental restart.
In Distributed Databases
Multiprocessor main memory transaction processing.
Physical integrity in a large segmented database.
ARIES: A transaction recovery method supporting fine-granularity locking and partial rollbacks using write-ahead logging
Finer grained concurrency for the database cache.
Directions in system architectures for high transaction rates.
Commit-LSN: A novel and simple method for reducing locking and latching in transaction processing systems
ARIES-RRH: restricted repeating of history in the ARIES transaction recovery method.
A case for redundant arrays of inexpensive disks (RAID).
File structure design to facilitate on-line instantaneous updating
Performance analysis of recovery.
Checkpointing memory-resident databases
Crash recovery for memory-resident databases
Tandem Computers Corporation.
627561 | Using Tickets to Enforce the Serializability of Multidatabase Transactions. | To enforce global serializability in a multidatabase environment the multidatabase transaction manager must take into account the indirect (transitive) conflicts between multidatabase transactions caused by local transactions. Such conflicts are difficult to resolve because the behavior or even the existence of local transactions is not known to the multidatabase system. To overcome these difficulties, we propose to incorporate additional data manipulation operations in the subtransactions of each multidatabase transaction. We show that if these operations create direct conflicts between subtransactions at each participating local database system, indirect conflicts can be resolved even if the multidatabase system is not aware of their existence. Based on this approach, we introduce optimistic and conservative multidatabase transaction management methods that require the local database systems to ensure only local serializability. The proposed methods do not violate the autonomy of the local database systems and guarantee global serializability by preventing multidatabase transactions from being serialized in different ways at the participating database systems. Refinements of these methods are also proposed for multidatabase environments where the participating database systems allow schedules that are cascadeless or transactions have analogous execution and serialization orders. In particular, we show that forced local conflicts can be eliminated in rigorous local systems, local cascadelessness simplifies the design of a global scheduler, and that local strictness offers no significant advantages over cascadelessness. | Introduction
A MULTIDATABASE SYSTEM (MDBS) [1], [2] is a facility
that supports global applications accessing
data stored in multiple databases. It is assumed that the
access to these databases is controlled by autonomous and
possibly heterogeneous Local Database Systems (LDBSs).
The MDBS architecture (Figure 1) allows local transactions
and global transactions to coexist. Local transactions
are submitted directly to a single LDBS, while the
multidatabase (global) transactions are channeled through
the MDBS interface. The objectives of a multidatabase
transaction management are to avoid inconsistent retrievals
and to preserve the global consistency in the presence of
multidatabase updates. These objectives are more difficult
to achieve in MDBSs than in homogeneous distributed
database systems because, in addition to the problems
caused by data distribution that all distributed database
D. Georgakopoulos is with the Distributed Object Computing De-
partment, GTE Laboratories, Incorporated, 40 Sylvan Road, MS-62,
Waltham, MA 02254.
M. Rusinkiewicz is with the Department of Computer Science, University
of Houston, Houston,
A. Sheth is with Bellcore, 444 Hoes Lane, Piscataway, NJ 08854.
Fig. 1. Multidatabase system architecture.
systems have to solve, transaction management mechanisms
in MDBSs must also cope with heterogeneity and
autonomy of the participating LDBSs.
The most important heterogeneities from the perspective
of transaction management are dissimilarities in (i)
the transaction management primitives and related error
detection facilities available through the LDBS interfaces,
and (ii) the concurrency control, commitment, and recovery
schemes used by the LDBSs.
Local autonomy is the most fundamental assumption of
the MDBS concept. Autonomy specifies the degree of independence
and control the LDBSs have over their data.
Since total autonomy means lack of cooperation and com-
munication, and hence total isolation, some less extreme
notions of LDBS autonomy have been proposed in the literature
[3], [4], [2], [5]. Garcia-Molina and Kogan [4] explored
the concept of node (site) autonomy in the context
of a distributed system. Veijalainen [3] classifies the
LDBS autonomy requirement into design autonomy, execution
autonomy, and communication autonomy. In addition
to these notions of autonomy, Sheth and Larson [2] identify
additional LDBS properties that preserve association au-
tonomy. In this paper, we consider that LDBS autonomy
is not violated if the following two conditions are satisfied:
1. The LDBS is not modified in any way.
2. The local transactions submitted to the LDBS need
not to be modified in any way (e.g., to take into account
that the LDBS participates in a MDBS).
In a multidatabase environment the serializability of local
schedules is, by itself, not sufficient to maintain multi-database
consistency. To ensure that global serializability
is not violated, local schedules must be validated by the
MDBS. However, the local serialization orders are neither
reported by the local database systems, nor can they be
determined by controlling the submission of global sub-transactions
or observing their execution order. To determine
the serialization order of the global transactions at
each LDBS, the MDBS must deal not only with direct conflicts
that may exist between the subtransactions of multi-database
transactions, but also with the indirect conflicts
that may be caused by local transactions. Since the MDBS
has no information about the existence and behavior of local
transactions, determining if an execution of global and
local transactions is globally serializable is difficult. An
example illustrating this problem is presented in the next
section.
Several solutions have been proposed in the literature
to deal with this problem; however, most of them are not
satisfactory. The main problem with the majority of the
proposed solutions is that they do not provide a way of assuring
that the operation execution order of global trans-
actions, which can be controlled by the MDBS, is reflected
in the local serialization order of the global transactions
produced by the LDBSs. For example, it is possible that a
global transaction G i is executed and committed at some
LDBS before another global transaction G j , but their local
serialization order is reversed. In this paper, we address
this problem by introducing a technique that disallows such
local schedules, and enables the MDBS to determine the serialization
order of global transactions in each participating
LDBS. Our method does not violate the local autonomy
and is applicable to all LDBSs that ensure local serializ-
ability. Unlike other solutions that have been proposed in
the literature, our technique can be applied to LDBSs that
provide interfaces at the level of set-oriented queries and
updates (e.g., SQL or QUEL).
Having established a method to determine the local serialization
order of global transactions in LDBSs, we introduce
optimistic and conservative methods that enforce
global serializability. In addition, we propose efficient
refinements of these methods for multidatabase environments
where the participating database systems use cas-
cadeless or rigorous schedulers [6], [7]. We show that multidatabase
scheduling is simplified in multidatabase environments
where all local systems are cascadeless. Further
simplifications are possible if LDBSs use one of the many
common schedulers that ensure that transaction serialization
orders are analogous to their commitment order. We
show that in such multidatabase environments the local serialization
order of global transactions can be determined
by controlling their commitment order at the participating
LDBSs. Although we address the problem of enforcing
global serializability in the context of a multidatabase sys-
tem, the solutions described in this paper can be applied
to a Distributed Object Management System [8].
This paper is organized as follows. In Section II, we
identify the difficulties in maintaining global serializability
in MDBSs and review related work. The multidatabase
model and our assumptions and requirements towards local
database management systems are discussed in Section
III. In Section IV, we introduce the concept of a ticket
and propose the Optimistic Ticket Method (OTM) for multidatabase
transaction management. To guarantee global
serializability, OTM requires that the LDBSs ensure local
serializability. In Section V, we introduce the Conservative
Ticket Method (CTM) that also requires global transactions
to take tickets but is free from global restarts. Variations
of OTM and CTM that use simpler global schedulers
but work only in multidatabase systems in which all local
systems are cascadeless are presented in Section VI. In
Section VII we introduce the concept of implicit tickets
and propose the Implicit Ticket Method (ITM) which does
not require subtransaction tickets but works only in multi-database
environments where the participating LDBSs are
rigorous. Integrating the methods above in mixed multi-database
schedulers is discussed in Section VIII. Finally,
in Section IX, we summarize our results.
II. Problems in maintaining global
serializability and related work
Many algorithms that have been proposed for transaction
management in distributed systems are not directly
applicable in MDBSs because of the possibility of indirect
conflicts caused by the local transactions. To illustrate this
point consider Figure 2 which depicts the execution of two
multidatabase transactions G 1 and G 2 , and a local transaction
T 1 . If a transaction G i reads a data object a, we
draw an arc from a to G i . An arc from G i to a denotes
that G i writes a. In our example, the global transactions
have subtransactions in both LDBSs. In LDBS 1 , G 1 reads
a and G 2 later writes it. Therefore, G 1 and G 2 directly
conflict in LDBS 1 and the serialization order of the transactions
is G 1 → G 2 . In LDBS 2 , G 1 and G 2 access different
data items: G 1 writes c and later G 2 reads b. Hence, there
is no direct conflict between G 1 and G 2 in LDBS 2 . How-
ever, since the local transaction T 1 writes b and reads c,
G 1 and G 2 conflict indirectly in LDBS 2 . This indirect
conflict is caused by the presence of the local transaction
T 1 . In this case, the serialization order of the transactions
in LDBS 2 becomes G 2 → T 1 → G 1 .
In a multidatabase environment, the MDBS has control
over the execution of global transactions and the operations
they issue. Therefore, the MDBS can detect direct conflicts
involving global transactions, such as the conflict between
G 1 and G 2 at LDBS 1 in Figure 2. However, the MDBS has
no information about local transactions and the indirect
conflicts they may cause. For example, since the MDBS
has no information about the local transaction T 1 , it cannot
detect the indirect conflict between G 1 and G 2 at LDBS 2 .
Although both local schedules are serializable, the global
schedule is non-serializable, i.e. there is no global order
involving G 1 , G 2 and T 1 that is compatible with both local
schedules.
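The incompatibility of the two local serialization orders can be checked mechanically by collecting the conflict edges of both sites into one graph and looking for a cycle. The interleaving used below is one execution consistent with the description of Figure 2, chosen for illustration; it is not claimed to be the exact schedule of the figure.

    # Global serialization graph for schedules in the spirit of Figure 2.
    # Each schedule is a list of (transaction, action, item) in execution order.
    ldbs1 = [("G1", "r", "a"), ("G2", "w", "a")]
    ldbs2 = [("T1", "r", "c"), ("G1", "w", "c"), ("G2", "r", "b"), ("T1", "w", "b")]

    def conflict_edges(schedule):
        edges = set()
        for i, (t1, a1, x1) in enumerate(schedule):
            for t2, a2, x2 in schedule[i + 1:]:
                if t1 != t2 and x1 == x2 and "w" in (a1, a2):
                    edges.add((t1, t2))      # t1 must precede t2 in any serial order
        return edges

    edges = conflict_edges(ldbs1) | conflict_edges(ldbs2)
    # edges == {('G1','G2'), ('T1','G1'), ('G2','T1')}: the cycle G1 -> G2 -> T1 -> G1
    # shows that no global serial order is compatible with both local schedules.
    print(edges)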
In the early work in this area, the problems caused by indirect
conflicts were not fully recognized. In [9], Gligor and
Popescu-Zeletin stated that a schedule of multidatabase
transactions is correct if multidatabase transactions have
Fig. 2. Serial execution of multidatabase transactions may violate serializability.
the same relative serialization order at each LDBS they (di-
rectly) conflict. Breitbart and Silberschatz have shown [10]
that the correctness criterion above is insufficient to guarantee
global serializability in the presence of local transac-
tions. They proved that the sufficient condition for global
consistency requires multidatabase transactions to have the
same relative serialization order at all sites they execute.
The solutions to the problem of concurrency control in
MDBSs proposed in the literature can be divided into several
groups:
• Observing the execution of the global transactions
at each LDBS [11]. The execution order of global transactions
does not determine their relative serialization order at
each LDBS. For example, at LDBS 2 in Figure 2, the global
transaction G 1 is executed before G 2 , but G 2 precedes G 1
in the local serialization order there. To determine local
conflicts between transactions, Logar and Sheth [12] proposed
using the commands of the local operating system
and DBMS to "snoop" on the LDBS. Such an approach
may not always be possible without violating the autonomy
of the LDBS.
• Controlling the submission and execution order of
global transactions. Alonso et al. proposed to use site
locking in the altruistic locking protocol [13] to prevent
undesirable conflicts between multidatabase transactions.
Given a pair of multidatabase transactions G 1 and G 2 , the
simplest altruistic locking protocol allows the concurrent
execution of G 1 and G 2 if they access different LDBSs. If
there is a LDBS that both G 1 and G 2 need to access, G 2
cannot access it before G 1 has finished its execution there.
Du et al. [14] have shown that global serializability may
be violated even when multidatabase transactions are submitted
serially, i.e., one after the completion of the other,
to their corresponding LDBS. The scenario in Figure 2 illustrates
the above problem. G 1 is submitted to both sites,
executed completely and committed. Only then is G 2 submitted
for execution; nevertheless the global consistency
may be violated.
• Limiting multidatabase membership to the LDBSs
that use strict schedulers. By disallowing local executions
that are serializable but not strict, this approach
places additional restrictions on the execution of both
global and local transactions at each participating LDBS.
A solution in this category, called the 2PC Agent Method,
was proposed in [15]. The 2PC Agent Method assumes
that the participating LDBSs use two-phase locking (2PL)
[16] schedulers and produce only strict [17] schedules. The
basic idea in this method is that strict LDBSs will not
permit local executions that violate global serializability.
However, even local strictness is not sufficient. To illustrate
this problem, consider the LDBSs in Figure 2 and
the following local schedules:
The schedule at LDBS 1 is serial. In LDBS 2 , G 1 and G 2 are
both able to obtain read-locks and read b. Next, G 2 releases
its read lock on b and does not acquire any more locks.
G 1 is able to obtain a write lock and update b before G 2
commits. This execution is allowed by 2PL. Strictness in
2PL is satisfied if each transaction holds only its write-locks
until its end. Therefore, both schedules above are strict
and are allowed by 2PL. However, global serializability is
violated.
• Assuming conflicts among global transactions whenever
they execute at the same site. This idea has been
used by Logar and Sheth [12] in the context of distributed
deadlocks in MDBSs and by Breitbart et al. [18] for concurrency
control in the Amoco Distributed Database System
(ADDS). Both approaches are based on the notion of the
site graph. In the ADDS method, when a global transaction
issues a subtransaction to a LDBS, undirected edges are
added to connect the nodes of the LDBSs that participate
in the execution of the global transaction. If the addition
of the edges for a global transaction does not create a cycle
in the graph, multidatabase consistency is preserved and
the global transaction is allowed to proceed. Otherwise,
inconsistencies are possible and the global transaction is
aborted.
The site graph method does not violate the local autonomy
and correctly detects possible conflicts between
multidatabase transactions. However, when used for concurrency
control, it has significant drawbacks. First, the
degree of concurrency allowed is rather low because multidatabase
transactions cannot be executed at the same
LDBS concurrently. Second, since the site graph method
uses an undirected graph to represent conflicts, not all cycles
in the graph correspond to globally non-serializable
schedules. Third, and more importantly, the MDBS using
site graphs has no way to determine when it is safe to
remove the edges of a committed global transaction. The
edge removal policy used in the Serialization Graph Testing
algorithm [17] is not applicable in this case, since the site
graph is undirected. To illustrate this problem consider the
LDBSs in Figure 2 and the following local schedules:
Since G 1 and G 2 perform operations in both LDBSs, the
site graph that corresponds to the schedules above contains
a cycle between G 1 and G 2 . To resolve the cycle,
the site graph method aborts G 2 . Suppose that the edges
corresponding to G 1 are removed from the site graph immediately
following the commitment of G 1 . If G 2 is restarted
after the commitment of G 1 , it will be allowed to commit,
since there is no cycle in the site graph. Now suppose that
after G 2 commits, a local transaction T 1 issues its write operation on b and
commits. The execution of these operations results in the
schedules shown in Figure 2. These schedules are locally
serializable, but globally non-serializable. Therefore, if the
edges corresponding to a global transaction are removed
from the site graph immediately following its commitment,
global serializability may be violated.
The site graph method may work correctly if the removal
of the edges corresponding to a committing transaction is
delayed. However, concurrency will be sacrificed. In the
scenario represented by Figure 2, the edge corresponding
to G 1 can be removed after the commitment of the local
transaction T 1 . However the MDBS has no way of determining
the time of commitment or even the existence of the
local transaction T 1 . This problem has been recognized in
[6], [7].
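The bookkeeping of the site graph method can be sketched as follows; the union-find cycle test and the class interface are assumptions, and edge removal (the problematic part discussed above) is intentionally omitted.

    # Illustrative site graph check: undirected edges connect the sites a global
    # transaction accesses; a transaction is admitted only if no cycle is created.
    class SiteGraph:
        def __init__(self):
            self.parent = {}                       # union-find over site names

        def _find(self, s):
            self.parent.setdefault(s, s)
            while self.parent[s] != s:
                self.parent[s] = self.parent[self.parent[s]]
                s = self.parent[s]
            return s

        def admit(self, sites):
            """Try to add edges connecting 'sites'; return False (reject) on a cycle."""
            sites = list(sites)
            for a, b in zip(sites, sites[1:]):
                ra, rb = self._find(a), self._find(b)
                if ra == rb:
                    return False                   # a cycle would be created: abort
                self.parent[ra] = rb
            return True

    g = SiteGraph()
    print(g.admit(["LDBS1", "LDBS2"]))   # G1: admitted
    print(g.admit(["LDBS1", "LDBS2"]))   # G2 while G1's edges remain: rejected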
• Modifying the local database systems and/or ap-
plications. Pu [19] has shown that global serializability
can be ensured if LDBSs present their local serialization
orders to the MDBS. Since traditional DBMSs usually do
not provide their serialization order, Pu suggests modifying
the LDBSs to provide it. Pons and Vilarem [20] proposed
modifying existing applications so that all transactions (in-
cluding local ones) are channeled through multidatabase
interfaces. Both methods mentioned here preserve multi-database
consistency, but at the expense of partially violating
the local autonomy.
• Rejecting serializability as the correctness crite-
rion. The concept of sagas [21], [22] has been proposed
to deal with long-lived transactions by relaxing transaction
atomicity and isolation. Quasi-serializability [23] assumes
that no value dependencies exist among databases
so indirect conflicts can be ignored. S-transactions [24] and
flexible transactions [25] use transaction semantics to allow
non-serializable executions of global transactions. These
solutions do not violate the LDBS autonomy and can be
used whenever the correctness guarantees they offer are ap-
plicable. In this paper, we assume that the global schedules
must be serializable.
III. The multidatabase system model
Global transactions consist of a transaction begin oper-
ation, a partially ordered collection of read and write op-
erations, and a commit or abort (rollback) operation. In
the following discussion, we refer to the collection of the
read and write operations performed by a transaction T
as the database operations of T . We use the term transaction
management operations to refer to the non-database
operations performed by T .
The MDBS processes each global transaction G as fol-
lows. First, the MDBS decomposes G into subtransactions.
The decomposition of G is based on the
location of the data objects G accesses. For example, if G
accesses data objects on LDBS i , the MDBS issues a subtransaction
G i to carry out the operations of G at LDBS i .
We assume that subtransactions generated by the MDBS
satisfy the following requirements:
1. There is at most one subtransaction per LDBS for
each global transaction.
2. Like global transactions, subtransactions consist of
database operations and transaction management op-
erations. All subtransaction operations can be executed
locally by the LDBS. A subtransaction may
perform a prepare-to-commit operation before issuing
commit, if the LDBS provides this operation in its interface
3. Subtransactions have a visible prepared-to-commit
state.
We say that a transaction enters its prepared-to-commit
state [26] when it completes the execution of its database
operations and leaves this state when it is committed or
aborted. During this time, all updates reside in its private
workspace and become permanent in the database when
the transaction is committed. The prepared-to-commit
state is visible if the application program (in this case the
MDBS) can decide whether the transaction should commit
or abort. To process G, the MDBS submits the subtransactions
of G to their corresponding LDBSs. To ensure that
the logically indivisible action to commit or abort G is consistently
carried out in the participating sites, the MDBS
uses the two-phase commit (2PC) [26] protocol. Since
LDBSs may reside at remote sites, an MDBS agent process
is associated with each LDBS to submit G's operations to
the LDBS and handle the exchange and synchronization of
all messages to and from the MDBS.
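A sketch of this processing loop is given below; the agent interface (submit, prepare, commit, abort) and the location_of function are assumed names, not part of an MDBS interface defined in the paper.

    # Illustrative decomposition of a global transaction and 2PC-style commitment.
    def run_global_transaction(operations, location_of, agents):
        """operations: list of (action, item); location_of(item) -> LDBS name;
        agents: LDBS name -> agent object handling the local subtransaction."""
        # Decompose: at most one subtransaction per LDBS.
        subtransactions = {}
        for action, item in operations:
            site = location_of(item)
            subtransactions.setdefault(site, []).append((action, item))

        # Execute each subtransaction through its agent.
        for site, ops in subtransactions.items():
            agents[site].submit(ops)

        # Phase 1: all subtransactions must reach their (visible) prepared-to-commit state.
        if all(agents[site].prepare() for site in subtransactions):
            for site in subtransactions:          # Phase 2: commit everywhere
                agents[site].commit()
            return "committed"
        for site in subtransactions:              # otherwise abort everywhere
            agents[site].abort()
        return "aborted"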
A. Local database management systems assumptions
We assume that a LDBS provides the following features
without requiring any modification:
1. Permits only serializable and recoverable [17] schedules
2. Ensures failure atomicity and durability of transac-
tions. If a subtransaction fails or is aborted, the
DBMS automatically restores the database to the state
produced by the last (locally) committed transaction.
3. Supports the begin, commit and abort (rollback) trans-action
management operations. Each subtransaction
can either issue a commit and install its updates in
the database or issue an abort to roll back its effects.
4. Notifies the transaction programs of any action it
takes unilaterally. In particular, it is assumed that a
DBMS interface is provided to inform subtransaction
programs when they are unilaterally aborted by the
LDBS. For example, to resolve a deadlock, a DBMS
may roll back one (e.g., the youngest) of the transactions
involved and notify the killed transaction about
the rollback (e.g., by setting a flag in the program
communication area).
These features are supported by the majority of commercial
DBMSs, including 1 DB2, INGRES, ORACLE, and
SYBASE. Furthermore, all the features described above
comply with the SQL [27] and RDA [28] standards.
Most DBMSs use high level languages (e.g. SQL) to support
set-oriented queries and updates. In our discussion we
model global transactions, their subtransactions and local
transactions as collections of read and write operations. We
have chosen the read/write transaction model to simplify
the discussion of problems in enforcing global serializability
in a multidatabase environment, and we use this model to
describe corresponding solutions. However, the use of the
read/write model neither limits the generality of the solutions
proposed in this paper, nor makes it more difficult to
apply them in a LDBS that supports interfaces at the level
of set-oriented queries and updates. To illustrate this, we
have included an Appendix that discusses implementation-related
issues for LDBS using SQL interfaces.
B. The prepared-to-commit state in a multidatabase environment
Earlier in Section III, we listed the assumption that
subtransactions have a visible prepared-to-commit state.
Many database management systems, designed using the
client-server architecture (e.g., SYBASE) provide a visible
prepared-to-commit state and can directly participate in
a multidatabase system. On the other hand, if the LDBS
does not explicitly provide such a state, the MDBS can
simulate it [29], [30].
To simulate the prepared-to-commit state of a subtrans-
action, the MDBS must determine whether all database
operations issued by the subtransaction have been successfully
completed. One way to accomplish this is to force a
handshake after each operation, i.e., the MDBS must submit
the operations of each subtransaction one at a time
and wait for the completion of the previous database operation
before submitting the next one. Alternatively, the
RDA standard [28] allows asynchronous submission of several
database operations and provides a mechanism to inquire
about the status of each of them.
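As a concrete illustration of the handshake approach, the sketch below drives a subtransaction to its simulated prepared-to-commit state. It is only a sketch: submit_and_wait is a hypothetical callback standing in for whatever synchronous submission (or RDA status-inquiry) interface the MDBS agent actually uses.

```python
def reach_simulated_prepare(operations, submit_and_wait):
    """Drive one subtransaction to its simulated prepared-to-commit state.

    operations      -- the subtransaction's database operations, in program order
    submit_and_wait -- hypothetical callback: submits one operation to the LDBS and
                       blocks until the LDBS reports success (True) or failure (False)
    """
    for op in operations:
        # Handshake: the next operation is not submitted until the previous one
        # is known to have completed successfully at the LDBS.
        if not submit_and_wait(op):
            return False      # the subtransaction cannot reach the simulated state
    # All operations completed, nothing committed or aborted yet:
    # the subtransaction is in its simulated prepared-to-commit state.
    return True
```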
Consider the state of a subtransaction that has successfully
finished all its operations but is neither committed
nor aborted. To distinguish such a state from a prepared-
to-commit state, we refer to it as the simulated prepared-to-
commit state. The basic difference between the prepared-
to-commit state and the simulated prepared-to-commit
state is that a transaction in the simulated state has no
firm assurance from the DBMS that it will not be unilaterally
aborted. However, database management systems do
1 Any mention of product or vendors in this paper is done for background
information, or to provide an example of a technology for
illustrative purposes and should not be construed as either a positive
or negative commentary on that product or vendor. Neither inclusion
of a product or a vendor in this paper nor omission of a product or
a vendor should be interpreted as indicating a position or opinion of
that product or vendor on the part of the authors or of Bellcore. Each
reader is encouraged to make an independent determination of what
products are in the marketplace and whether particular features meet
their individual needs.
not unilaterally abort any transaction after it has entered
its simulated prepared-to-commit state. 2 Transactions in
this state cannot be involved in deadlocks because they
have successfully performed all their operations and have
acquired all their locks. The same is true for LDBSs that
use aborts and restarts to resolve conflicts. For example,
timestamp ordering [17] aborts a transaction when it issues
an operation that conflicts with some operation performed
earlier by a younger transaction. Therefore, timestamp ordering
schedulers never abort transactions after they have
successfully issued all their operations and entered their
simulated prepared-to-commit state. The behavior of optimistic
concurrency control protocols [32] is similar. No
transaction is ever aborted after it passes validation.
While DBMSs do not abort transactions in this state
for concurrency control and recovery reasons, it is possible
to argue that DBMSs must set timeouts to avoid having
"idle" transactions holding resources forever. However,
due to the difficulties in determining whether a subtransaction
is "idle" and for how long, the only timeouts set
by most DBMSs are on outstanding operations (e.g., in
SYBASE and ORACLE). Therefore, when the last read
or write operation of a subtransaction is completed, the
MDBS can be certain that the subtransaction has entered
a state which in practice is no different from the prepared-
to-commit state required by 2PC. In the rest of this paper,
we do not distinguish whether a visible prepared-to-commit
state is simulated or is provided by local systems. Additional
issues related to the problem of effectively providing
a prepared-to-commit state are discussed in [33].
IV. The Optimistic Ticket Method (OTM)
In this section, we describe a method for multidatabase
transaction management, called OTM, that does not violate
LDBS autonomy and guarantees global serializability
if the participating LDBSs ensure local serializability. The
proposed method addresses two complementary issues:
1. How can the MDBS obtain information about the relative serialization order of the subtransactions of global transactions at each LDBS?
2. How can the MDBS guarantee that the subtransactions of each multidatabase transaction have the same relative serialization order in all participating LDBSs?
In the following discussion, we do not consider site failures
(commitment and recovery of multidatabase transactions
are discussed, among others, in [34], [35], [30], [33]).
A. Determining the local serialization order
OTM uses tickets to determine the relative serialization
order of the subtransactions of global transactions at
each LDBS. A ticket is a (logical) timestamp whose value
is stored as a regular data object in each LDBS. Each
2 A deadlock avoidance technique [31] may abort a transaction holding a lock because some other transaction requests the same lock. This is the only policy we are aware of that may abort a transaction in its simulated prepared-to-commit state. Since its use
is limited in commercial DBMSs, we do not consider it in this paper
and assume that a transaction in the simulated prepared-to-commit
state is not aborted by its LDBS.
subtransaction of a global transaction is required to issue
the Take-A-Ticket operation, which consists of reading the value of the ticket (i.e., r(ticket)) and incrementing it (i.e., w(ticket)), as regular data manipulation operations. The value of a ticket and all operations on tickets
issued at each LDBS are subject to local concurrency control
and other database constraints. Only a single ticket
value per LDBS is needed. The Take-A-Ticket operation
does not violate local autonomy because no modification of
the local systems is required. Only the subtransactions of
global transactions have to take tickets 3 ; local transactions
are not affected.
Fig. 3. The effects of the Take-A-Ticket approach.
Figure 3 illustrates the effects of the Take-A-Ticket process
on the example in Figure 2. The ticket data objects at LDBS 1 and LDBS 2 are denoted by t 1 and t 2 , respectively.
In LDBS 1 , the t 1 values obtained by the subtransactions
of G 1 and G 2 reflect their relative serialization order. This
schedule will be permitted by the local concurrency controller
at LDBS 1 . In LDBS 2 the local transaction T 1 causes
an indirect conflict such that G 2 ! G 1 . However, by requiring the subtransactions to take tickets we force
an additional conflict G 1 ! G 2 . This additional ticket
conflict causes the execution at LDBS 2 to become locally
non-serializable. Therefore, the local schedule:
r
r G2 (b) wT1 (b)
will not be allowed by the local concurrency control (i.e.,
the subtransaction of G 1 or the subtransaction of G 2 or T 1
will be blocked or aborted).
On the other hand, if the local schedule in LDBS 2 were
for example:
3 This may create a "hot spot" in the LDBSs. However, since only
subtransactions of multidatabase transactions and not local LDBS
transactions have to compete for tickets, we do not consider this to
be a major problem affecting the performance of our method.
r G2 (b) wT1 (b)
the tickets obtained by G 1 and G 2 would reflect their relative
serialization order there. In this case, the local schedule
would be permitted by the local concurrency control
at LDBS 2 . Although the transactions in our example take
their tickets at the beginning of their execution, transactions
may take their tickets at any time during their life-time
without affecting the correctness of the Take-A-Ticket
approach. Theorem 1 formally proves that the tickets obtained
by the subtransactions at each LDBS are guaranteed
to reflect their relative serialization order.
Theorem 1: The tickets obtained by the subtransactions
of multidatabase transactions determine their relative serialization
order.
Proof: Let g i and g j be the subtransactions of global transactions
G i and G j , respectively, at some LDBS. Without
loss of generality we can assume that g i takes its ticket before g j , i.e., r g i (ticket) precedes r g j (ticket) in the local execution order. Since a subtransaction takes its ticket first and then increments the ticket value, only the following execution orders are possible:
E 1 : r g i (ticket) r g j (ticket) w g i (ticket) w g j (ticket)
E 2 : r g i (ticket) r g j (ticket) w g j (ticket) w g i (ticket)
E 3 : r g i (ticket) w g i (ticket) r g j (ticket) w g j (ticket)
However, among these executions only E 3 is serializable
and can be allowed by the LDBS concurrency control.
Therefore, g i increments the ticket value before g j reads
it and g j obtains a larger ticket than g i .
To show now that g i can only be serialized before g j , it
is sufficient to point out that the operations to take and increment
the ticket issued first by g i and then by g j create a
direct conflict g i ! g j . This direct conflict forces g i and g j
to be serialized according to the order in which they take
their tickets. More specifically, if there is another direct
conflict between g i and g j , such that g i ! g j (Figure 4 (a)), or an indirect conflict caused by local transactions, such that g i ! g j (Figure 4 (c)),
the resulting schedule is serializable and both g i and g j are
allowed to commit. In this case, g i is serialized before g j
and this is reflected by the order of their tickets. However,
if there is a direct conflict g j ! g i (Figure 4 (b)) or an indirect conflict g j ! g i (Fig-
ure 4 (d)), the ticket conflict creates a cycle in the
local serialization graph. Hence, this execution becomes
non-serializable and is not allowed by the LDBS concurrency
control. Therefore, indirect conflicts can be resolved
through the use of tickets by the local concurrency control
even if the MDBS cannot detect their existence. 2
An implementation of tickets and the Take-A-Ticket operation
in LDBSs using SQL is described in Appendix I.
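The SQL realization referred to above is given in Appendix I and is not reproduced here. The following is only an illustrative sketch of the Take-A-Ticket operation, using Python's standard sqlite3 module as a stand-in LDBS; the table and column names are made up for the example.

```python
import sqlite3

# Stand-in LDBS with a single ticket counter (table/column names are illustrative).
ldbs = sqlite3.connect(":memory:")
ldbs.execute("CREATE TABLE ticket (value INTEGER)")
ldbs.execute("INSERT INTO ticket VALUES (0)")
ldbs.commit()

def take_a_ticket(conn):
    """Read the ticket (r(ticket)) and write it back incremented (w(ticket)).

    Both statements are ordinary data manipulation operations, so they are
    subject to the LDBS's own concurrency control, as the method requires.
    """
    (t,) = conn.execute("SELECT value FROM ticket").fetchone()   # r(ticket)
    conn.execute("UPDATE ticket SET value = ?", (t + 1,))        # w(ticket)
    return t   # the subtransaction's ticket; the enclosing transaction commits later

print(take_a_ticket(ldbs))   # -> 0 for the first subtransaction at this LDBS
```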
B. Enforcing global serializability
To maintain global serializability, OTM must ensure
that the subtransactions of each global transaction have
the same relative serialization order in their corresponding
LDBSs [10]. Since the relative serialization order of
the subtransactions at each LDBS is reflected in the values
of their tickets, the basic idea in OTM is to allow the sub-
Fig. 4. The effects of ticket conflicts in OTM (panels (a)-(d); the figure distinguishes transaction conflicts from ticket conflicts).
transactions of each global transaction to proceed but commit
them only if their ticket values have the same relative
order in all participating LDBSs. This requires that all sub-transactions
of global transactions have a visible prepared-
to-commit state.
OTM processes a multidatabase transaction G as follows.
Initially, it sets a timeout for G and submits its subtransactions
to their corresponding LDBSs. All subtransactions
are allowed to interleave under the control of the LDBSs
until they enter their prepared-to-commit state. If they
all enter their prepared-to-commit states, they wait for the
OTM to validate G. The validation can be performed using
a Global Serialization Graph (GSG) test. 4 The nodes
in GSG correspond to "recently" committed global trans-
actions. For any pair of recently committed global transactions G c and G 0 c , GSG contains a directed edge G c ! G 0 c if at least one subtransaction of G c was serialized before (obtained a smaller ticket than) the subtransaction of G 0 c in the same LDBS. A strategy for node and edge removal
from the GSG is presented in Lemma 1 below.
Initially, GSG contains no cycles. During the validation
of a global transaction G, OTM first creates a node for
G in GSG. Then, it attempts to insert edges between G's
node and nodes corresponding to every recently committed
multidatabase transaction G c . If the ticket obtained by a
4 Other validation tests such as the certification scheme proposed in
[19] can be also used to validate global transactions.
subtransaction of G at some LDBS is smaller (larger) than
the ticket of the subtransaction of G c there, an edge G ! G c (respectively, G c ! G) is added to GSG. If all such edges can
be added without creating a cycle in GSG, G is validated.
Otherwise, G does not pass validation, its node together
with all incident edges is removed from the graph, and G
is restarted. This validation test is enclosed in a single
critical section. 5
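A minimal sketch of this validation step follows, assuming the MDBS records the ticket value each recently committed global transaction obtained at every LDBS; the data layout and function names are ours, not the paper's, and the whole call is meant to run inside the critical section mentioned above.

```python
def creates_cycle(edges, start):
    """Iterative DFS: is there a directed cycle reachable from `start`?"""
    seen, on_path = set(), {start}
    stack = [(start, iter(edges.get(start, ())))]
    while stack:
        node, it = stack[-1]
        nxt = next(it, None)
        if nxt is None:
            stack.pop()
            on_path.discard(node)
            continue
        if nxt in on_path:          # back edge -> cycle
            return True
        if nxt not in seen:
            seen.add(nxt)
            on_path.add(nxt)
            stack.append((nxt, iter(edges.get(nxt, ()))))
    return False

def validate(gsg, committed, g, g_tickets):
    """OTM validation test for global transaction `g`.

    gsg       -- adjacency sets {tid: set(tid)}; acyclic before the call
    committed -- {tid: {ldbs: ticket}} for the recently committed transactions in GSG
    g_tickets -- {ldbs: ticket} obtained by g's subtransactions
    """
    gsg[g] = set()
    for other, tickets in committed.items():
        for ldbs, t in g_tickets.items():
            if ldbs in tickets:
                if t < tickets[ldbs]:
                    gsg[g].add(other)                       # smaller ticket: g -> other
                else:
                    gsg.setdefault(other, set()).add(g)     # larger ticket: other -> g
    if creates_cycle(gsg, g):       # any new cycle must pass through g
        gsg.pop(g, None)            # g fails validation: drop node and incident edges
        for succ in gsg.values():
            succ.discard(g)
        return False                # restart g
    committed[g] = dict(g_tickets)  # g is validated and may commit
    return True
```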
G is also restarted, if at least one LDBS forces a sub-transaction
of G to abort for local concurrency control
reasons (e.g., local deadlock), or its timeout expires (e.g.,
global deadlock). If more than one of the participating
LDBSs uses a blocking mechanism for concurrency con-
trol, the timeouts mentioned above are necessary to resolve
global deadlocks.
The timeout assigned to a global transaction G is based
on a conservative estimate of the expected execution time
of G. If it is difficult to estimate the expected duration of
a global transaction G, an alternative solution is to set a
different timeout for each subtransaction of G. The latter
timeout strategy can be combined with a wait-for graph
(WFG). The WFG is maintained by the MDBS and has
LDBSs as nodes. If a cycle is found in the WFG, and
the cycle involves LDBSs that use a blocking scheme to
synchronize conflicting transactions, a deadlock is possible.
MDBSs that maintain a WFG can resolve global deadlocks
by setting timeouts only for operations issued at LDBSs
that are involved in a WFG cycle and, in addition, use
blocking to enforce local serializability and recoverability.
In this paper, we do not discuss timeout strategies further,
because the choice of the timeout strategy does not affect
the correctness of OTM. A decentralized deadlock-free refinement
of the Optimistic Ticket Method is described in
[38].
As we mentioned, the serialization graph must contain
only the nodes corresponding to recently committed global
transactions. Below we provide a condition for safe removal
of transaction nodes from the serialization graph.
Lemma 1: A node corresponding to a committed trans-action
G c can be safely removed from the serialization
graph if it has no incoming edges and all transactions that
were active at the time G c was committed are either committed
or aborted. When a node is removed from the
graph, all edges incident to the node can be also removed.
Proof: For a transaction node to participate in a serialization
cycle it must have at least one incoming edge. No
transaction started after the commitment of G c can take
its tickets before G c , so it cannot add incoming edges to
the node of G c . Since we assume that G c has no incoming
edges and all transactions that were active at the time G c
5 Including the validation test in a critical section has been originally
proposed by Kung and Robinson in [32]. Several schemes have
been proposed in the literature (e.g., the parallel validation schemes
in [32], [36]) to deal with the possibility of bottlenecks caused by
such critical sections. Although we could have adopted any of these
schemes, there is no evidence that they allow more throughput than
performing transaction validation serially, i.e., within a critical section
as in OTM. Most commercial implementations of optimistic concurrency
control protocols have chosen serial validation over parallel
validation for similar reasons (e.g., Datacycle [37]).
was committed are finished, the node corresponding to G c
will never be involved in a serialization cycle. Therefore, it
can be safely removed from the serialization graph. 2
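The small sketch below applies Lemma 1 repeatedly to prune the GSG; the bookkeeping structures (which transactions were active when a node committed, and which transactions have since finished) are our own assumptions about what the MDBS would track.

```python
def prune_gsg(gsg, active_when_committed, finished):
    """Remove GSG nodes that, by Lemma 1, can never join a serialization cycle.

    gsg                   -- adjacency sets {tid: set(tid)}
    active_when_committed -- {tid: set of transactions active when tid committed}
    finished              -- set of transactions that have since committed or aborted
    """
    changed = True
    while changed:                          # removing one node may free another
        changed = False
        incoming = {t for succs in gsg.values() for t in succs}
        for tid in list(gsg):
            if tid not in incoming and active_when_committed.get(tid, set()) <= finished:
                del gsg[tid]                # drops the node and its outgoing edges
                changed = True
```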
The following theorem proves the correctness of OTM.
Theorem 2: OTM guarantees global serializability if the
following conditions hold:
1. the concurrency control mechanisms of the LDBSs ensure
local serializability;
2. each multidatabase transaction has at most one sub-transaction
at each LDBS; and
3. each subtransaction has a visible prepared-to-commit
state.
Proof: We have already shown that the order in which sub-transactions
take their tickets reflects their relative serialization
order (Theorem 1). After the tickets are obtained
by a global transaction at all sites it executes, OTM performs
the global serialization test described earlier in this
section. Global transactions pass validation and are allowed
commit only if their relative serialization order is the
same at all participating LDBSs. Lemma 1 shows that the
the serialization test involving only the recently committed
transactions is sufficient to guarantee global serializability.C. Effect of the ticketing time on the performance of OTM
OTM can process any number of multidatabase transactions
concurrently, even if they conflict at multiple LDBSs.
However, since OTM forces the subtransactions of multi-database
transactions to directly conflict on the ticket, it
may cause some subtransactions to get aborted or blocked
because of ticket conflicts (Figure 4 (b)). Since subtransactions
may take their tickets at any time during their
lifetime without affecting the correctness of OTM, optimization
based on the characteristics of each subtransaction
(e.g., number, time and type of the data manipulation
operations issued or their semantics) is possible. For ex-
ample, if all global transactions conflict directly at some
LDBS, there is no need for them to take tickets. To determine
their relative serialization order there, it is sufficient
to observe the order in which they issue their conflicting
operations.
Choosing the right time to take a ticket during the
lifetime of a subtransaction can minimize the synchronization
conflicts among subtransactions. For example, if a
LDBS uses 2PL it is more appropriate to take the ticket
immediately before a subtransaction enters its prepared-
to-commit state. To show the effect of this convention consider
a LDBS that uses 2PL for local concurrency control
Figure
5 (a)). 2PL requires that each subtransaction sets
a write lock on the ticket before it increments its value.
Given four concurrent subtransactions g 1 , g 2 , g 3 , and g 4 , g 1 does not interfere with g 2 , which can take its ticket and commit before g 1 takes its ticket. Similarly, g 1 does not interfere with g 3 , so g 1 can take its ticket and commit before g 3 takes its ticket. However, when g 4 attempts to take its
ticket after g 1 has taken its ticket but before g 1 commits
and releases its ticket lock, it gets blocked until g 1 is committed.
Fig. 5. Preferred ticketing in LDBSs: (a) preferred ticketing in a LDBS using 2PL; (b) preferred ticketing in a LDBS using TO; (c) preferred ticketing in a LDBS using OCC.
The ticket values always reflect the serialization
order of the subtransactions of multidatabase transactions
but ticket conflicts are minimized if g 1 takes its ticket as
close as possible to its commitment time.
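One possible shape of this late-ticketing convention is sketched below; submit, take_a_ticket and report_prepared are hypothetical callbacks into the MDBS agent, not interfaces of any real DBMS.

```python
def run_subtransaction_2pl(operations, submit, take_a_ticket, report_prepared):
    """Late ticketing under 2PL: acquire the ticket write lock as late as possible."""
    for op in operations:
        if not submit(op):          # ordinary operations run first, under local 2PL
            return False
    # Taking the ticket only now keeps the write lock on the ticket object for the
    # shortest possible time, which minimizes ticket conflicts under 2PL.
    ticket = take_a_ticket()
    report_prepared(ticket)         # enter the (simulated) prepared-to-commit state
    return True
```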
If a LDBS uses timestamp ordering (TO) [17] (Figure 5
(b)), it is better to obtain the ticket when the subtransaction
begins its execution. TO assigns a timestamp ts(g 1 )
to a subtransaction g 1 when it begins its execution. Let g 2
be another subtransaction such that ts(g 1 ) < ts(g 2 ). If the ticket obtained by g 1 has a larger value than the ticket of g 2 , then g 1 is aborted. Clearly, if g 2 increments the ticket value before g 1 then, since g 2 is younger than g 1 , either r g1 (ticket) or w g1 (ticket) conflicts with the w g2 (ticket) and g 1 is aborted. Hence, only g 1 is allowed to increment the ticket value before g 2 . Similarly, if g 2 reads the ticket
before g 1 increments it, then when g 1 issues w g1 (ticket) it conflicts with the r g2 (ticket) operation issued before, and g 1 is aborted. Therefore, given that ts(g 1 ) < ts(g 2 ), either g 1 takes its ticket before g 2 or g 1 is aborted. Hence, it is
better for subtransactions to take their tickets as close as
possible to the point they are assigned their timestamps
under TO, i.e., at the beginning of their execution.
Another significant optimization can be used to completely
eliminate tickets in LDBSs that use TO schedulers.
Let g 1 and g 2 be a pair of subtransactions that do not take
tickets. Since transactions under the control of a TO scheduler
are assigned their timestamp some time between their
submission and the time they complete their first database
operation, the global scheduler can ensure that g 1 obtains
a local timestamp smaller than the timestamp of g 2 by
delaying the submission of g 2 until g 1 completes its first
database operation. By using this technique, the global
scheduler can ensure that the submission order of the sub-transactions
determines their local serialization order and
that g 1 is serialized before g 2 in the local system.
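A sketch of this submission-ordering technique is shown below; the two callbacks are hypothetical and simply model a submission interface that blocks until the first operation of a subtransaction has completed at its LDBS.

```python
def submit_with_implicit_to_tickets(subtxns, submit_first_op, submit_remaining):
    """Order local TO timestamps by controlling submission, so no explicit ticket is needed.

    subtxns          -- subtransactions in the desired serialization order
    submit_first_op  -- hypothetical callback: submits a subtransaction's first database
                       operation and blocks until it completes (its timestamp is now fixed)
    submit_remaining -- hypothetical callback: submits the rest of its operations
    """
    for g in subtxns:
        submit_first_op(g)      # g's local timestamp is assigned before the next submission
    for g in subtxns:
        submit_remaining(g)     # remaining operations may now interleave freely
```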
Finally, if a LDBS uses an optimistic concurrency control (OCC) scheduler, there is no best time for the subtransactions
to take their tickets (Figure 5 (c)). Transactions
under the control of OCC have a read phase that is followed
by a validation phase. OCC uses transaction readsets and
writesets to validate transactions. Only transactions that
pass validation enter a write phase. Thus, each subtransaction
g 1 reads the ticket value before it starts its (serial or parallel) validation but increments it at the end of its write phase. If another transaction g 2 is able to increment
the ticket in the meantime, g 1 does not pass validation and
is restarted.
The basic advantages of OTM are that it requires the
local systems to ensure only local serializability and that
the optimistic global scheduler imposes no restrictions on
the local execution of global transactions. Its main disadvantages
are the following:
• under optimistic scheduling, global restarts are possible;
• the global scheduler must maintain a GSG; and
• tickets introduce additional conflicts between global transactions which may not conflict otherwise.
In the following three sections we describe solutions that
address these issues, respectively.
V. The Conservative Ticket Method (CTM)
OTM does not affect the way in which the LDBSs handle
the execution of global transactions up to the point in
which their subtransactions enter their prepared-to-commit
state. Optimistic global schedulers based on uncontrolled
local execution of the global subtransactions, such as OTM,
are easier to implement and in some cases allow more concurrency
than conservative schedulers. However, since optimistic
global schedulers allow global transactions to take
their tickets in any order, they suffer from global restarts
caused by out-of-order ticket operations. To explain the
problem of global restarts consider a situation in which
a global transaction G i obtains its ticket before another
global transaction G j at some LDBS. If in another LDBS
G j is able to obtain its ticket before G i , the MDBS scheduler
aborts and restarts either G i or G j to disallow the
globally non-serializable execution of their ticket opera-
tions. More specifically, in multidatabase systems in which
the participating LDBSs use blocking for local concurrency
control, the incompatible orders in which G i and G j take
their tickets in different LDBSs cause a global deadlock. To
resolve such a global deadlock the OTM scheduler aborts
and restarts the global transaction whose timeout expires
first. If the LDBSs do not use blocking for local concurrency
control, then incompatible execution orders of ticket
operations cause a cycle in the GSG. In this case, the global
transaction that enters global validation last is rejected,
and the OTM scheduler aborts it.
In this section we describe CTM, a method for multi-database
transaction management that eliminates global
restarts. Like OTM, CTM requires subtransactions of
global transactions to take tickets at their corresponding
LDBSs. However, unlike OTM, CTM controls the order
in which the subtransactions take their tickets. To avoid
global restarts, CTM ensures that the relative order in
which global transactions take their tickets is the same in all participating LDBSs.
CTM requires that all subtransactions of global transactions
have a visible prepared-to-Take-A-Ticket state in
addition to a visible prepared-to-commit state. A subtransaction
enters its prepared-to-Take-A-Ticket state when it
successfully completes the execution of all its database operations
that precede the Take-A-Ticket operations and
leaves this state when it reads the ticket value. The visible
prepared-to-Take-A-Ticket state can be provided by the
multidatabase system by employing the same techniques
that simulate the prepared-to-commit state. For exam-
ple, one way to make the prepared-to-Take-A-Ticket state
of a subtransaction visible, is to force a handshake after
each database operation that precedes the Take-A-Ticket
operations. That is, if all operations that precede the
Take-A-Ticket operations are completed successfully, the
MDBS can be certain that the subtransaction has entered
its prepared-to-Take-A-Ticket state. We say that a global
transaction becomes prepared to take its tickets when all
its subtransactions enter their prepared-to-Take-A-Ticket
state.
CTM processes a set G of global transactions as follows.
Initially, the CTM sets a timeout for each global trans-action
in G, and then submits its subtransactions to the
corresponding LDBSs. The subtransactions of all global
transactions are allowed to interleave under the control
of the LDBSs until they enter their prepared-to-Take-A-
Ticket state. Without loss of generality, suppose that the
subtransactions of global transactions G 1 , G 2 , ..., G k in G become prepared to take their tickets before their timeouts expire. Furthermore, suppose that a subtransaction of G 2 enters its prepared-to-Take-A-Ticket state after all subtransactions of G 1 become prepared to take their tickets (i.e., G 1 becomes prepared to take its tickets before G 2 ), a subtransaction of G 3 becomes prepared to take its ticket after all subtransactions of G 2 enter their prepared-to-Take-A-Ticket state (i.e., G 2 becomes prepared to take its tickets before G 3 ), ..., and a subtransaction of G k enters its prepared-to-Take-A-Ticket state after all subtransactions of G k-1 become prepared to take tickets (i.e., G k-1 becomes prepared to take its tickets before G k ). The CTM
allows the subtransactions of such global transactions G 1 ,
to take their tickets in the following order: the subtransactions of G 1 take their tickets before the subtransactions of G 2 , the subtransactions of G 2 take their tickets before the subtransactions of G 3 , ..., and the subtransactions of G k-1 take their tickets before the subtransactions of G k .
Global transactions are allowed to commit only if all
their subtransactions successfully take their tickets and report
their prepared-to-commit state. On the other hand,
the MDBS aborts and restarts any multidatabase trans-action
that has a subtransaction that did not report its
prepared-to-commit state before its timeout expired. Local
optimizations discussed in Section IV-C can also be
applied on CTM.
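The sketch below shows one possible way to serialize the Take-A-Ticket phase as CTM requires; take_all_tickets is a hypothetical callback that issues the ticket operations of one global transaction at all of its LDBSs.

```python
from collections import deque

class CTMTicketing:
    """Take tickets strictly in the order transactions become globally prepared."""

    def __init__(self, take_all_tickets):
        self._ready = deque()                 # FIFO of globally prepared transactions
        self._take_all_tickets = take_all_tickets

    def globally_prepared(self, G):
        # Called once all of G's subtransactions report prepared-to-Take-A-Ticket.
        self._ready.append(G)

    def ticketing_pass(self):
        # Because every transaction takes its tickets at *all* LDBSs before the next
        # one starts ticketing, the relative ticket order is identical everywhere,
        # so no global restart due to out-of-order ticketing can occur.
        while self._ready:
            G = self._ready.popleft()
            self._take_all_tickets(G)
```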
Theorem 3: CTM guarantees global serializability and
it is free of global restarts if the following conditions are satisfied:
1. the concurrency control mechanisms of the LDBSs ensure
local serializability;
2. each multidatabase transaction has at most one sub-transaction
at each LDBS; and
3. each subtransaction has a visible prepared-to-Take-A-
Ticket and a visible prepared-to-commit state.
Proof: Without loss of generality, suppose that global
transactions in a set G become prepared to take their tickets
in the following order: G 1 , G 2 , ..., G k . Under the control of CTM, G 1 takes all its tickets before G 2 takes its tickets, G 2 takes its tickets before G 3 , ..., and G k-1 takes its tickets before G k . Since CTM ensures that the relative
order in which the subtransactions of each global transaction
take their tickets is the same in all participating LDBSs
and we have proven that the order in which the subtransactions
take their tickets reflects their relative serialization
order (Theorem 1), CTM guarantees global serializability
and avoids global restarts due to ticket conflicts. 2
Another important property of CTM is that it does not
require a GSG. Hence, the global CTM scheduler is simpler
than the global OTM scheduler. An optimistic scheduler
that does not require a GSG is described next.
VI. Cascadeless Tickets Methods
To ensure correctness in the presence of failures and
to simplify recovery and concurrency control, transaction
management mechanisms used in database management
systems often ensure not only serializability and recoverability
[17] but also one of the properties defined below:
ffl A transaction management mechanism is cascadeless
[17] if each transaction may read only data objects
written by committed transactions.
ffl A transaction management mechanism is strict [17] if
no data object may be read or written until the transactions
that previously wrote it commit or abort.
Many commercial DBMSs allow only strict schedules to
eliminate cascading aborts and also to be able to ensure
database consistency when before images are used for
database recovery.
From the perspective of the multidatabase scheduler, the
cascadelessness of the LDBSs is important because it can
be used to eliminate the GSG (Global Serialization Graph)
test required by OTM. To take advantage of cascadeless
LDBSs, we introduce a refinement of OTM, called the Cas-
cadeless OTM. Like OTM, the Cascadeless OTM ensures
global serializability by preventing the subtransactions of
each multidatabase transaction from being serialized in different
ways at their corresponding LDBSs. Unlike OTM,
Cascadeless OTM takes advantage of the fact that if all
LDBSs permit only cascadeless schedules then global transactions
cannot take tickets and commit, unless their tickets
have the same relative order at all LDBSs.
Cascadeless OTM processes each global transaction G
as follows. Initially, the MDBS sets a timeout for G and
submits its subtransactions to the appropriate LDBSs. All
subtransactions are allowed to interleave under the control
of the LDBSs until they enter their prepared-to-commit
state. If all subtransactions of G take their tickets and report
their prepared-to-commit state, the Cascadeless OTM
allows G to commit. Otherwise, the MDBS aborts and
restarts any global transaction that has a subtransaction
that did not report its prepared-to-commit state before the
timeout of G expired. Local optimizations mentioned in
Section IV-C can be also applied on Cascadeless OTM.
Theorem 4: Cascadeless OTM guarantees global serializability
if the following conditions are satisfied:
1. the concurrency control mechanisms of the LDBSs ensure
local serializability and cascadelessness;
2. each multidatabase transaction has at most one sub-transaction
at each LDBS; and
3. each subtransaction has a visible prepared-to-commit
state.
Proof: We have already shown that the order in which
the subtransactions take their tickets reflects their relative
serialization order (Theorem 1). To prove that global serializability
is enforced without a GSG test, consider any
pair of global transactions G i and G j in a set G having
subtransactions in multiple LDBSs, including LDBS k and
LDBS l . Without loss of generality assume that at LDBS k
the subtransaction of G i takes its ticket before the sub-transaction
of G j , but at LDBS l the subtransaction of G j
takes its ticket before the subtransaction of G i . Since the
LDBSs are cascadeless, G j cannot write its ticket value at
LDBS k before G i commits, and G i cannot write its ticket at
LDBS l before G j commits. Therefore, there are two possible
outcomes for the execution of a global transaction under
Cascadeless OTM. Either the tickets of its subtransactions
have the same relative order at all LDBSs and global serializability
is ensured, or it has at least one subtransaction
that cannot commit. 2
Like the OTM, the Cascadeless OTM is not free of global
restarts. A Cascadeless CTM which is similar to CTM can
be used to deal with global restarts.
While local cascadelessness can be used to simplify the
global optimistic scheduler (i.e., there is no need to maintain
a GSG), strictness offers no additional advantages over
cascadelessness. In the following section we show that if
the schedulers of local systems meet additional conditions,
ticket conflicts can be totally eliminated.
VII. Implicit Tickets and the Implicit Ticket
Method (ITM)
We have argued that the basic problem in multidatabase
concurrency control is that the local serialization orders do
not necessarily reflect the order in which global transactions
are submitted, perform their operations or commit in
the LDBSs. To deal with this problem we have introduced
the concept of the ticket and proposed several methods that
must take tickets to ensure global serializability. However,
tickets introduce additional conflicts between global transactions
that may not conflict otherwise. Thus, it is desirable
to eliminate tickets whenever possible. In the following
sections we identify classes of schedules that include events
that can be used to determine the local serialization order
of transactions without forcing conflicts between global
transactions. We refer to such events as implicit tickets.
A. Determining the local serialization order
In Section IV-C, we have discussed how to eliminate
tickets in LDBSs that use TO for local concurrency con-
trol. This approach can be applied to all LDBSs that allow
transactions to commit only if their respective local serialization orders reflect their local submission orders. That
is, in the class of LDBSs that allow schedules in which the
transaction submission order determines their serialization
order, the order in which transactions issue their begin operations
constitutes their implicit tickets.
Another important class of local systems in which global
transactions do not have to take tickets includes LDBSs
that allow only schedules in which the local commitment
order of transactions determines their local serialization or-
der, i.e., the order in which transactions perform their commit operations
constitutes their implicit tickets. In [6], [7], we
have defined the class of schedules that transactions have
analogous execution (commitment) and serialization order
as follows:
Definition 1: Let S be a serializable schedule. We say
that the transactions in S have analogous execution and
serialization order if for any pair of transactions T i and
T j such that T i is committed before T j in S, T i is also
serialized before T j in S.
The property of analogous execution and serialization orders
applies to both view serializable and conflict serializable
schedules and is difficult to enforce directly. The class
of schedules that are conflict serializable and have analogous
executions and serialization order is characterized in
terms of strong recoverability [7] defined below.
Definition 2: Let S be a schedule. We say that S is
strongly recoverable if, for any pair of committed transactions T i and T j , whenever an operation op T i of T i precedes an operation op T j of T j in S and these operations conflict (at least one of these operations is a write), then commit T i precedes commit T j in S.
A transaction management mechanism is strongly recoverable
if it produces only strongly recoverable schedules.
In [7], we have shown that if a transaction management
mechanism is strongly recoverable, it produces conflict serializable
schedules in which transaction execution and serialization
orders are analogous. The significance of strong
recoverability in simplifying the enforcement of global serializability
in multidatabase systems has been recognized in
the literature. For example, the notion of commitment ordering
proposed in [39], [40] as a solution to enforce global
serializability without taking tickets is identical to strong
recoverability.
Although strongly recoverable schedulers can be realized
in real DBMSs, most real transaction management mechanisms
produce schedules that satisfy stronger properties
that are easier to enforce.
The notion of rigorous schedules [6], [7] defined next effectively
eliminates conflicts between uncommitted trans-
actions. Thus, it provides an even simpler way to ensure
that transaction execution and serialization orders are analogous.
Definition 3: A schedule is rigorous if the following two
conditions hold: (i) it is strict, and (ii) no data object is
written until the transactions that previously read it commit
or abort.
We say that a transaction management mechanism is
rigorous if it produces rigorous schedules, and we use the term rigorous LDBS to refer to a LDBS that uses a rigorous
scheduler. In [6] we have shown that if a transaction
management mechanism ensures rigorousness, it produces
serializable schedules in which transaction execution
and serialization orders are analogous. In [7] we proved
that the set of rigorous schedules is a subset of strongly recoverable
schedules.
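Definition 2 is easy to check mechanically on a flat schedule. The sketch below does so; the event encoding is our own and is not taken from the paper.

```python
def is_strongly_recoverable(schedule):
    """Check Definition 2 on a flat schedule.

    schedule -- list of events, either ("r"|"w", transaction, data_item)
                or ("commit", transaction); aborted transactions are ignored.
    """
    commit_at = {e[1]: i for i, e in enumerate(schedule) if e[0] == "commit"}
    for i, e in enumerate(schedule):
        if e[0] not in ("r", "w"):
            continue
        a_i, t_i, x_i = e
        if t_i not in commit_at:
            continue                          # only committed transactions are constrained
        for f in schedule[i + 1:]:
            if f[0] not in ("r", "w"):
                continue
            a_j, t_j, x_j = f
            if t_j == t_i or t_j not in commit_at:
                continue
            if x_i == x_j and "w" in (a_i, a_j):          # conflicting operations
                if commit_at[t_i] >= commit_at[t_j]:      # commits out of conflict order
                    return False
    return True

# Tiny example: T2 reads a after T1 wrote it, but commits first -> not strongly recoverable.
s = [("w", "T1", "a"), ("r", "T2", "a"), ("commit", "T2"), ("commit", "T1")]
print(is_strongly_recoverable(s))   # False
```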
The class of rigorous transaction management mechanisms
includes several common conservative schedulers [6],
[7], such as conservative TO [17] and rigorous two-phase
locking (2PL) (i.e., the variant of strict 2PL under which
a transaction must hold its read and write locks until it
terminates). Rigorous variations of TO and optimistic concurrency
control [32] protocols have been introduced in [6], [7].
However, while many conservative schedulers are rigor-
ous, enforcing rigorousness is too restrictive for optimistic
schedulers, i.e., rigorous optimistic schedulers behave like
conservative schedulers.
The following class of schedules permits optimistic synchronization
of operations.
Definition 4: A schedule is semi-rigorous if its committed
projection is rigorous.
Semi-rigorousness permits validation of transactions after
they have finished all their operations. Therefore, it
simplifies the design of optimistic schedulers. Most real
optimistic schedulers, including the schedulers described
in [32], allow only semi-rigorous schedules. While semi-
rigorousness simplifies optimistic concurrency control, it
does not ensure recoverability as it is defined in [17].
Fig. 6. Relationship among analogous execution and serialization orders, strong recoverability, semi-rigorousness and rigorousness (shown together with view serializability and conflict serializability).
Therefore, most optimistic schedulers ensure cascadelessness or
strictness in addition to semi-rigorousness. For example,
schedulers that use the optimistic protocol with serial validation
permit schedules that, in addition to being semi-rigorous, are also strict.
The set of semi-rigorous schedules includes all rigorous
schedules and is a subset of the set of strongly recoverable
schedules. The relationship among analogous execution
and serialization orders, strong recoverability, semi-
rigorousness, and rigorousness is depicted in Figure 6.
Finally, note that strictness is not sufficient to ensure
that the transaction execution order is analogous to the
transaction serialization order. For example, if we assume
that transactions commit immediately after they complete
their last operation, the schedule at LDBS 2 in Figure 2 is
strict, but the execution order of the transactions is
not analogous to their serialization order.
B. Enforcing global serializability
To take advantage of LDBSs that allow only analogous
execution and serialization orders, we introduce the Implicit
Ticket Method (ITM). Like OTM and CTM, ITM
ensures global serializability by preventing the subtransactions
of each multidatabase transaction from being serialized
in different ways at their corresponding LDBSs. Unlike
OTM and CTM, ITM does not need to maintain tickets
and the subtransactions of global transactions do not
need to take and increment tickets explicitly. In LDBSs
that allow only analogous execution and serialization or-
ders, the implicit ticket of each subtransaction executed
there is determined by its commitment order. That is, the
order in which we commit subtransactions at each LDBS
determines the relative values of their implicit tickets. To
achieve global serializability, ITM controls the commitment
order and thus the serialization order of multidatabase sub-transactions
as follows.
Assuming rigorous LDBSs, ITM guarantees that for any
pair of multidatabase transactions G i and G j , either the
subtransactions of G i are committed before the subtransactions
of G j , or the subtransactions of G j are committed
prior to the subtransactions of G i . This can be easily enforced
by a distributed agreement protocol such as the 2PC
protocol.
ITM processes a set G of global transactions as follows.
Initially, the ITM sets a timeout for each global trans-action
in G, and then submits its subtransactions to the
corresponding LDBSs. The subtransactions of all global
transactions are allowed to interleave under the control
of the LDBSs until they enter their prepared-to-commit
state. Without loss of generality, suppose that the sub-transactions
of global transactions G 1 , G 2 , ..., G k in G become prepared to commit before their timeouts expire.
Furthermore, suppose that a subtransaction of G 2 enters
its prepared-to-commit state after all subtransactions of G 1
become prepared to commit, a subtransaction of G 3 becomes
prepared to commit after all subtransactions of G 2
enter their prepared-to-commit state, and a subtransaction
of G k enters its prepared-to-commit state after all
subtransactions of G k-1 become prepared to commit. The
ITM allows the subtransactions of such global transactions
to commit in the following order: the subtransactions of G 1 before the subtransactions of G 2 , the subtransactions of G 2 before the subtransactions of G 3 , ..., and the subtransactions of G k-1 before the subtransactions of G k . Global
transactions that have one or more subtransactions that
do not report their prepared-to-commit state before their
timeout expires are aborted and restarted by the MDBS.
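A minimal sketch of this commitment discipline follows; commit_everywhere is a hypothetical callback that runs the commit phase of 2PC for one global transaction and returns only when every LDBS has committed its subtransaction.

```python
from collections import deque

def itm_commit_loop(prepared, commit_everywhere):
    """Commit global transactions strictly in the order they became globally prepared.

    prepared -- FIFO (e.g., collections.deque) of global transactions, ordered by the
                moment their last subtransaction reported the prepared-to-commit state
    """
    while prepared:
        G = prepared.popleft()
        # Under LDBSs with analogous execution and serialization orders, committing G
        # everywhere before the next transaction fixes G's implicit ticket at every site.
        commit_everywhere(G)

itm_commit_loop(deque(["G1", "G2", "G3"]),
                lambda G: print("committing", G, "at every LDBS"))
```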
Theorem 5: ITM ensures global serializability if the following
conditions hold:
1. the concurrency control mechanisms of the LDBSs ensure
analogous executions and serialization orders;
2. each multidatabase transaction has at most one sub-transaction
at each LDBS; and
3. each subtransaction has a visible prepared-to-commit
state.
Proof: Without loss of generality, suppose that global
transactions in a set G enter their prepared-to-commit states in the following order: G 1 , G 2 , ..., G k . Under the control of ITM, the subtransactions of G 1 commit before the subtransactions of G 2 , the subtransactions of G 2 commit before the subtransactions of G 3 , ..., and the subtransactions of G k-1 commit before the subtransactions of G k . Since ITM
ensures that the relative order in which the subtransactions
of each global transaction commit is the same in all participating
LDBSs and the LDBSs ensure that the subtransaction
commitment order reflects their relative serialization
order, ITM guarantees global serializability. 2
VIII. Mixed Methods
In a multidatabase environment where rigorous, cascade-
less, and non-cascadeless LDBSs participate, mixed ticket
methods that combine two or more of the methods de-
scribed in the previous sections of this paper can be used
to ensure global serializability. In this section we describe a
mixed ticket method that combines OTM, CTM, and their
cascadeless variations with ITM.
A mixed method processes a multidatabase transaction
G as follows:
1. Sets a timeout for G and submits its subtransactions
to the corresponding LDBSs.
2. Subtransactions that are controlled by ITM, OTM,
and the cascadeless variation of OTM are allowed to
interleave until they enter their prepared-to-commit
state. Subtransactions that are controlled by CTM
and the cascadeless CTM are allowed to proceed until
they enter their prepared-to-Take-A-Ticket state.
3. If all subtransactions of G under the control of OTM,
take tickets and report their prepared-to-commit state,
global validation is applied to ensure that these sub-transactions
are serialized the same way. If G does not
pass global validation, it is aborted.
4. Subtransactions under the control of CTM and the
cascadeless CTM are allowed to take their tickets according
to the serialization order of G determined earlier
by the validation process. To ensure this, the
mixed method delays the Take-A-Ticket operations of
the subtransactions of G that execute under the control
of CTM and the cascadeless CTM until there is
no uncommitted global transaction G 0 such that:
• G 0 has subtransactions that have not taken their tickets, and
• there is at least one LDBS in which the subtransaction of G 0 has taken its ticket before the subtransaction of G.
If there is no global transaction that satisfies these
conditions, the mixed method allows the subtransactions
of G to take their tickets under the control of
CTM.
5. If all subtransactions of G enter their prepared-to-
commit states, the mixed method commits G. Other
global transactions are allowed to commit either before
the first subtransaction of G commits, or after
the commitment of all subtransactions of G.
6. If the timeout expires in any of these steps, the
MDBS aborts and restarts G.
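The sketch below shows one way a mixed scheduler might choose, per LDBS, which of the above methods to apply; the capability labels are our own shorthand and not terminology from the paper.

```python
def method_for_ldbs(properties):
    """Pick the ticket method a mixed scheduler could use at one LDBS.

    properties -- set of labels describing what the LDBS guarantees; the labels
    "analogous_orders", "cascadeless" and "visible_prepare_to_ticket" are our own shorthand.
    """
    if "analogous_orders" in properties:
        return "ITM"                              # implicit tickets: no ticket object needed
    if "visible_prepare_to_ticket" in properties:
        return "Cascadeless CTM" if "cascadeless" in properties else "CTM"
    return "Cascadeless OTM" if "cascadeless" in properties else "OTM"

print(method_for_ldbs({"cascadeless"}))                       # Cascadeless OTM
print(method_for_ldbs({"analogous_orders", "cascadeless"}))   # ITM
```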
Simpler mixed methods, e.g., combining only optimistic
or only conservative ticket methods, can be developed similarly.
IX. Summary and Conclusion
Enforcing the serializability of global transactions in a
MDBS environment is much harder than in distributed
database systems. The additional difficulties in this environment
are caused by the autonomy and the heterogeneity
of the participating LDBSs.
To enforce global serializability we introduced OTM, an
optimistic multidatabase transaction management mechanism
that permits the commitment of multidatabase transactions
only if their relative serialization order is the same
in all participating LDBSs. OTM requires LDBSs to guarantee
only local serializability. The basic idea in OTM is
to create direct conflicts between multidatabase transactions
at each LDBS that allow us to determine the relative
serialization order of their subtransactions.
We have also introduced a Conservative Ticket Method,
CTM. Under CTM, global transactions must take tickets,
but CTM does not require global serialization testing and
eliminates global restarts due to failed validation. Refinements
of OTM and CTM for multidatabase environments
where all participating LDBSs are cascadeless, may use
simpler global schedulers. Unless the subtransactions of
multidatabase transactions take their tickets at approximately
the same time (e.g., the subtransactions of each
global transaction take their tickets at the end of their execution
and their duration is approximately the same), conservative
ticket methods may allow a higher throughput
than the corresponding optimistic ticket methods.
To take advantage of additional properties of LDBSs
we proposed the Implicit Ticket Method. ITM eliminates
ticket conflicts, but works only if the participating LDBSs
disallow schedules in which transaction execution and serialization
orders are not analogous. ITM uses the local
commitment order of each subtransaction to determine its
implicit ticket value. It achieves global serializability by
controlling the commitment (execution) order and thus the
serialization order of multidatabase transactions. Compared
to the ADDS approach and Altruistic Locking,
ITM can process any number of multidatabase transactions
concurrently, even if they have concurrent and conflicting
subtransactions at multiple sites. All methods proposed in
this paper do not violate the autonomy of the LDBSs and
can be combined in a single comprehensive mechanism.
Analogous transaction execution and serialization orders
is a very useful property in a MDBS. For example, it can be
shown that the ADDS scheme [10], [18], Altruistic Locking
[13], and 2PC Agent Method [15] produce globally serializable
schedules if the participating LDBSs disallow schedules
in which transaction execution and serialization orders
are not analogous. Similarly, quasi-serializable schedules
[23] become serializable if all LDBSs permit only analogous
transaction execution and serialization orders. On the
other hand, if the local systems allow schedules in which
transaction execution and serialization orders are not anal-
ogous, these methods may lead to schedules that are not
globally serializable.
Another important finding is that local strictness in a
multidatabase environment offers no advantage over cas-
cadelessness in simplifying the enforcement of global serializability.
Further research and prototyping are currently performed
at GTE Laboratories, Bellcore, and the University
of Houston. These activities include performance evaluation
of the proposed ticket methods, and benchmarking of
a prototype implementation. Current research conducted
at GTE Laboratories, includes adaptation of ticket methods
to provide consistency in a Distributed Object Management
System (DOMS) [8] in which global transactions
access homogeneous objects that encapsulate autonomous
concurrency control mechanisms, and/or attached objects
that represent data and functionality of autonomous and
heterogeneous LDBSs.
The Take-A-Ticket operation can be viewed as a function
that returns the serialization order of a transaction in
a LDBS. If such a function is provided by the interfaces
of future DBMSs, multidatabase transaction management
methods that use tickets to enforce global serializability can
substitute the ticket operations by calls to DBMS-provided
serialization order functions and continue to enforce global
serializability without any modification.
Acknowledgments
The idea to use tickets in multidatabase transaction
management had emerged during a discussion with Gomer
Thomas. We thank Yuri Breitbart for pointing out an error
in one of our definitions in an earlier version of this paper.
Piotr Krychniak has implemented some of the ticket methods
in real DBMSs and contributed to the discussion of
implementation issues in Appendix I. We also thank Mark
Hornick and Ole Anfindsen for their useful comments.
References
"From database systems to multidatabase systems: Why and how"
"Federated databases: Architectures and integration"
Transaction Concepts in Autonomous Database Environments
"Node autonomy in distributed systems"
"Effects of local autonomy on heterogeneous distributed database systems"
"Rigorous scheduling in multidatabase systems"
"On rigorous transaction scheduling"
"Distributed object management"
"Concurrency control issues in distributed heterogeneous database management systems"
"Multidatabase update is- sues"
"Supporting updates in heterogeneous distributed database systems"
"Concurrency control issues in heterogeneous distributed database management systems"
"Concurrency control and recovery for global procedures in federated database systems"
"Effects of autonomy on maintaining global serializability in heterogeneous distributed database systems"
"2PC Agent method: Achieving serializability in presence of failures in a heterogeneous multi- database"
"The notions of consistency and predicate locks in a database system"
Concurrency Control and Recovery in Database Systems
"An update mechanism for multidatabase systems"
"Superdatabases for composition of heterogeneous databases"
"Mixed concurrency control: Dealing with heterogeneity in distributed database systems"
"The transaction concept: Virtues and limitations"
"SAGAS"
"QSR: A correctness criterion for global concurrency control in InterBase"
"The S- Transaction model"
"Ex- tending the transaction model to capture more meaning"
Operating Systems: An Advanced Course
A Guide to The SQL Standard
Transaction Management in Multidatabase Systems
"Multidatabase recoverability and recov- ery"
"System level concurrency control for distributed database systems"
"On optimistic methods for concurrency control"
"Prepare and commit certification for decentralized transaction management in rigorous heterogeneous multidatabases"
"Transaction management in multidatabase systems"
"Reliable transaction management in a multidatabase system"
"A performance analysis of an optimistic and a basic timestamp-ordering concurrency control algorithms for centralized database systems"
"The datacycle(tm) archi- tecture"
"A de-centralized deadlock-free concurrency control method for mul- GEORGAKOPOULOS, RUSINKIEWICZ, SHETH: ENFORCING THE SERIALIZABILITY OF MULTIDATABASE TRANSACTIONS 15 tidatabase transactions"
"Extended commitment ordering, or guaranteeing global serializability by applying commitment order selectively to global transactions"
"The commitment order coordinator (coco) of a resource manager, or architecture for distributed commitment ordering based concurrency control"
Keywords: global serializability; local cascadelessness; forced local conflicts; distributed databases; global scheduler; multidatabase transactions; multidatabase transaction manager; local database system; schedules; local strictness; serialization orders; indirect conflicts; data manipulation operations; analogous execution; transaction processing
627595 | A System for Approximate Tree Matching. | Ordered, labeled trees are trees in which each node has a label and the left-to-right order of its children (if it has any) is fixed. Such trees have many applications in vision, pattern recognition, molecular biology, programming compilation, and natural language processing. Many of the applications involve comparing trees or retrieving/extracting information from a repository of trees. Examples include classification of unknown patterns, analysis of newly sequenced RNA structures, semantic taxonomy for dictionary definitions, generation of interpreters for nonprocedural programming languages, and automatic error recovery and correction for programming languages. Previous systems use exact matching (or generalized regular expression matching) for tree comparison. This paper presents a system, called approximate-tree-by-example (ATBE), which allows inexact matching of trees. The ATBE system interacts with the user through a simple but powerful query language; graphical devices are provided to facilitate inputing the queries. The paper describes the architecture of ATBE, illustrates its use and describes some aspects of ATBE implementation. We also discuss the underlying algorithms and provide some sample applications. | Introduction
Ordered, labeled trees are trees in which each node has a label and the left-to-right order of its children (if it has
any) is fixed. 1 Such trees have many applications in vision, molecular biology, programming compilation and
natural language processing, including the representation of images [29], patterns [7], [13], [20], intermediate
code [1], [14], grammar parses [10], [35], dictionary definitions [5], [23] and secondary structures of RNA [30].
They are frequently used in other disciplines as well.
Many of the above applications involve comparing trees or retrieving/extracting information from repositories
of trees. For example,
ffl In molecular biology, researchers collect vast amounts of RNA structures (trees) whose features have
been analyzed. To gain information about a newly sequenced RNA, they compare the RNA's structure
against those in the database, searching for ones with very similar topologies. From such topological
similarities, it is often possible to infer similarities in the functions of the related RNAs [30], [32], [34].
ffl In natural language processing, computational linguists store dictionary definitions in a lexical database.
The definitions are represented syntactically as trees. Because the syntactic head of a definition is often
the genus term (superordinate) of the word being defined [9], linguists extract semantic information from
syntactic analysis of these definitions, thereby constructing semantic taxonomies [10], [22].
ffl In pattern recognition, a commonly used technique to classify an unknown pattern (tree) is to compare
it against those in the data sets, and to assign it to the category to which the majority of its closest
patterns belong [12].
ffl In programming languages, one effective way used to select an error recovery or correction has been
to compare the parse trees associated with corrected strings and their replacements, as well as the
corresponding strings [35].
Whereas ordered, labeled trees have been widely used in different applications, very few systems have been
built to support their comparison, and the information retrieval/extraction from repositories of such trees. As
far as we know, there are only two systems, both developed at IBM, that support such operations: APT [10]
and LQL [5]. APT is designed to extract linguistic information from a corpus of parse trees. It allows the user
to mark a target node in a parse tree, and then automatically constructs a partially instantiated PROLOG
term which serves as a pattern for finding nodes that occupy a comparable structural location in other parse
trees. LQL, on the other hand, provides users with a template tree structure, which represents a superset
of all possible structures that individual entries in a lexical database have. Through the template, users can
query, maintain, and extract information from the database. However, since both of the systems employ
unification techniques for tree matching, their capabilities are limited to exact matches. They cannot be used,
for example, to extract information from trees that are noisily represented, possibly caused by mistyping or
1 Throughout the paper, we shall refer to ordered trees simply as trees when no ambiguity occurs.
misspelling words (terminals) in the trees. 2
This paper presents an inexact tree matching system, called Approximate-Tree-By-Example (ATBE), developed
at New York University. ATBE is designed to support constructing, comparing and querying sets of
trees. Given a pattern (tree), the system allows users to retrieve (approximately) matched trees to the pattern
from a database, or extract information from trees pertinent to the pattern. 3
ATBE has many salient features.
1. It can support a wide variety of applications: ATBE provides a query language for tree comparison based
on a relational database language [33]. The system is customizable. Users can tailor the system to meet
the needs of their applications by inserting application-specific trees and application-specific distance
metrics and operations.
2. It manipulates trees in a uniform manner: There are numerous ways to represent or describe a tree [27],
[43]. However, once a tree is input, ATBE manipulates, stores, and displays them in standardized forms.
3. It is user-friendly: ATBE provides substantial graphical devices to facilitate users to input queries. It
gives users flexibility to edit trees at any time instead of using templates. The system has a multiple
window display that makes effective use of the screen, and utilizes on-screen and pop-up menus as
alternatives for typing most commands.
4. It is machine-independent: ATBE is implemented in C and X-windows [44]. The X-based implementation
permits the system to be used on a variety of workstations.
The paper gives an overview of the ATBE system. In Section 2, we introduce terminology and background
for tree comparison. Section 3 presents the query language. Section 4 describes the system's architecture.
Section 5 discusses some underlying algorithms. In two companion papers, [41] and [42], we describe in detail
the graphical interface and the use of the system.
Background
2.1 Edit Operations
The trees we are concerned with are ordered, labeled ones. Each node in a tree has a label and possibly
some additional information. (This information is referred to as node contents.) Node contents could be size
properties, like those in RNA secondary structures [30], or lexical features for grammar parses [10]. Figure 1
illustrates a grammar parse representing the sentence "The boy reads the book."
2 The importance of dealing with this type of inexact matching in actual applications has been widely addressed in the literature.
See, e.g., [11], [16], [31].
3 We include the term approximate in naming the system for two reasons. First, it states that the system can perform inexact
tree matching, i.e., it allows certain inaccuracy or dissimilarity to exist when comparing trees. Second, the approximate string
matching operation, which allows a prefix of strings to be removed when comparing strings, is important in many applications
[18], [37]. ATBE has analogous operations, allowing certain subtrees to be removed when comparing trees. The term by-example
refers to the way users query the database, which will be described in Section 3.
Fig. 1. Parse tree representing the sentence "The boy reads the book" [10].
Many algorithms have been developed for exact tree comparison [14], [17]. Our system is based on the inexact
algorithms presented in [45], [46], [47]. The algorithms are a generalization of those used for determining
the editing distance between sequences [4]. There are three types of edit operations: relabel, delete and insert
a node. We represent these operations as u → v, where each of u and v is either a node or the null node (Λ).
We call u → v a relabeling operation if u ≠ Λ and v ≠ Λ; a delete operation if u ≠ Λ and v = Λ; and an insert
operation if u = Λ and v ≠ Λ. Let T_2 be the tree that results from the application of an edit operation u → v
to tree T_1; this is written T_1 → T_2 via u → v. Figure 2 illustrates the edit operations.
Let S = s_1, s_2, ..., s_k be a sequence of edit operations. S transforms tree T to tree T' if there is a sequence
of trees T = T_0, T_1, ..., T_k = T' such that T_i results from applying s_i to T_{i-1}, for 1 ≤ i ≤ k.
Our definition of edit operations is really a shorthand for the specification. Here is the specification in full
detail. Consider a single edit operation, e.g., one that transforms T_{i-1} to T_i. If it is a relabeling operation, we
specify the node to be relabeled in T_{i-1}. The same holds for a delete operation. An insert operation is a little
more complicated: we must specify the parent p of the node n to be inserted and which consecutive sequence
of siblings among the children of p will be the children of n. If that consecutive sequence is empty, then we
need to specify the position of n among the children of p. However, we will continue to use our shorthand
because these other specifications will be clear from the mapping structure defined below.
Let γ be a cost function that assigns to each edit operation u → v a nonnegative real number γ(u → v).
We constrain γ to be a distance metric. That is, it satisfies the following three properties:
(i) γ(u → v) ≥ 0 and γ(u → u) = 0 (non-negative definiteness);
(ii) γ(u → v) = γ(v → u) (symmetry);
(iii) γ(u → w) ≤ γ(u → v) + γ(v → w) (triangle inequality).
We extend γ to a sequence of edit operations S = s_1, ..., s_k by letting γ(S) = γ(s_1) + ... + γ(s_k). The editing
distance, or simply the distance, from tree T to tree T', denoted dist(T, T'), is defined to be the minimum
cost of all sequences of edit operations which transform T to T', i.e.,
dist(T, T') = min{γ(S) | S is a sequence of edit operations transforming T to T'}.
The definition of γ makes dist a distance metric as well.
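For concreteness, here is a hedged sketch (not from the paper) of one admissible cost function γ — the common unit-cost choice — and of its extension to a sequence of edit operations; the null node Λ is represented by None.

```python
NULL = None  # stands for the null node Lambda

def gamma(u, v):
    """Unit-cost metric on node labels: 0 if u == v, else 1.
    This satisfies non-negativity, symmetry and the triangle inequality."""
    return 0 if u == v else 1

def cost_of_sequence(ops):
    """gamma(S): total cost of a sequence of edit operations, each given as (u, v)."""
    return sum(gamma(u, v) for u, v in ops)

# deleting a node labeled 'd' and then inserting a node labeled 'd' costs 2
print(cost_of_sequence([("d", NULL), (NULL, "d")]))  # 2
```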
(i) Relabeling to change one node label (b) to another (c).
(ii) Deletion of a node. (All children of the deleted node b become children of the parent r.)
(iii) Insertion of a node. (A consecutive sequence of siblings among the children of r (here, a, e and
f) become the children of c.)
Fig. 2. Examples illustrating the edit operations.
2.2 Mappings
The edit operations correspond to a mapping that is a graphical specification of which edit operations apply
to each node in the two trees. The mapping in Figure 3 shows a way to transform T to T 0 . It corresponds to
the sequence (delete (node with label d), insert (node with label d)).
Let T[i] represent the ith node of tree T according to some ordering (e.g., preorder). Formally, a mapping
from T to T' is a triple (M, T, T') (or simply M if there is no confusion), where M is any set of pairs of
integers (i, j) satisfying the following conditions:
1. 1 ≤ i ≤ |T| and 1 ≤ j ≤ |T'|, where |T| represents the number of nodes in the indicated tree;
2. For any pair of (i_1, j_1) and (i_2, j_2) in M:
(a) i_1 = i_2 if and only if j_1 = j_2 (one-to-one);
(b) T[i_1] is to the left of T[i_2] if and only if T'[j_1] is to the left of T'[j_2] (sibling order preserved);
(c) T[i_1] is an ancestor of T[i_2] if and only if T'[j_1] is an ancestor of T'[j_2] (ancestor order preserved).
Fig. 3. A mapping from T to T'. A dotted line from a node u in T to a node v in T' indicates
that u should be changed to v if u ≠ v, or that u remains unchanged if u = v. The nodes of T not
touched by a dotted line are to be deleted and the nodes of T' not touched are to be inserted.
Thus, the mapping in Figure 3 is {(1, 1), (2, 2), (4, 3), (5, 5), (6, 6)}.
Let M be a mapping from T to T'. Let I and J be the sets of nodes in T and T', respectively, not touched
by any dotted line in M. Then we can define the cost of M:
γ(M) = Σ_{(i,j)∈M} γ(T[i] → T'[j]) + Σ_{i∈I} γ(T[i] → Λ) + Σ_{j∈J} γ(Λ → T'[j]).
Given S, a sequence of edit operations from T to T', it can be shown that there exists a mapping M from
T to T' such that γ(M) ≤ γ(S); conversely, for any mapping M, there exists a sequence of edit operations S
such that γ(S) = γ(M).
Hence, we have
dist(T, T') = min{γ(M) | M is a mapping from T to T'}.
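A small sketch (illustrative only, using a hypothetical unit-cost γ as above) of how the cost γ(M) of a given mapping could be computed when the two trees are supplied as preorder lists of node labels:

```python
NULL = None  # the null node

def gamma(u, v):
    return 0 if u == v else 1  # unit costs

def mapping_cost(t1_labels, t2_labels, m):
    """gamma(M) for a mapping M given as a set of 1-based (i, j) pairs from T to T'."""
    mapped_i = {i for i, _ in m}
    mapped_j = {j for _, j in m}
    cost = sum(gamma(t1_labels[i - 1], t2_labels[j - 1]) for i, j in m)
    cost += sum(gamma(t1_labels[i - 1], NULL)            # deletions
                for i in range(1, len(t1_labels) + 1) if i not in mapped_i)
    cost += sum(gamma(NULL, t2_labels[j - 1])            # insertions
                for j in range(1, len(t2_labels) + 1) if j not in mapped_j)
    return cost

# hypothetical example: T = r(a b), T' = r(a c b), labels listed in preorder
print(mapping_cost(["r", "a", "b"], ["r", "a", "c", "b"], {(1, 1), (2, 2), (3, 4)}))  # 1
```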
2.3 Approximate Tree Matching Operations
In [37], Ukkonen discussed approximate string matching operations that allow prefixes of strings to be removed
when comparing strings. Myers and Miller [21] discussed similar operations for regular expressions. We extend
these operations to trees by considering prefixes as a collection of subtrees. The following two operations are
introduced:
• Cutting at node n from tree T means removing n and all its descendants (i.e., removing the subtree
rooted at n).
• Pruning at node n from tree T means removing only the descendants of n; n itself remains in T. (Thus,
a pruning never eliminates the entire tree.)
The operations are useful in locating portions of a tree that closely match a given pattern. Consider, for
example, the trees in Figure 4. T 1 exactly matches the subtree rooted at a in T 3 if we prune at node b from T 3
(or cut at node e). (This type of subtree matching corresponds to the one defined in Hoffmann and O'Donnell
[14].) Note that there may not exist applicable pruning operations for certain matchings yielded by cuttings.
For example, by cutting at node c and node e from T 3 , the resulting tree matches T 2 . However, no pruning
operation can be applied in this case to yield such a matching. In Section 3, we shall further discuss the use
of the two operations and their practical applications.
Fig. 4. Example trees.
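The two operations can be sketched on trees represented as nested (label, children) tuples; the helper names and the representation are assumptions of this sketch, and for simplicity nodes are identified by their labels:

```python
def cut(tree, target):
    """Cutting: remove every child subtree whose root is labeled `target`
    (the root of `tree` itself is not removed in this sketch)."""
    label, children = tree
    return (label, tuple(cut(c, target) for c in children if c[0] != target))

def prune(tree, target):
    """Pruning: remove only the descendants of nodes labeled `target`."""
    label, children = tree
    if label == target:
        return (label, ())
    return (label, tuple(prune(c, target) for c in children))

t = ("a", (("b", ()), ("c", (("d", ()), ("e", ())))))
print(prune(t, "c"))  # ('a', (('b', ()), ('c', ())))
print(cut(t, "c"))    # ('a', (('b', ()),))
```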
3 ATBE Queries
3.1 Query Specification
In ATBE, the user formulates a query by building a pattern tree on the screen, and providing an appropriate
statement. In building the tree, the user may draw it from scratch, may edit an existing tree in the database
(e.g., the existing tree may be a template), may edit a solution tree of another query, 4 or may key in the tree
in its linear form directly. The linear form of the tree is a fully parenthesized expression which is a preorder
enumeration of the tree (i.e., first the root then the subtrees, from left to right). Node contents (if any) are
enclosed in braces, and follow immediately their corresponding node labels.
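As an illustration only (the paper does not give a grammar for the linear form, so the exact syntax below is an assumption), a nested-tuple tree could be serialized to a preorder, parenthesized string as follows:

```python
def linear_form(tree):
    """Preorder, parenthesized linear form; node contents (if any) are written in
    braces immediately after the node label."""
    label, contents, children = tree
    s = label + ("{" + str(contents) + "}" if contents is not None else "")
    if children:
        s += "(" + " ".join(linear_form(c) for c in children) + ")"
    return s

parse = ("s", None, (("np", None, (("det", None, ()), ("n", None, ()))),
                     ("vp", None, (("v", None, ()),))))
print(linear_form(parse))  # s(np(det n) vp(v))
```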
A statement can be of type retrieve, insert or delete. The first one is for information retrieval and
extraction. The second and third are used for modifying the underlying database.
Figure 5 illustrates an ATBE query. The pattern represents an RNA secondary structure drawn from
[30]; it is displayed in the Approximate Tree By Example window. Node contents are normally not shown on
the screen (for saving space purposes), and can be seen through pop-up windows (e.g., the pop-up window
associated with the node labeled N indicates that the node's size is 2). The statement is entered in the
Statement window. Also shown in the figure is the linear form of the pattern, which was keyed in using the
text editor.
The query-by-example paradigm employed by ATBE allows rapid and incremental development of queries,
which can be easily refined to highlight certain structural properties of trees under investigation. Many systems
have used similar concepts in constructing queries [15], [25], [36], [49], [50]. The difference is that, whereas
most of these systems express operations in tabular skeletons, ATBE expresses operations in tree structures,
which represent the entries in the underlying database. In this sense, ATBE's queries are similar to those of
4 Using the result of a query as an operand of some other queries is often desired in developing query languages for advanced
information systems [2]. The motivation for having this property in ATBE is that, at times users may find that one solution tree
is promising and closely matching other trees in some file. In such a situation, he may edit the solution tree and then use it as a
new pattern to search for data trees in that file.
LQL [5].
Fig. 5. A query and the screen layout for ATBE; the pattern represents an RNA secondary structure
[30]; node contents (e.g., size properties) are displayed via pop-up windows; the string shown in the
window is the linear form of the pattern.
3.2 Query Description and Interpretation
Table 1 gives a complete BNF-like syntax of ATBE statements. 5
<ATBE-stmt> := retrieve <tree-type> <tree-var> from <file-name> [ into <file-name> ] where <bool-expr>
             | insert <tree-name> into <file-name>
             | delete <tree-name> from <file-name>
<bool-expr> := <bool-expr> and <bool-expr>
             | <bool-expr> or <bool-expr>
             | <term>
<dist-op> := dist | distwithcut | distwithprune
<tree-op> := size | height
<agg-op> := min | max
<expr> := <constant>
        | <agg-op> ( <dist-op> ( pa, <iter-var> ) where <iter-var> is <tree-type> of <file-name> )
Table 1. Syntax for the ATBE statements.
In general, ATBE's retrieve statement has the following construct
retrieve <tree-type> <tree-var> from <file-name> where <bool-expr>
The tree-type can be either tree or subtree. The from clause specifies which file users want to search.
The where clause imposes constraints on trees, specifying conditions a solution (sub)tree must satisfy. The
query is implemented by a search through the specified file in which each data (sub)tree belonging to the file
is selected and stored into the tree-variable. Each time that a new (sub)tree is stored in the variable, the
boolean expression is evaluated; if the expression is true, the (sub)tree becomes an answer. Each file is treated
as a set, and therefore the search through the file is unordered with no (sub)tree selected twice.
A boolean expression consists of terms connected with the logical connectives and (for intersection), or
(for union). Let pa refer to the pattern and t a (sub)tree in the file. A term has the form
or
where ' is a comparison operator (e.g., ?, ?=, =, ! =, !, !=), and expression evaluates to a constant.
There are two kinds of tree operators:
5 There are templates available to save the user some typing when inputing these statements.
• size: Computes the total number of nodes in the (sub)tree t.
• height: Computes the number of edges in a longest path from the root to a leaf of t.
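The two tree operators are straightforward on nested-tuple trees; a minimal sketch (names are illustrative):

```python
def size(tree):
    """Total number of nodes in the (sub)tree."""
    label, children = tree
    return 1 + sum(size(c) for c in children)

def height(tree):
    """Number of edges on a longest path from the root to a leaf."""
    label, children = tree
    if not children:
        return 0
    return 1 + max(height(c) for c in children)
```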
There are three kinds of distance operators:
• dist: Computes the distance between pa and t.
• distwithcut: Computes the minimum distance between pa and t, allowing zero or more cuttings at nodes
from t (cf. Section 2.3). (There is no cost associated with these cuttings.) Formally, let someroots(t)
represent a set of nodes in t where for any two nodes m, n ∈ someroots(t), neither is an ancestor of the
other. Let cut(t, someroots(t)) represent the resulting tree after we remove the subtrees rooted at nodes
in someroots(t). Then
distwithcut(pa, t) = min_{someroots(t)} { dist(pa, cut(t, someroots(t))) }.
• distwithprune: Computes the minimum distance between pa and t, allowing zero or more prunings at
nodes from t (cf. Section 2.3). As for distwithcut, there is no cost associated with the prunings.
The expression can be a constant, or an aggregate expression; the latter has the form
<agg-op> ( <dist-op> ( pa, <iter-var> ) where <iter-var> is <tree-type> of <file-name> )
The aggregate operator can be either min or max (see examples in Section 3.3). The expression is
evaluated by binding each (sub)tree in the specified file to the iteration-variable, and then computing the
distance between the pattern, which by convention is identified by pa, and the (sub)tree (with or without
cuttings or prunings). The minimum (or maximum) of these distance values is then returned as the result.
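A naive evaluation of this semantics could look as follows (an illustrative sketch, not ATBE's query processor; `dist` stands for any tree distance routine, for instance the one sketched under Section 5):

```python
def evaluate_retrieve(file_trees, predicate):
    """retrieve (sub)tree t from F where <bool-expr>: keep every tree satisfying it."""
    return [t for t in file_trees if predicate(t)]

def best_match(file_trees, pa, dist):
    """retrieve tree t from F where dist(pa, t) = min( dist(pa, u) where u is tree of F )"""
    if not file_trees:
        return []
    dists = [dist(pa, t) for t in file_trees]
    best = min(dists)
    return [t for t, d in zip(file_trees, dists) if d == best]
```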
3.2.1 Semantics of the Distance Operators
In addition to having constant nodes, namely, ones whose labels and contents are specified, a pattern may
contain the following marks:
• variables (_x, _y, etc.);
• bars;
• umbrellas.
Both the bars and umbrellas appear on edges of the pattern tree. (They are collectively referred to as
variable length don't care marks [47].) The variables appear on leaves and are preceded with an underscore.
These marks may appear in several places in a pattern.
A mark-substitution (instantiation) s on the pattern pa replaces the marks by nodes in the data tree t as
• Each variable is matched with (replaced by) a subtree in t. (Repeated variables (i.e., different occurrences
of the same variable) are matched with identical subtrees.)
• Each bar is viewed as a pseudo node in pa, which is matched with part of a path from the root to a leaf
of t.
• Each umbrella is also viewed as a pseudo node, which is matched with part of such a path and all the
subtrees emanating from the nodes of that path, except possibly at the lowest node of that path. At
the lowest node, the umbrella is matched with a set of leftmost subtrees and a set of rightmost subtrees.
(The mark is so named because of this substitution pattern.) Formally, let the lowest node be n and let
the children of n be c_1, ..., c_k in left-to-right order. Let i, j be such that 0 ≤ i ≤ j ≤ k + 1; the umbrella
is matched with the subtrees rooted at c_1, ..., c_{i-1} and c_j, ..., c_k, in addition to the node n, the ancestors along
a path starting at n, and the subtrees of those proper ancestors of n.
Let s(pa) be the resulting (mark-free) pattern tree. We require that any mapping from s(pa) to t map the
substituting nodes to themselves. (Thus, no cost is induced by mark substitutions.) The distance (distwith-
cut, distwithprune, respectively) between pa and t with respect to the substitution s, denoted dist(pa; t; s)
(distwithcut(pa, t, s), distwithprune(pa, t, s), respectively), is defined to be dist(s(pa), t) (distwithcut(s(pa), t),
distwithprune(s(pa), t), respectively). 6 The distance (distwithcut, distwithprune, respectively) between pa
and t is obtained from one of the best mark-substitutions, i.e.,
dist-op(pa, t) = min_{s ∈ S} { dist-op(pa, t, s) }
where S is the set of all possible mark-substitutions, and dist-op is one of the above distance operators.
Intuitively, a query may minimize over substitutions, minimize over cuttings or prunings, and minimize the
resulting number of edits.
Notice that the APT [10] and LQL systems [5] also provide similar operations, though their capabilities
are much more limited. They only support the following exact match query (the pattern pa can have variables
[5] or bars [10], but no umbrellas):
retrieve tree t from <file-name> where distwithcut(pa, t) = 0
In the next subsections, we illustrate the use of ATBE queries using examples drawn from various applications
6 Alternatively, dist(pa; t; s) could be defined as the minimum cost of all sequences of edit operations that transform the nodes
(excluding the marks) in pa to the nodes in t which are not involved in the substitution s.
Fig. 6. (i) Variable instantiation: the variables in pa are matched with the shaded subtrees in t.
(ii) Bar instantiation: the bar is matched with the nodes (black dots) on a path P . (iii) Umbrella
instantiation: the umbrella is matched with the nodes (black dots) on a path P and the shaded
subtrees. Notice that some (consecutive) children along with their descendents of the lowest node
of P (represented by the unshaded subtree in t) are excluded from the instantiation; they will be
mapped to the nodes underneath the umbrella in pa.
3.3 Example Queries - Information Retrieval
One of the major functions of ATBE is to support (tree) information retrieval. A most commonly used retrieval
operation in applications is perhaps to find trees closest to a given pattern. 7 Assuming the pattern is
as shown in Figure 5, the query might be expressed as follows:
retrieve tree t from F
where dist(pa, t) = min( dist(pa, u) where u is tree of F )
Trees obtained from the query are displayed one at a time on the screen. 8 The user is able to see the best
7 For example, in analyzing features of a newly sequenced RNA, there may not exist RNAs in the database that exactly match
the new RNA. Under this circumstance, researchers often attempt to get those that are most similar to the new one. This type
of query is also known as the best-match retrieval [32].
8 They are displayed either in vertical normal form (as shown in Figure 5) or in horizontal normal form (see Figure 7).
mapping that yields the distance. When a solution tree is large (e.g., contains hundreds of nodes), its edges
and nodes are shrunk proportionally, so that the entire tree can fit in the window; users may then lasso the
part of interest and zoom in to see more detail [42]. 9
Fig. 7. Horizontal normal form for the tree in Figure 5.
In situations where trees represent noisy information, users might wish to find data trees that are within
certain distance of the pattern (this type of retrieval is known as the good-match retrieval [39]). For example,
the query
retrieve tree t from F
where dist(pa, t) <= 7 and height(t) > 5
finds data trees that are within distance 7 of the pattern and whose height is greater than 5.
It is possible that some portion of a data tree is unimportant. In such a situation, the user may provide a
pattern containing umbrellas, as shown in Figure 8.
retrieve tree t from F
where dist(pa, t) = 0
Fig. 8. ATBE query for retrieving data trees portion of which are unimportant.
This query retrieves data trees consisting of nodes N, I, M and H. The data trees have a subtree rooted at
M. The shape of the subtree is unimportant, provided it contains B as one of its leaves.
In some applications, users may wish to retrieve portions of trees, rather than entire trees. Hoffmann and
O'Donnell [14], for instance, discussed how to apply the tree replacement techniques to produce interpreters for
9 When the tree is too large to be in the window, the user can scroll the window up, down, left, and right, to examine one
portion of the tree at a time.
nonprocedural programming languages such as LISP or LUCID. One important operation in this application
is to locate occurrences of a pattern in a subject tree. Figure 9 shows an ATBE query that finds subtrees t in
file F that exactly match a given pattern, allowing zero or more prunings at nodes from t. Thus, the query
is able to locate portions of a tree that match the pattern in the sense of Hoffmann
and O'Donnell. (This can be generalized to approximate the Hoffmann and O'Donnell style matching by
replacing distwithprune(pa, t) = 0 with distwithprune(pa, t) <= k.)
retrieve subtree t from F
where distwithprune(pa, t) = 0
Fig. 9. ATBE query for retrieving subtrees.
Solution subtrees obtained from the query are displayed one at a time; they may also be displayed on a
tree basis, namely, the entire tree is displayed with corresponding subtrees highlighted [42].
Like most other query languages [3], [33], ATBE also allows users to store solution (sub)trees in a file,
rather than only display them on the screen. For example,
retrieve tree t from F into G
where dist(pa, t) = max( dist(pa, u) where u is tree of F )
stores trees of file F that are most dissimilar to (worst matching) the pattern in file G.
3.4 Example Queries - Information Extraction
The previous section presents several examples for information retrieval. Another major function of ATBE is
to support information extraction from trees. Let us consider some examples drawn from natural language
processing.
Consider the tree shown in Figure 1. Suppose the user wishes to find all the nouns that can be the direct
object of the verb "reads" from a database of parsed text [10]. He would type the query as shown in Figure
10. This query retrieves data trees that exactly match the pattern, allowing zero or more cuttings at nodes
from t. The cut subtrees (nodes) represent don't-care parts, i.e., they specify that the given pattern should
match a data tree even if that data tree has some additional subtrees, which are thus considered irrelevant
with respect to this pattern. The query illustrates the use of variables. Nodes used to instantiate the variable
x represent objects of the verb "reads"; they are highlighted when displaying the mapping between trees [42].
retrieve tree t from F
where distwithcut(pa, t) = 0
Fig. 10. ATBE query for finding nouns that are the direct object of "reads".
The above query can be generalized by replacing distwithcut(pa, t) = 0 with distwithcut(pa, t) <=
k. We took our inspiration for these operations from the APT system [10], which handles only exact tree
matching (i.e., distwithcut(pa, t) must be zero). The extension to approximate tree matching can help in
many applications. For example, users may type verbs of the past tense in their pattern tree, even though the
matching data tree uses the present tense. Using distwithcut(pa, t) <= k, one could find the matching
data tree, whereas distwithcut(pa, t) = 0 (as in APT) would not.
At times, computational linguists might want to find the semantic properties of a noun, particularly as
determined by the predicates for which it may serve as an argument [26]. Consider, for example, the query
"what can be done to a book?" Here the user wishes to get the set of verbs for which "book" is the object
from the database. In ATBE, this query can be expressed as shown in Figure 11.
retrieve tree t from F
Fig. 11. ATBE query for finding verbs for which "book" is the object.
This query illustrates the use of bars. The bar specifies that a path may contain at certain points zero
or more unspecified, intermediate nodes. This mark is useful in locating verbs in more complicated sentences
such as "The boy wanted to read the book", or "The woman knew that the boy wanted to read the book" [10].
The approximate matching helps locate verbs in sentences where "book" is misspelled or appears in plural
form [42].
Thus, by attaching variables and bars in the pattern, users can extract information (nodes, subtrees) from
the database.
Updating Operations
Having described ATBE's retrieval/extraction operations, we now turn to its updating operations. ATBE
provides insert and delete operations to maintain a database. For example, if the user wishes to erase the
pattern (with name, say, foo) shown in Figure 5 from file F, he would type the statement
delete foo from F
Modifying a tree can be achieved by first retrieving it from the file, e.g.,
retrieve tree foo from F
editing it, erasing the original copy, and then storing the new copy by typing
insert foo into F
4 System Organization and Implementation
ATBE is implemented in C and X-windows. It currently runs on SPARC workstations under the Sun operating
system version 4.1.1. Figure 12 shows the organization of the system. The display manager displays trees,
their mapping information, and assists the user to form an ATBE query. The query processor parses the query
and performs query optimization. The tree comparator is responsible for computing distances between trees.
When the system is first started, the display manager is activated. It accepts a query, performs syntax
checking on the pattern and the input statement. If there is no error, the query is passed to the query
processor. When the processor is parsing the query, it recognizes files to be accessed, retrieves trees from
them, and invokes the tree comparator whenever necessary to perform tree comparison. The query processor
produces the output of the query. The control then returns to the display manager, which displays the answers
on the screen. These modules communicate by writing data into files; file names are passed as parameters.
4.1 Display Manager
The display manager consists of two components, one that provides a user interface for editing queries, and
one that displays solution trees. In editing a pattern, ATBE allows users to use the text editor (for keying
in the pattern), or alternatively a tree editor. The tree editor enables users to insert, delete, remove, copy
subtrees, modify node labels and contents, and attach bars and umbrellas. There is no restriction on the order
of these operations. The display manager also helps users format their output (in vertical or horizontal normal
forms), and shows the mapping information.
Fig. 12. ATBE system organization.
The display manager is menu-driven, with a number of available functions at each state. Each state
is associated with a set of commands (displayed via pop-up menus) which causes a state transition when
executed. When a command is entered, the display manager either updates the current window (if the
command is recognized), or prints an error message and goes back to the state from which the command was
entered (if the command is erroneous). After the user finishes constructing the query, he types a carriage
return and the display manager goes through a series of consistency checks on the query. If errors appear in
the query, the display manager shows error messages on the screen, and goes back to the initial state, waiting
for another query. (Typical errors may include inserting unmatched parentheses or braces when the pattern
is keyed in, or giving wrong information for trees being compared, as we will explain later.) If no errors are
found, the display manager transforms the pattern into its linear form (if the query was not keyed in), and
stores the query into a file. It then passes the file name as well as control to the query processor.
4.2 Query Processor
When the query processor gets control, it reads the query from the file specified by the passed file name. It
then parses the query, and accesses files where relevant data trees are stored. (The data trees are stored in their
linear forms; cf. Section 3.1.) Whenever encountering tree comparison operators such as dist, distwithcut,
distwithprune, the processor encodes corresponding trees into a file and invokes the tree comparator to execute
these operators. Since computing distances between trees is usually time-consuming, we have developed
algorithms to prevent the query processor from exhaustively searching data trees in a file (see Section 5).
In addition, the query processor is responsible for choosing appropriate heuristics from a pool of heuristics
tailored to different queries. After finding solution trees, the query processor stores them in a file; it then
passes the file name and control back to the display manager.
4.3 Customizing the ATBE System
The ATBE system is customizable. By providing node definitions and cost functions for trees, users can tailor
the system to meet the need for different applications. We briefly describe the procedure here. For further
details about the use of the system, the reader is referred to [41], [42].
Recall that each node in an ordered, labeled tree has a label and possibly some node contents. A simple
language (written in C) is provided to specify these node contents. As an example, consider the molecular
structure shown in Figure 5. Each node in the structure is associated with a label and an integer, which
represents the size property of the node. The node definition for this type of tree thus looks as follows:
Node L
f
int
Here, Node L is node's identification. size refers to the information the integer represents. Notice that the
node's label (of string type) is not specified in the above definition. This is so because we assume that every
node has a label and it is treated as an implicit field.
Users feed various node definitions as described above into the system. A set of I/O programs (which are
used to read trees with specified node formats) as well as cost function programs specific to the user-defined
node formats are generated. Users may then modify the cost function programs to meet their application's
requirements. 10 The resulting programs are then compiled and linked with other modules (i.e., those for tree
comparison, query optimization and graphical interface) to produce a custom system.
In using the generated system, users may input queries that refer to trees with different node formats. To
compute the distance between these trees, users must provide correct format information and the corresponding
cost functions for edit operations. (This is done by clicking appropriate pop-up menu items and filling in the
information in dialogue windows.) If the information does not match the trees being compared, the system
shows error messages and ignores the corresponding query.
Underlying Algorithms
Major algorithms used to implement ATBE can be classified into three categories: (1) those for computing
distances between trees without marks (i.e., variables, bars, or umbrellas), (2) those for computing distances
between trees with marks (and also instantiating them), and (3) those for query optimization.
The first set of algorithms was presented in [46]. Given two trees T1 and T2, the algorithms compute
their distance (with or without cuttings or prunings) in
O(|T1| × |T2| × min(depth(T1), leaves(T1)) × min(depth(T2), leaves(T2))) time,
using O(|T1| × |T2|) space. The computation also produces (as a by-
product) the best mapping that yields the distance, as well as distances between all subtrees of the two
trees.
10 For example, insert or delete a node of format Node L may cost 5, and relabeling cost 3. When users do not specify cost
functions, the system provides some default cost functions.
The second set of algorithms was given in [47]. The algorithms compute the distance between T 1 and T 2 ,
assuming one of them contains variables (bars, umbrellas). The algorithms also find the best mapping yielding
the distance, and locate appropriate subtrees (or nodes) used for instantiating the marks. The time and space
complexities of these algorithms are the same as those for the first set of algorithms.
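The O(|T1| × |T2| × ...) algorithms of [46], [47] are not reproduced here. As a point of reference only, the sketch below computes the plain editing distance of Section 2 with a much simpler (and asymptotically slower) memoized recurrence over ordered forests; unit costs and nested-tuple trees are assumptions of the sketch, and marks, cuttings and prunings are not handled.

```python
from functools import lru_cache

def gamma(u, v):
    # unit-cost metric; None stands for the null node
    return 0 if u == v else 1

@lru_cache(maxsize=None)
def forest_dist(f1, f2):
    """Editing distance between ordered forests given as tuples of (label, children) trees."""
    if not f1 and not f2:
        return 0
    if not f1:
        lab, kids = f2[-1]
        return forest_dist((), f2[:-1] + kids) + gamma(None, lab)   # insert
    if not f2:
        lab, kids = f1[-1]
        return forest_dist(f1[:-1] + kids, ()) + gamma(lab, None)   # delete
    (l1, k1), (l2, k2) = f1[-1], f2[-1]
    return min(
        forest_dist(f1[:-1] + k1, f2) + gamma(l1, None),            # delete rightmost root of f1
        forest_dist(f1, f2[:-1] + k2) + gamma(None, l2),            # insert rightmost root of f2
        forest_dist(k1, k2) + forest_dist(f1[:-1], f2[:-1]) + gamma(l1, l2),  # match the two roots
    )

def tree_dist(t1, t2):
    return forest_dist((t1,), (t2,))

t1 = ("f", (("d", (("a", ()), ("c", (("b", ()),)))), ("e", ())))
t2 = ("f", (("c", (("d", (("a", ()), ("b", ()))),)), ("e", ())))
print(tree_dist(t1, t2))  # 2 with unit costs
```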
The third set of algorithms essentially employs the triangle inequality to reduce computational efforts
during query evaluation. Illustration of these algorithms might be instructive to researchers who develop
query optimizers for other advanced information systems, such as pictorial [6], [8], [28], or spatial database
systems [24], [38]. Below, we describe these algorithms, first focusing on patterns without marks, and then
those with.
5.1 Algorithms for Processing Queries without Marks
For exposition purpose, we use the best-match query as a running example. More general cases can be found
in [32], [39], [40].
Given a file F of n trees and a pattern pa, one straightforward way of finding trees in F that are closest
to pa is to compute the distance between each tree of F and pa, and then search for the trees with minimum
distance. The major problem with this approach is its computational expense, particularly when trees or files
are large.
Our approach instead is to first precompute pairwise distances between trees in F . We then proceed in
stages, picking one tree at a time, comparing it against the pattern, updating the current best-matches (if
necessary), and eliminating certain trees from consideration. Specifically, suppose at some point, we just
computed dist(pa; t i ) for a tree t i in F , and the current best-matching tree is t b . Two cases may arise:
Case 1. dist(pa, t_i) ≥ dist(pa, t_b). In this case, we can eliminate trees t that are farther away from t_i than
dist(pa, t_i) + dist(pa, t_b), or closer to t_i than dist(pa, t_i) − dist(pa, t_b), because they cannot
contribute to solutions. To see this, notice that
dist(pa, t) ≥ dist(t, t_i) − dist(pa, t_i) > dist(pa, t_b) (by the triangle inequality)
or
dist(pa, t) ≥ dist(pa, t_i) − dist(t, t_i) > dist(pa, t_b),
which implies that t would never become the best-matching tree.
Case 2. dist(pa, t_i) < dist(pa, t_b). In this case t_i becomes the current best-matching tree, and we can
eliminate trees t that are farther away from t_i than 2 × dist(pa, t_i) from consideration, because
dist(pa, t) ≥ dist(t, t_i) − dist(pa, t_i) > dist(pa, t_i) (by the triangle inequality).
Moreover, let t_1, ..., t_{i-1} be the trees whose distances to pa have been computed in previous stages. We can
eliminate trees t from consideration where t is farther away from t_j, 1 ≤ j ≤ i − 1, than dist(pa, t_j) + dist(pa, t_i),
or closer to t_j than dist(pa, t_j) − dist(pa, t_i) (if these trees haven't been eliminated from consideration yet).
Thus, by exploiting the triangle inequality and precomputed intra-file distances, we can filter out trees that
could not possibly satisfy the query, given the distances known so far. 11 To expedite the query evaluation, we
also developed a heuristic, called pick least lower bound, for picking trees. The heuristic works by picking its
first tree randomly, and then in subsequent stages, it selects a tree t such that the lower bound of the distance
between t and the given pattern pa is minimized based on all previously computed trees. (These lower
bounds are obtained as the maximum of |dist(pa, t_j) − dist(t, t_j)| over the trees t_j picked previously.)
Intuitively this heuristic uses the lower bound to estimate the exact distance. Thus the tree having the
least lower bound is expected to be (potentially) the closest tree to pa. If several trees have the same
lower bound, the heuristic selects one that has the least upper bound. (The upper bound is obtained as
the minimum of dist(pa, t_j) + dist(t, t_j) over the trees t_j picked previously.) The reason for doing so is that we expect the smaller the difference
between the lower and upper bounds, the more precise the estimated distance is. Ties on the difference are
broken arbitrarily. 12
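A compact sketch (illustrative, not the ATBE query processor) of a best-match search in the spirit of Section 5.1, using precomputed pairwise distances, the pick-least-lower-bound heuristic, and triangle-inequality elimination; the bounds follow the reasoning above, and the exact bookkeeping of the paper's procedure may differ.

```python
def best_match_with_pruning(trees, pairwise, dist_to_pattern):
    """trees: list of tree ids; pairwise[i][j]: precomputed dist(t_i, t_j);
    dist_to_pattern(i): computes dist(pa, t_i) (the expensive call)."""
    candidates = set(range(len(trees)))
    computed = {}                      # i -> dist(pa, t_i)
    best, best_d = None, float("inf")
    while candidates:
        # pick the candidate with the least lower bound on dist(pa, t)
        def lower_bound(i):
            return max((abs(computed[j] - pairwise[i][j]) for j in computed), default=0)
        i = min(candidates, key=lower_bound)
        candidates.remove(i)
        d = dist_to_pattern(i)
        computed[i] = d
        if d < best_d:
            best, best_d = trees[i], d
        # triangle-inequality elimination: t_j is dropped when its distance to pa
        # is provably greater than the current best distance
        candidates = {j for j in candidates if abs(d - pairwise[i][j]) <= best_d}
    return best, best_d
```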
5.2 Algorithms for Processing Queries Containing Bars
In general, the triangle inequality does not hold when the pattern contains marks. Figure 13 illustrates this
case.
Fig. 13. Assume that all edit operations cost one.
• In computing dist(pa, t), the constant nodes in pa would match their corresponding nodes in t, and _x would match
the subtree rooted at c. The resulting distance would be 0.
• In computing dist(t, t'), c would be deleted and all other nodes in t would match their corresponding
nodes in t'. The resulting distance would be 1 (representing the cost of deleting c).
• In computing dist(pa, t'), the constant nodes in pa would match their corresponding nodes in t', and _x would
match either the subtree rooted at d or the subtree rooted at e. In both cases, the resulting
distance would be 3 (representing the cost of deleting the three nodes in the subtree not used for
the instantiation).
Thus, dist(pa, t') > dist(pa, t) + dist(t, t'), and the triangle inequality fails.
11 We put data trees and intra-file distances in UNIX files. The distance information is stored separately from the trees, and
is read into the query processor when needed. Distances between newly inserted trees and all other trees already in the file
are added periodically. Deleted trees are marked; these trees, together with their distances, are erased manually in a periodical
manner as well.
12 We have conducted experiments to evaluate the behavior of the algorithms. The results indicate that the performance of
these algorithms is slightly influenced by file sizes, but is strongly dependent on distance distribution [32], [39]. The algorithms
work well if distance distribution is not seriously skewed. For instance, in processing best-match queries, our algorithms can
eliminate nearly 80% trees, on the average, from consideration for uniformly distributed distances.
However, in very special cases, namely, those in which the pattern contains only bars, we can derive
bounding procedures similar to those used in the mark-free case. Consider a simple case where the pattern pa
contains one bar. Suppose dist(pa, t) = d and dist(t, t') = d' for two arbitrary trees t and t' in F. We claim
that
dist(pa, t') ≤ d + d'.
To show this, consider the set E of edit operations that transform pa to t, and the set E' of edit operations
that transform t to t'. Let P be the path in t that is matched with the bar. Let Ē ⊆ E' be the set of edit
operations applied to P, and let P' be the resulting path in t'. (The length of P' may be zero.) Thus, the
distance between pa and t' is bounded by γ(E) + γ(E' − Ē) ≤ d + d', because at worst, we can match the bar with P' (at zero
cost) and apply the edit operations in E and in E' − Ē to transform the remaining part of pa to t'.
The upper bound offers a useful cut-off criterion to eliminate trees from consideration when searching for
farther trees from the pattern. For example, consider retrieving trees in file F that are most dissimilar to
the pattern pa. Let t_w represent the current worst match. Using our previous arguments, if dist(pa, t_i) ≤ dist(pa, t_w),
we can filter out trees t where dist(t, t_i) ≤ dist(pa, t_w) − dist(pa, t_i), since then dist(pa, t) ≤ dist(pa, t_i) + dist(t, t_i) ≤ dist(pa, t_w).
On the other hand, if dist(pa, t_i) > dist(pa, t_w), t_i becomes the current worst match, and we can eliminate
trees t from consideration where dist(pa, t_j) + dist(t, t_j) ≤ dist(pa, t_i) for some previously computed tree t_j.
This example illustrates how to use the triangle inequality to eliminate irrelevant trees for the worst-match
retrieval. Unfortunately, developing complete cut-off procedures for queries containing all types of marks is a
non-trivial problem. Techniques for optimizing such queries remain to be explored.
6 Conclusions and Future Work
This paper presents an overview of the ATBE system. The system makes several contributions.
1. It is the first system to support approximate tree matching.
2. It supports many useful optional functions such as the ability, while computing the distance between the
pattern and a data tree, to
ffl cut (and prune) those subtrees from the data tree that will minimize the distance;
ffl substitute for those variable length don't care nodes of the data tree that will minimize the distance;
and
ffl instantiate variables placed as leaves in the pattern tree.
3. It provides a query language that allows users to combine a variety of constraints in flexible ways.
We use the editing distance to measure the dissimilarity between two trees. In some applications, researchers
have proposed different measurements for tree matching (see, e.g., [7], [19]). To adapt our system
and develop it into a custom environment for specific distance measures, we have designed the system in a
very modular way, cleanly separating routines for tree comparison from all other routines (i.e., those for query
optimization, graphical interface, etc. Thus, the user can modify those routines without changing the rest
of the system. The modular design also facilitates augmenting the system with additional functions.
Work on ATBE is continuing. We have two main goals.
1. Our system, as well as our algorithms, deal with ordered, labeled trees. In [48], it was shown that the
problem of finding the editing distance between unordered, labeled trees (i.e., trees in which the left-to-
right order of each node's children is unimportant) is NP-complete. We are investigating heuristics for
comparing these trees, and plan to extend our system to handle them.
2. The present lack of techniques for optimizing queries containing variables (or umbrellas) may degrade
the system performance seriously. We are currently working on optimization strategies for such queries.
ATBE is used in several universities. We would be pleased to share ATBE software and experiences with
other groups pursuing relevant research. Readers interested in obtaining the software should send a written
request to any one of the authors.
Acknowledgements
We would like to thank members of the lexical systems project at IBM T. J. Watson Research Center, in
particular B. Boguraev, R. Byrd, M. Chodorow, J. Klavans, M. Neff, and Y. Ravin for helpful discussions
concerning this work. The systems of R. Byrd and M. Chodorow were very inspirational. We would also like
to thank: P. Kilpelainen of the University of Helsinki, for providing valuable comments in using ATBE; our
colleagues A. Howell, M. Smosna and K. Snyder, for assisting us in implementing ATBE; and the anonymous
referees and the editor D. Spooner, for their constructive suggestions that have greatly improved the readability
of this paper.
--R
"Code generation using tree matching and dynamic programming,"
"OQL: A query language for manipulating object-oriented databases,"
"System R: A relational approach to data management,"
"A fast string matching algorithm,"
An Informal Guide to the Lexical Query Language
"Pictorial data-base systems,"
"Waveform correlation by tree matching,"
"Database structure manipulation capabilities of the picture database management system (PICDMS),"
"Extracting semantic hierarchies from a large on-line dictionary,"
"Locating syntactic patterns in text corpora,"
"Approximate pattern matching in a pattern database system,"
Pattern Classification and Scene Analysis
"Representation of random waveforms by relational trees,"
"A generalized query-by-example data manipulation language based on database logic,"
"The noisy substring matching problem,"
"Efficient Tree Pattern Matching,"
"Introducing efficient parallelism into approximate string matching and a new serial algorithm,"
"A tree-matching algorithm based on node splitting and merging,"
"A tree system approach for fingerprint pattern recognition,"
"Approximate matching of regular expressions,"
"Creating and querying hierarchical lexical data bases,"
"Dictionaries, dictionary grammars and dictionary entry parsing,"
"PROBE spatial data modeling and query processing in an image database application,"
"Query processing techniques in the summary-table- by-example database query language,"
"The semantic representation of lexical knowledge,"
"Tidier drawings of trees,"
"An efficient pictorial database system for PSQL,"
"Distance transform for images represented by quadtrees,"
"Comparing multiple RNA secondary structures using tree comparisons,"
"Structural descriptions and inexact matching,"
"New techniques for best-match retrieval,"
"The design and implementation of INGRES,"
"Three dimensional structure of a transfer RNA in two crystal forms,"
"The tree-to-tree correction problem,"
"Time-by-example query language for historical databases,"
"Finding approximate pattern in strings,"
"Design and architectural implications of a spatial information system,"
"Query processing for distance metrics,"
"Query optimization in database and information retrieval systems,"
Reference manual for ATBE: A tool for approximate tree matching
"A tool for tree pattern matching,"
"Tidy drawings of trees,"
Programming Manual
"The editing distance between trees: algorithms and applications,"
"Simple fast algorithms for the editing distance between trees and related problems,"
"On the editing distance between unordered labeled trees,"
"Query-by-example,"
"Office-by-example: A business language that unifies data and word processing and electronic mail,"
--TR
Introducing efficient parallelism into approximate string matching and a new serial algorithm
A tree system approach for fingerprint pattern recognition
Time-by-Example Query Language for Historical Databases
Code generation using tree matching and dynamic programming
Simple fast algorithms for the editing distance between trees and related problems
Query processing techniques in the summary-table-by-example database query language
OQL: a query language for manipulating object-oriented databases
New techniques for best-match retrieval
Query optimization in database and information retrieval systems
On the editing distance between unordered labeled trees
System R
The design and implementation of INGRES
The Tree-to-Tree Correction Problem
Pattern Matching in Trees
A fast string searching algorithm
PROBE Spatial Data Modeling and Query Processing in an Image Database Application
An Efficient Pictorial Database System for PSQL
Query Processing for Distance Metrics
Fast Serial and Parallel Algorithms for Approximate Tree Matching with VLDC''s
The editing distance between trees
--CTR
Pavel Makagonov , Celia B. Reyes Espinosa, Elements and principal stages in the design of non-profit websites, Proceedings of the 2nd WSEAS International Conference on Computer Engineering and Applications, p.115-119, January 25-27, 2008, Acapulco, Mexico
Kemal Oflazer, Error-tolerant tree matching, Proceedings of the 16th conference on Computational linguistics, August 05-09, 1996, Copenhagen, Denmark
Chia-Hsin Huang , Tyng-Ruey Chuang , Hahn-Ming Lee, Fast structural query with application to chinese treebank sentence retrieval, Proceedings of the 2004 ACM symposium on Document engineering, October 28-30, 2004, Milwaukee, Wisconsin, USA
Jason Tsong-Li Wang , Dennis Shasha , George J. S. Chang , Liam Relihan , Kaizhong Zhang , Girish Patel, Structural matching and discovery in document databases, ACM SIGMOD Record, v.26 n.2, p.560-563, June 1997
Sachindra Joshi , Neeraj Agrawal , Raghu Krishnapuram , Sumit Negi, A bag of paths model for measuring structural similarity in Web documents, Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, August 24-27, 2003, Washington, D.C.
Sudarshan S. Chawathe, Comparing Hierarchical Data in External Memory, Proceedings of the 25th International Conference on Very Large Data Bases, p.90-101, September 07-10, 1999
S. Flesca , E. Masciari, Efficient and effective web change detection, Data & Knowledge Engineering, v.46 n.2, p.203-224, August
Xifeng Yan , Philip S. Yu , Jiawei Han, Substructure similarity search in graph databases, Proceedings of the 2005 ACM SIGMOD international conference on Management of data, June 14-16, 2005, Baltimore, Maryland
Alfredo Ferro , Giovanni Gallo , Rosalba Giugno , Alfredo Pulvirenti, Best-Match Retrieval for Structured Images, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.23 n.7, p.707-718, July 2001
Xifeng Yan , Feida Zhu , Philip S. Yu , Jiawei Han, Feature-based similarity search in graph structures, ACM Transactions on Database Systems (TODS), v.31 n.4, p.1418-1453, December 2006
Marcello Pelillo , Kaleem Siddiqi , Steven W. Zucker, Matching Hierarchical Structures Using Association Graphs, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.21 n.11, p.1105-1120, November 1999
Jason Tsong-Li Wang , Gung-Wei Chirn , Thomas G. Marr , Bruce Shapiro , Dennis Shasha , Kaizhong Zhang, Combinatorial pattern discovery for scientific data: some preliminary results, ACM SIGMOD Record, v.23 n.2, p.115-125, June 1994
Luca Lombardi , Alfredo Petrosino, Distributed recursive learning for shape recognition through multiscale trees, Image and Vision Computing, v.25 n.2, p.240-247, February, 2007
Michal A. van Wyk , Tariq S. Durrani , Barend J. van Wyk, A RKHS Interpolator-Based Graph Matching Algorithm, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.24 n.7, p.988-995, July 2002
Philip Bille, A survey on tree edit distance and related problems, Theoretical Computer Science, v.337 n.1-3, p.217-239, 9 June 2005
Erhard Rahm , Philip A. Bernstein, A survey of approaches to automatic schema matching, The VLDB Journal The International Journal on Very Large Data Bases, v.10 n.4, p.334-350, December 2001
Didier Dubois , Henri Prade , Florence Sdes, Fuzzy Logic Techniques in Multimedia Database Querying: A Preliminary Investigation of the Potentials, IEEE Transactions on Knowledge and Data Engineering, v.13 n.3, p.383-392, May 2001
Dennis Shasha , Jason T. L. Wang , Rosalba Giugno, Algorithmics and applications of tree and graph searching, Proceedings of the twenty-first ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, June 03-05, 2002, Madison, Wisconsin | semantic taxonomy;trees mathematics;molecular biology;approximate-tree-by-example;approximate tree matching;labeled trees;tree data structures;programming compilation;database theory;search problems;pattern recognition;natural language processing;vision;RNA structures;automatic error recovery;dictionary definitions;query languages;query language |
627641 | Compiling Conceptual Graphs. | AbstractThis paper addresses problems in conceptual graph implementation: subsumption and classification in a taxonomy. Conceptual graphs are typically stored using a directed acyclic graph data structure based on the partial order over conceptual graphs.We give an improved algorithm for classifying conceptual graphs into this hierarchy. It prunes the search space in the database using the information gathered while searching.We show how conceptual graphs in this hierarchy can be compiled into instructions which represent specialized cases of the canonical formation rules. This compiles subsumption of conceptual graphs and compresses knowledge in a knowledge base. Conceptual graphs are compiled as differences between adjacent graphs in the hierarchy. The differences represent the rules used in deriving the graph from the adjacent graphs. We illustrate how the method compresses knowledge bases in some experiments.Compilation is effected in three ways: removal of redundant data, use of simple instructions which ignore redundant checks when performing matching, and by sharing common processing between graphs. | Introduction
A central element of many natural language processing,
information retrieval, and knowledge based systems is a
large collection of information. This information may be
viewed as a large set of sentences. This paper concentrates
on the problem of answering queries on this set of sentences.
A query is a sentence. The question is whether the sentence
is implied by the set of sentences. Sentences that imply the
query sentence may be extracted as answers. This paper
discusses methods which seek answers which are explicit
sentences in the collection, rather than answers that can be
deduced from more than one sentence in the collection. The
method has been designed with the intention of extending
to handle the latter case in the future.
For example the query "Is there a person eating pie?" on
a set of sentences may extract the answers "A girl is eating
pie fast" and "A girl, Sue, is eating pie in the kitchen".
This example illustrates that there is typing information
embedded in the sentences. When searching for sentences
that imply something about people, sentences containing
information about the subtype girls were considered.
The research of G. Ellis was supported by a University of Queensland
Postgraduate Scholarship while enrolled in the PhD programme at Key
Centre for Software Technology, University of Queensland, QLD, 4072,
Australia. He is currently with the Dept. of Computer Science, Royal
Melbourne University of Technology, Victoria 3001, Australia. This work
was also supported by the Baskin Center at the University of California,
Santa Cruz, USA, during a visit there.
This paper is not concerned with the natural language
front-end of this system, rather, it concentrates on work at
the internal level where sentences are encoded in the conceptual
graph knowledge representation [14]. The knowledge
base is a set of conceptual graphs, each graph representing
a sentence. Queries are conceptual graphs which
are checked for subsumption against graphs in the knowledge
base.
The method outlined here constructs a directed acyclic
graph representing the partial order over conceptual graphs.
Nodes of the hierarchy are conceptual graphs. Querying
the set of graphs is achieved by selecting paths through
the hierarchy. By using the ordering information in the
hierarchy many of the graphs in the knowledge base are
eliminated from consideration in the search. The hierarchy
is a content addressable memory. The content of the
query determines its position in the hierarchy. The solutions
are ordered in the subhierarchies of the immediate
specializations of the query.
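As a rough illustration of this use of the hierarchy (and only of the search idea, not the compiled-instruction scheme developed later in the paper), the sketch below walks a generalization DAG top-down and, once a node is found to be subsumed by the query, collects its entire subhierarchy without further subsumption tests; `subsumes(a, b)` is a placeholder for a conceptual graph subsumption test, and graphs are assumed to be hashable identifiers.

```python
from collections import deque

def specializations_of(query, hierarchy, top, subsumes):
    """Collect all graphs in the hierarchy that are subsumed by (i.e. imply) the query.

    hierarchy: dict mapping each graph to its list of immediate specializations.
    top: the most general node of the DAG.
    subsumes(a, b): True if graph a subsumes (is a generalization of) graph b.
    """
    answers, seen = set(), set()
    queue = deque([top])
    while queue:
        g = queue.popleft()
        if g in seen:
            continue
        seen.add(g)
        if subsumes(query, g):
            # everything below g is also subsumed by the query:
            # collect it without any further subsumption tests
            stack = [g]
            while stack:
                h = stack.pop()
                if h not in answers:
                    answers.add(h)
                    seen.add(h)
                    stack.extend(hierarchy.get(h, []))
            continue
        queue.extend(hierarchy.get(g, []))
    return answers
```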
Levinson [5] developed a method for chemical graphs on
which the method in this paper is based. In more recent
work [6], Levinson has adapted his method to conceptual
graphs and developed hybrid indexing mechanisms.
Garner and Tsui [4] added the idea of storing graphs as
the differences between adjacent graphs in the hierarchy.
This has the potential to save store.
The method described here takes the differences idea
further. In [4], a graph is reconstructed from the differences
when traversing the hierarchy. This graph is then
compared to the query by using a general subsumption
algorithm. Our method differs from this in a number of
ways. We use different differences: rather than the incident
arcs being labelled with the differences, we label only
the node representing the conceptual graph. The differences
are between a graph and all of its adjacent graphs,
rather than a single adjacent graph. This method is especially
suited to the topological search method proposed
by Levinson [6]. Another difference is the interpretation of
the differences. The differences between graphs represent
instructions which are specialized cases of the canonical
formation rules of conceptual graph theory. An instruction
performs part of the matching of the database graph
with the query graph. The canonical formation rules are
the basis of the partial ordering defined over conceptual
graphs. Rather than just reconstructing the graphs, the
differences are applied to the query graph using the mappings
of the adjacent graphs into the query graph. In many
cases if the adjacent graphs have already been compared
to the query, the differences need only be mapped into the
query graph to implement the comparison.
Our method achieves compilation of conceptual graphs
in three ways: removal of redundant data, use of simple
instructions which ignore redundant checks when performing
matching, and by sharing common processing between
graphs.
Section II introduces basic conceptual graph theory. Section
III outlines algorithms and data structures used to
store and retrieve conceptual graphs. Section IV explains
what we mean by compilation of conceptual graphs in the
generalization hierarchy. Section V gives descriptions of
instructions which are specialized cases of the canonical
formation rules. A small example database is compiled. A
query on the compiled database is then examined. Section
VI details some experiments on compressing some
knowledge bases, and discusses ramifications for compilation.
II. What are Conceptual Graphs
Conceptual Graphs [14] is a system of logic based on
Charles Sanders Peirce's Existential Graphs [12]. Conceptual
graphs have the full power of first-order logic, can
represent modal and higher-order logic, and have simple
and elegant inference rules. Conceptual graphs also have a
direct translation into natural language. The following is
a short introduction to the basic formalism. The reader is
advised to read [14] for a more thorough understanding.
A conceptual graph is a finite, connected, bipartite graph.
The two kinds of nodes are concepts and conceptual rela-
tions. Every conceptual relation has one or more arcs, each
of which must be linked to some concept. A single concept
by itself may form a conceptual graph, but every conceptual
relation must be linked to some concept.
The function type maps concepts into a set T whose elements
are type labels. The function referent maps concepts
into a set I = {#1, #2, ...} of individual markers
or the generic marker *. An individual marker is a surrogate
for some individual in the real world, a perceived
world, or a hypothetical world. The label of concept c,
lab(c), is the pair (t, r) where type(c) = t
and referent(c) = r. A concept may be displayed in the
linear form as [t: r]. For example, the concept [Person: *]
or more simply [Person] represents an unspecified person,
and may be read A person. A box replaces the square
brackets in the graphical form.
The partial order ≤ over the type labels in T, known
as the type hierarchy, forms a lattice, called the type lat-
tice. The type hierarchy makes analytic statements about
types: they must be true by intension. The statement
Girl ≤ Person is true, because the properties of a person
are also associated with a girl.
The minimal common supertype of a pair of type labels
s and t is written s ∪ t. The maximal common subtype
is written s ∩ t. There are two primitive type labels: the
universal type ⊤ and the absurd type ⊥. For any type label
t, ⊥ ≤ t ≤ ⊤. The minimal common supertype of Cat
and Dog could possibly be Carnivore depending on the
hierarchy. The maximal common subtype of Pet and Cat
is PetCat. The maximal common subtype of Cat and Dog
is ⊥ (absurd), which means that it is logically impossible
for an entity to be both a dog and a cat.
The denotation of type t, written δt, is the set of all
entities that are instances of any concept of type t. For
extensions, the union δCat ∪ δDog is the set of all cats and
dogs in the world and nothing else. For intensional type
labels, Cat ∪ Dog is their minimal common supertype Car-
nivore, which also has subtypes Bear, Weasel, Skunk, etc.
The type lattice represents categories of thought, and the
lattice of sets and subsets represents collections of existing
things. The two lattices are not isomorphic, and the denotation
operator that maps one into the other is neither
one-to-one nor onto.
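As a concrete illustration, the subtype test and the minimal-common-supertype operation can be computed directly from a stored hierarchy. The following is a minimal Python sketch over a hypothetical lattice fragment; the type names and the encoding are assumptions for illustration, not the paper's data structures, and the meet operation is the dual of the join shown.

```python
# Minimal sketch of a type lattice: each type maps to its direct supertypes.
# leq(s, t) tests s <= t; join(s, t) returns the minimal common supertypes.
class TypeLattice:
    def __init__(self, parents):
        self.parents = parents                      # type -> set of direct supertypes

    def ancestors(self, t):
        seen, stack = {t}, [t]
        while stack:
            for p in self.parents.get(stack.pop(), ()):
                if p not in seen:
                    seen.add(p)
                    stack.append(p)
        return seen

    def leq(self, s, t):                            # s is a subtype of t
        return t in self.ancestors(s)

    def join(self, s, t):                           # minimal common supertypes (s "union" t)
        common = self.ancestors(s) & self.ancestors(t)
        return {x for x in common
                if not any(y != x and self.leq(y, x) for y in common)}

# Hypothetical fragment loosely modelled on the canon of Fig. 5.
lattice = TypeLattice({
    "Girl": {"Person"}, "Person": {"Animal"}, "Dog": {"Carnivore"},
    "Cat": {"Carnivore", "Pet"}, "PetCat": {"Pet", "Cat"},
    "Carnivore": {"Animal"}, "Pet": {"Animal"}, "Animal": {"TOP"},
})
print(lattice.leq("Girl", "Person"))                # True
print(lattice.join("Cat", "Dog"))                   # {'Carnivore'}
```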
The function type also maps conceptual relations to type
labels. A relation r with type(r) = t may be written (t) in
the linear form. An ellipse replaces the parentheses in the
graphical form. For two relations to have the same type
they must have the same number of arcs. Concepts and
conceptual relations have no type in common.
The conformity relation :: relates type labels to individual
markers: if t :: i is true, then i is said to conform to type
t. The conformity relation obeys the following conditions:
• The referent of a concept must conform to its type
label: if c is a concept, type(c) :: referent(c). For
example the concept [Integer: 1] is well-formed, but
[Integer: 3.14] is not.
• If an individual marker conforms to type s, it must
also conform to all supertypes of s: if s ≤ t and s :: i,
then t :: i. For example the number 3 conforms to the
type Prime, Prime::3. Hence it also conforms to the
supertype Integer, Integer::3.
• If an individual marker conforms to types s and t, it
must also conform to their maximal common subtype:
if s :: i and t :: i, then (s ∩ t) :: i. For example, since 3
conforms to types Odd and Prime: Odd::3, Prime::3,
then 3 also conforms to their maximal common subtype
OddPrime, OddPrime::3.
• Every individual marker conforms to the universal type
⊤; no individual marker conforms to the absurd type
⊥: for all i in I, ⊤ :: i, but not ⊥ :: i.
• The generic marker * conforms to all type labels: for
all type labels t, t :: *.
The operator φ maps conceptual graphs into formulas in
first order predicate calculus: generic concepts map to existentially
quantified variables, individual concepts map
to constants, concept types map to monadic predicates, and
conceptual relations map to predicates over their arguments.
Alternatively, conceptual graphs could also
be mapped into a modern typed logic.
[Fig. 1: Two canonical graphs. b: [Girl]<-(Agent)<-[Eat]->(Manner)->[Fast]; c: [Person: Sue]<-(Agent)<-[Eat]->(Object)->[Pie].]
A. Canonical Graphs
To distinguish the meaningful graphs that represent real
or possible situations in the external world, certain graphs
are declared to be canonical. One source is the derivation
of new canonical graphs from other canonical graphs by
formation rules.
There are five canonical formation rules for deriving a
conceptual graph w from conceptual graphs u and v [14,11];
a small sketch of these rules as operations on a simple graph
data structure follows the list.
• copy(u). w is an exact copy of u.
• restrict(u; c; l). For any concept c in u, type(c) may
be replaced by a subtype t; if c is generic, its referent
may be changed to an individual marker i.
These changes are permitted only if referent(c)
conforms to type(c) before and after the change, that
is, t :: i.
• simplify(u; r; s). If relations r and s in
the graph u are duplicates, then one of them may be
deleted from u together with all its arcs.
• Join(u; c; d). If a concept c in u is identical to a concept
d in u, then Join(u; c; d) is the graph obtained
by deleting d and linking to c all arcs of conceptual
relations that had been linked to d.
• Fuse(u; v; c; d). Let u and v be two disjoint conceptual
graphs. If a concept c in u is identical to a concept d in
v, then Fuse(u; v; c; d) is the graph obtained by deleting
d and linking to c all arcs of conceptual relations
that had been linked to d.
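The sketch below renders restrict, join and simplify over a deliberately simple dictionary-based graph encoding. It is a hypothetical illustration only, not the representation used later in the paper; conformity and canonicity checks are omitted, and Fuse differs from join only in that the two concepts come from disjoint graphs whose node sets are first merged.

```python
# Hypothetical encoding: concepts and relations keyed by identifiers such as
# "c1", "r1"; a relation stores its type and the list of concepts on its arcs.
from copy import deepcopy

def restrict(u, c, new_type=None, new_referent=None):
    w = deepcopy(u)
    t, r = w["concepts"][c]
    w["concepts"][c] = (new_type or t, new_referent or r)   # conformity check omitted
    return w

def join(u, c, d):
    w = deepcopy(u)                       # d is deleted; its arcs are moved to c
    del w["concepts"][d]
    for rid, (rt, args) in w["relations"].items():
        w["relations"][rid] = (rt, [c if a == d else a for a in args])
    return w

def simplify(u, r, s):
    w = deepcopy(u)
    if w["relations"][r] == w["relations"][s]:               # only duplicates may go
        del w["relations"][s]
    return w

b = {"concepts": {"c1": ("Eat", "*"), "c2": ("Fast", "*"), "c3": ("Girl", "*")},
     "relations": {"r1": ("Manner", ["c1", "c2"]), "r2": ("Agent", ["c1", "c3"])}}
d = restrict(b, "c3", new_referent="Sue")                    # [Girl] becomes [Girl: Sue]
```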
To illustrate the formation rules, Fig. 1 shows two canonical
graphs. Each concept and relation is identified with ci
and rj, respectively, and each graph is also labelled for
reference in the text. The graph b may be read A girl
is eating fast; and the graph c, A person, Sue, is eating
pie. These are not formal translations of the graphs, but
informal verbalizations for discussion of the graphs here.
The graph d in Fig. 2 shows the result of restricting the
concept c3 [Girl] in the graph b in Fig. 1 to [Girl: Sue].
The graph e is the result of restricting the type Person
in the concept c7 in graph c to type Girl. Before doing
the restrictions, the conformity relation must be checked
to ensure that Girl :: Sue is true.
The identical concepts c3 and c7 [Girl: Sue] in d and e
in Fig. 2 can be fused together to form a single graph x9 in
Fig. 3. Then the identical concepts c1, c5 [Eat] in x9 can
be joined together to produce x10.
In Fig. 3, the graph x10 can be simplified by removing
one of the duplicate relations r2 and r4 (Agent) resulting in
graph f in Fig. 4.
[Fig. 2: Restriction of the two graphs in Fig. 1. d: [Girl: Sue]<-(Agent)<-[Eat]->(Manner)->[Fast]; e: [Girl: Sue]<-(Agent)<-[Eat]->(Object)->[Pie].]
[Fig. 3: Join of the two graphs in Fig. 2: fusing the identical concepts [Girl: Sue] gives x9; joining the identical concepts [Eat] gives x10, which still carries duplicate (Agent) relations.]
Two conceptual relations of the same type
are duplicates if for each i, the ith arc of one is linked to the
same concept as the ith arc of the other. The graph f may
be read A girl, Sue, is eating pie fast. The simplification
rule corresponds to the rule of logic that R(x, y) ∧ R(x, y)
is equivalent to just R(x, y).
The formation rules are a kind of graph grammar for
canonical graphs. Besides defining syntax, they also enforce
certain semantic constraints. The formation rules
make no guarantee about truth or falsity. However, the
formation rules are refutation rules. If we assert that the
graph [Person]<-(Agent)<-[Eat]->(Object)->[Pie]
is false (No person is eating a pie), then we can use the
formation rules to show that [Person: Sue]<-(Agent)<-[Eat]->(Object)->[Pie]
is false (Sue is not eating a pie). That is, if a graph can be
derived from a false graph, then it must in turn be false.
The formation rules are falsity preserving.
The canon contains the information necessary for deriving
a set of canonical graphs. It has four components: a
type hierarchy T , broken into a concept hierarchy T c and
relation hierarchy T r ; a set of individual markers I; a conformity
relation :: that relates labels in T to markers in I;
and a finite set of conceptual graphs B, called the canonical
basis, with all type labels in T and all referents either *
or markers in I. The canonical graphs are the closure of
B under the canonical formation rules. Fig. 5 shows the
canon used in this paper.
[Fig. 4: Simplification of Fig. 3, giving graph f: [Girl: Sue]<-(Agent)<-[Eat]->(Manner)->[Fast] with [Eat]->(Object)->[Pie].]
[Fig. 5: A sample canon: a concept type hierarchy (Entity, Act, Event, PhysicalObject, Animate, Animal, Person, Girl, Give, Eat, Food, Pie, Place, Kitchen, Attribute, Fast), a relation hierarchy (Agent, Object, Manner, Location, Attribute), the conformity relation Girl::Sue, and a canonical basis.]
B. The Relationship between the Canonical Formation Rules
and Subsumption of Conceptual Graphs
If a conceptual graph u is canonically derivable from a
conceptual graph v (possibly with the join of other conceptual
graphs w1, . . . , wn), then u is called a specialization of
v, written u ≤ v, and v is called a generalization of u.
Generalization defines a partial ordering of conceptual
graphs called the generalization hierarchy. The ordering is
reflexive, transitive, and antisymmetric. For any conceptual
graphs u, and v, the following properties are true:
• Subgraph. If v is a subgraph of u, then u ≤ v.
• Subtypes. If u is identical to v except that one or more
type labels of v are restricted to subtypes in u, then
u ≤ v.
• Individuals. If u is identical to v except that one or
more generic concepts of v are restricted to individual
concepts of the same type, then u ≤ v.
• Top. The graph [⊤] is a generalization of all other
conceptual graphs.
The graphs in Figs. 1, 2, and 3 are all generalizations
of the graph in Fig. 4. We call the graphs defined so far
atomic conceptual graphs (ACGs). They do not contain
logical connectives, and hence no quantification other
than the default existential quantification, nor have we considered
definitions of concepts and relations. A subsumption
test for ACGs can be implemented as subgraph morphism
modulo subtyping and individuation.
The generalization hierarchy is not a partial order over
conceptual graphs as stated in [14], rather it is a partial order
over equivalence classes of conceptual graphs.
[Fig. 6: A query graph u, read "A girl, Sue, is eating pie fast": [Girl: Sue]<-(Agent)<-[Eat]->(Manner)->[Fast] with [Eat]->(Object)->[Pie].]
Consider
the graphs u: [Person]<-(Agent)<-[Eat]->(Agent)->[Person] and v: [Person]<-(Agent)<-[Eat].
The graph v is a proper subgraph of u. The graph u can be
derived from v by joining a copy of v on the concept [Eat],
thus u ≤ v. However, v can be derived from u by joining
the two identical concepts [Person], then simplifying the
duplicate (Agent) relations, thus v ≤ u. Hence u ≡ v according
to the generalization hierarchy. This property of
the generalization hierarchy has also been noted independently
in [15].
If u ≤ v, a canonical derivation of u from v corresponds
to the reverse of a proof of the formula φv from the formula
φu. For any conceptual graphs u and v, if u ≤ v, then
φu ⊃ φv. The result that the two graphs given in the
paragraph above are equivalent should not be surprising
considering their translations into sorted logic.
This is a subtle point that doesn't affect any of the subsequent
theory of conceptual graphs. In practice, graphs
with redundant branches can always be simplified to derive
the smallest one in each equivalence class. Conceptual
graphs from now on are assumed to be the minimal element
of their class.
For any conceptual graphs u and v where u ≤ v, there
must exist a mapping π: v → u, where πv is a subgraph
of u called a projection of v in u. The projection operator
π has the following properties:
• For each concept c in v, πc is a concept in πv where
type(πc) ≤ type(c); if c is individual, then referent(πc) =
referent(c).
• For each conceptual relation r in v, πr is a conceptual
relation in πv where type(πr) = type(r). If the ith arc
of r is linked to a concept c in v, the ith arc of πr must
be linked to πc in πv.
For example, the projection of graph c in Fig. 1 into the
graph u in Fig. 6 is the subgraph of u obtained under the
subtype restrictions Person ≥ Girl and Act ≥ Eat.
A graph v can be represented by the set of instances of
the canonical formation rules used to construct the graph
v from the graphs w1, . . . , wn. To test if the graph v subsumes
a graph u, these rule instances can be applied to
the projections of w1, . . . , wn in u. The rule instances will
only succeed if v subsumes u. We use this technique to
compile conceptual graphs in a data structure representing
the generalization hierarchy partial ordering.
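For reference, a naive projection test (subsumption as subgraph morphism modulo subtyping and individuation) can be sketched as a backtracking search over concept assignments. The dictionary encoding is the same hypothetical one as in the earlier sketch; real systems avoid this exhaustive search, which is exactly what the compilation technique below is for.

```python
def type_leq(s, t, parents):
    """s <= t in the type hierarchy given by the direct-supertype map `parents`."""
    seen, stack = {s}, [s]
    while stack:
        x = stack.pop()
        if x == t:
            return True
        for p in parents.get(x, ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return False

def project(v, u, parents):
    """Return a projection pi mapping the generalization v into u, or None."""
    vcs = list(v["concepts"])

    def extend(i, pi):
        if i == len(vcs):                          # all concepts placed: check relations
            images = list(u["relations"].values())
            for rt, args in v["relations"].values():
                if (rt, [pi[a] for a in args]) not in images:
                    return None
            return pi
        c = vcs[i]
        vt, vr = v["concepts"][c]
        for d, (ut, ur) in u["concepts"].items():
            if type_leq(ut, vt, parents) and (vr == "*" or vr == ur):
                result = extend(i + 1, {**pi, c: d})
                if result is not None:
                    return result
        return None

    return extend(0, {})
```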
Now that we have the theory of atomic conceptual graphs
we will consider how to store large sets of conceptual graphs
and how to retrieve conceptual graphs once stored.
[Fig. 7: The generalization hierarchy of the graphs in Figs. 1, 2, and 4: the top graph a and the basis graphs B1, B2, B3 above b, c, d, e, and f.]
III. Storing and Retrieving Conceptual Graphs
The common data structure used to store conceptual
graphs is a hierarchy; a directed acyclic graph representing
the non-transitive links of the partial ordering, generalization
hierarchy, over conceptual graphs [10,6,2,3]. Levin-
son's earlier work used a similar data structure for organizing
chemical graphs [5]. The taxonomy over KL-ONE
concept descriptions [13] is a hierarchy.
A. The Generalization Hierarchy as a Data Structure for
Storing Conceptual Graphs
The nodes in the generalization hierarchy are conceptual
graphs and the arcs represent the non-transitive ordering
between the graphs. In Fig. 7 the hierarchy is given for
the graphs from the previous section. The canonical basis
in this example consists of the set of graphs {B1, B2, B3}.
The arc (b, d) indicates each of the following:
1. A girl, Sue, is eating fast is canonically derivable from
A girl is eating fast.
2. A girl, Sue, is eating fast implies A girl is eating fast.
3. A girl is eating fast is a generalization of A girl, Sue,
is eating fast.
4. A girl, Sue, is eating fast is a specialization of A girl
is eating fast.
In the following sections we examine how to use the hierarchy
for searching a set of conceptual graphs and how
to construct the hierarchy.
B. Searching for a Conceptual Graph in the Generalization
Hierarchy
The generalization hierarchy indexes the knowledge base.
When we apply a conceptual graph query u to the knowledge
base we search for u in the hierarchy. The hierarchy
is a content addressable memory.
Fig. 8 illustrates the search space for a query u in a
hierarchy.
[Fig. 8: The search space for the graph u in a generalization hierarchy: between the top object [⊤] and the bottom object lie the atoms (primitives), the coatoms, objects more general than u, objects more specific than u, objects unifiable with u, and objects non-unifiable with u.]
Atoms or Primitives are the graphs closest to
the concept [⊤], which are not derivable from any other
graphs. Coatoms are the leaf nodes of the knowledge base.
The generalization space contains all the generalizations
of u in the hierarchy. The specialization space or solution
space contains all the specializations of u in the hierar-
chy. Immediate generalizations (Parents) and immediate
specializations (Children) of u are adjacent generalizations
and specializations of u respectively.
In Fig. 8, u is explicitly stored in the hierarchy. How-
ever, in many cases u will not be stored in the hierarchy
explicitly. The search for u can proceed in two directions:
top-down, from the graph [?] to u or bottom-up, from the
coatoms to u. The methods we examine here search top-down
Consider a depth-first search of the generalization space.
Any path in the generalization space can be taken as they
all lead to u in the hierarchy. Consider searching for the
query u
[Girl: Sue]<-(Agent)<-[Eat]->(Manner)->[Fast]?
Is the girl, Sue, eating fast, in the hierarchy in Fig. 7. The
query u matches the graph d in the hierarchy. The search
starts at the graph [⊤]. To find the graph, search from the
children of [⊤] for a generalization of u. The children of v
is the set of immediate specializations of v. The children
of a, B1, B2, B3, b, c, d, e, and f are {B1, B2, B3}, {b},
{b, c}, {c}, {d}, {e}, {f}, {f}, and {} respectively.
In a depth-first search we could select the first graph in
the children which is a generalization of the query u as a
continuation in the path to u. The basis graph B1 is a generalization
in the first children set, so we select it. So are
B2 and B3, so they could equally be chosen. There is only
one child of B1, b, which is also a generalization, so we select
it. We now search the children of b for a generalization.
The graph d is the only child and it is a generalization of u.
In fact d is isomorphic to u. The query graph is matched
so the search terminates successfully. In this case there are
two solutions {d, f}. The English answers to the question
Is the girl, Sue, eating fast? are d - yes; and f - yes, Sue is
eating pie fast.
[Fig. 9: A generalization hierarchy containing the graphs a, b (A person is eating), c (A pie is being eaten), d (A person is eating food in the kitchen), e (A girl is eating pie fast), and f (A girl, Sue, is eating pie in the kitchen).]
The search does not necessarily start from the graph [⊤].
Indexing techniques can be used to start further down the
generalization space. The ultimate goal of indexing techniques
is to index directly to the top of the specialization
space which includes u (see [8] for indexing techniques).
C. Inserting a Conceptual Graph into the Generalization
Hierarchy: Classification
To insert a graph u into the hierarchy we need to compute
the set of immediate generalizations and the set of
immediate specializations of u in the hierarchy. This information
gives us the virtual location for inserting u.
Consider inserting the graph u: [Person]<-(Agent)<-[Eat]->(Object)->[Pie],
read A person is eating pie, into the hierarchy in Fig. 9.
The immediate generalizations in this case are b, A person
is eating, and c, A pie is being eaten. The immediate specializations
are f, A girl, Sue, is eating pie in the kitchen,
and e, A girl is eating pie fast. Notice that d, A person
is eating food in the kitchen, and u are incomparable. To
insert u we remove the arcs (b, e), (c, e), and (c, f), then
add (b, u), (c, u), (u, f) and (u, e) to get the new hierarchy
in Fig. 10.
D. Searching the Generalization Space
Woods [16] describes the standard two phase breadth-first
search used for classification of KL-ONE like terms in
a taxonomy. The first phase calculates the set of immediate
predecessors, IP (generalizations), of the query by
breadth-first search of the generalization space. The second
phase breadth-first searches the subhierarchies of the
immediate predecessors calculated from the first phase, the
first specializations encountered in the hierarchies are the
immediate successors, IS, of the query.
Woods [16] in summarizing research on classification says
"More sophisticated algorithms can and should be devel-
oped."
[Fig. 10: The generalization hierarchy in Fig. 9 after inserting u (A person is eating pie).]
In [6], Levinson describes algorithms which show deeper
insights into the problem. Subsumption can in general be
an expensive operation. Hence methods of classification
that avoid as many subsumption tests as possible are de-
sirable. The algorithm given here for inserting an object
into a hierarchy improves the implementation of Levinson's
method, but does the same number of graph comparisons
in each phase.
procedure insert(u)
begin
  IP := immediate_predecessors(u);
  if IP ≠ {u} then
  begin
    IS := immediate_successors(IP, u);
    insert(u, IP, IS)
  end
end
Fig. 11: Insert u in a partial order
Consider the algorithm for insert in Fig. 11. The first
phase of computing the immediate predecessors, IP , is
done by the function immediate predecessors(u) in Fig. 13.
If u is already stored in the hierarchy, u is returned rather
than u's immediate predecessors and the second phase is
avoided. Otherwise the subhierarchies of the members of
IP are searched using immediate successors(IP , u) in
Fig. 14. Once the sets IP , and IS are found then the
procedure insert(u, IP , IS) in Fig. 12 does the necessary
housekeeping linking u to immediate predecessors and immediate
successors. The procedure also maintains levels
of graphs in the hierarchy. This information is used to
traverse the hierarchy in topological order.
Levinson [6] pruned the search space using the fact that
a graph is only in the generalization space (generalizations
of the query u) if all of its immediate predecessors are also
in the generalization space. Levinson does this by sorting
the hierarchy by size of the graphs and traversing the hierarchy
in this order. Size was a necessary requirement for
procedure insert(u, IP, IS)
begin
  for each v ∈ IP do
    for each w ∈ IS do remove (v, w) if present;
  for each v ∈ IP do add (v, u);
  for each w ∈ IS do add (u, w);
  u.level := 1 + max{v.level : v ∈ IP};
  propagate_level(IS)
end
Fig. 12: Insert u in the partial order, given its neighbourhood
(IP, IS).
ordering the kinds of graphs that Levinson was working
with. However, for conceptual graphs size is not a necessary
requirement.
Topological order is the level order of a hierarchy. This
is reflected for each node by the distance the node is from
the top. For example, in Fig. 9 the graph a is on level 0,
graphs b and c are on level 1, graphs d and e are on level
2, and graph f is on level 3.
To see why topological order is a more efficient search
method than depth-first or breadth-first search in terms of
avoiding comparing objects in the hierarchy consider the
hierarchy in Fig. 9. Remember that all immediate predecessors
of a graph must be compared before that graph.
The immediate predecessors of the graph must be generalizations
of the query if the graph is also a generalization.
Assume the query is u in Fig. 10, and hence the generalizations
of u are a, b, and c. A breadth-first traversal of
the hierarchy in Fig. 9 (reading right to left): a, c, b, e,
f, d; would compare f. A depth-first traversal (from right
to left) would be {a, c, e, f, b, d}, also comparing f. A
topological traversal (also reading right to left): a, c, b, e,
d, f; would not compare f, since d is encountered beforehand
and is noted as being incomparable to u. Topological
search ensures that all predecessors of an element v are
seen before v.
In the algorithm in Fig. 13, the level information associated
with each graph in the hierarchy is used to traverse
the hierarchy in topological order. The queue used for this
modified breadth-first search is a minimum priority queue.
Priority is given to elements with the smallest level num-
ber. Using an array of FIFO queues we can enqueue and
dequeue from this priority queue in constant time. Enqueuing
the weighted element (i; u) involves adding u to
the front of the ith queue. Dequeuing involves removing
the first element on the current minimum weighted queue.
Whenever the current minimum queue j becomes empty
the (weight) index is incremented. Traversing the hierarchy
by level order maintains the property that nothing is ever
enqueued below the current minimum level, so the index never needs to move back.
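A minimal sketch of such a level-indexed priority queue (an array of FIFO queues with a monotonically advancing minimum index) is given below; constant-time operation relies on the assumption, guaranteed by level-order traversal, that nothing is enqueued below the current minimum level.

```python
from collections import deque

class LevelQueue:
    """One FIFO per level; dequeue always serves the smallest non-empty level."""
    def __init__(self, max_level):
        self.buckets = [deque() for _ in range(max_level + 1)]
        self.current = 0
        self.count = 0

    def enqueue(self, level, item):
        assert level >= self.current          # level-order traversal guarantees this
        self.buckets[level].append(item)
        self.count += 1

    def dequeue(self):
        while not self.buckets[self.current]:
            self.current += 1                 # advance past exhausted levels
        self.count -= 1
        return self.buckets[self.current].popleft()

    def empty(self):
        return self.count == 0
```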
The sets IP (v) and IS(v) correspond to the stored sets
of immediate predecessors of v and immediate successors
of v, respectively, where v is already stored in the hierar-
function immediate_predecessors(u)
begin
  IP := {};
  Q.enqueue(level(⊤), ⊤);
  while not Q.empty() do
  begin
    v := Q.dequeue();
    if (predecessors_match(v) ∧ u ≤ v) then
    begin
      for each w ∈ IP(v) do
        IP := IP - {w};
      IP := IP + {v};
      for each w ∈ IS(v) do Q.enqueue(level(w), w)
    end
  end;
  return IP
end
Fig. 13: Find the immediate predecessors of u
chy. In the algorithm the call to subsumption u ≤ v is
guarded by the test predecessors_match(v). The topological
traversal guarantees that all of v's predecessors are seen
before v. The predicate predecessors_match(v) is true if
all of v's predecessors are predecessors of the query u. In
depth-first search the object v is compared if v has an
immediate predecessor which is a predecessor of the query
u. In breadth-first search an object v is compared to u
if one or more (but not necessarily all) of v's immediate
predecessors are predecessors of u. Thus the precondition
for checking if v is a predecessor of the query u is stronger
in a topological search. If v is a predecessor, add it to the
set IP , and remove all of v's immediate predecessors from
IP , then search the successors of v for closer predecessors
of the query.
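Putting the pieces together, phase one can be sketched roughly as follows, assuming the LevelQueue above, an adjacency table hierarchy[v] = (level, IP(v), IS(v)) with a distinguished TOP node, and a subsumption test subsumes(u, v) meaning u ≤ v. This is a paraphrase of Fig. 13, not the paper's code.

```python
def immediate_predecessors(u, hierarchy, subsumes, max_level):
    TOP = "TOP"
    generalizations = {TOP}               # graphs already known to generalize u
    IP = {TOP}
    q = LevelQueue(max_level)
    enqueued = {TOP}
    q.enqueue(hierarchy[TOP][0], TOP)
    while not q.empty():
        v = q.dequeue()
        level, preds, succs = hierarchy[v]
        # predecessors_match: compare v only if *every* immediate predecessor
        # of v is already known to be a generalization of the query u
        if v != TOP and not all(p in generalizations for p in preds):
            continue
        if v != TOP and not subsumes(u, v):
            continue
        generalizations.add(v)
        IP -= set(preds)                  # v is a closer predecessor than its parents
        IP.add(v)
        for w in succs:
            if w not in enqueued:
                enqueued.add(w)
                q.enqueue(hierarchy[w][0], w)
    return IP
```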
E. Searching the Specialization or Solution Space
In the second phase Woods [16] searches one of the sub-hierarchies
of the immediate predecessors from the first
phase. Only one of the hierarchies needs to be traversed,
since any specialization of the query must also be a specialization
of a generalization of the query. When a specialization
is found it is added to the set IS and its subhierarchy
is removed from consideration. However, if a graph is incomparable
its subhierarchy must be traversed.
Levinson [6] devised a method of avoiding many of the
comparisons that are inherent in traversing a particular
subhierarchy in the second phase. Notice Wood's method
does not use any of the information about the other members
of IP . Levinson [6] noted that in the second phase for
any database graph to be a successor of the query graph it
must be in the intersection of the subhierarchies of the immediate
predecessors from the first phase. If subsumption
tests are relatively expensive compared to pointer traversal
involved in walking the subhierarchy, this is particularly
useful. The intersection is computed by traversing each of
the subhierarchies incrementing a counter for each graph.
For any graph to be in the intersection it must have a count
equal to the number of elements of IP . This intersection is
then traversed in the breadth-first manner used by Woods
above.
In the algorithm for immediate successors in Fig. 14 we
avoid this multiple traversal by computing the intersection
incrementally in one constrained topological search. The
algorithm uses the insight that for a graph to be in the intersection
of the subhierarchies of IP the graph must have
a path to each of those elements of IP . If each element of
the set IP is represented with a bit, the immediate successors
of elements of IP which have paths to all elements can
be determined by ORing the bit strings of their immediate
predecessors. By propagating this information we can restrict
subsumption testing to graphs that have all bits set
(in the intersection space). This algorithm also relies on
the level (topological) traversal implemented by the minimum
priority queue. The predicate IP_reachable(v) ORs
the bit strings of v's immediate predecessors and is true if
all bits are set.
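The bit-string bookkeeping reduces to integer bitmasks along the following lines; bit_of assigns one bit to each member of IP, and reach caches the masks of graphs already visited in the topological traversal (a hypothetical helper consistent with the description above).

```python
def ip_reachable(v, bit_of, hierarchy, reach):
    """True iff v lies below every graph in IP, i.e. all bits are set."""
    mask = bit_of.get(v, 0)
    for p in hierarchy[v][1]:             # OR the masks of v's immediate predecessors
        mask |= reach.get(p, 0)
    reach[v] = mask
    return mask == (1 << len(bit_of)) - 1
```

Since Python integers are unbounded, the same code covers any number of immediate predecessors.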
function immediate_successors(IP, u)
begin
  IS := {};
  for each v ∈ IP do
    for each w ∈ IS(v) do Q.enqueue(level(w), w);
  while not Q.empty() do
  begin
    v := Q.dequeue();
    if not seen(v) then
    begin
      if (IP_reachable(v) ∧ v ≤ u) then
      begin
        see_successors(v);
        IS := IS + {v}
      end
      else
        for each w ∈ IS(v) do Q.enqueue(level(w), w)
    end
  end;
  return IS
end
Fig. 14: Find the immediate successors of u given the immediate
predecessors
Notice that for each insert the "seen" information must
be reinitialised. This would mean the algorithm would perform
linearly in the size of the database in every case. This
can be avoided by using a token for each query. For a graph
to be seen it must have the same token as the current query.
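The token trick can be sketched in a few lines: each node stores the stamp of the last query that saw it, and bumping a counter invalidates every old mark at once (a hypothetical helper, not the paper's code).

```python
class SeenMarks:
    def __init__(self):
        self.token = 0
        self.stamp = {}                   # node -> token of the query that saw it

    def new_query(self):
        self.token += 1                   # all previous marks become stale

    def see(self, v):
        self.stamp[v] = self.token

    def seen(self, v):
        return self.stamp.get(v) == self.token
```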
If we consider the query graph u in the classification
problem as a query on the database of graphs in the hi-
erarchy, then solutions to the query would be everything
that implied the query: the specialization space. These
solutions can be listed by walking the subhierarchies of elements
of IS. The same support algorithms used in insert
for finding the immediate predecessors and immediate successors
can also be used for querying.
In the worst case these algorithms perform no better
than comparison of the query to each of the graphs in
the database. These methods are not suited to databases
where: there is little ordering information; or for total or-
ders, where the hierarchy is a chain. The methods are
suited for wide shallow hierarchies of data. We believe that
many of the domains that conceptual graphs are intended
to be used in do have this property. Woods [16] argues
that the typical-case complexity is logarithmic in the size
of the database, and Levinson [6] gives empirical evidence
to support this argument.
Levinson proved that his topological methods do less
comparison of graphs than previous known methods [6].
Levinson [6] also describes an indexing scheme which is a
hybrid of the above method. The method is particularly
useful for graphs with high degree of symmetry. Also see
[8] for its application to conceptual graphs.
We have shown how to prune the search within the
database down to the generalization and specialization space.
In the following section we show how to share matching
information gained from subsumption testing between related
graphs.
IV. Compilation of Conceptual Graphs
in the Generalization Hierarchy
How can the efficiency of querying the database be im-
proved? In the previous sections we saw a method for minimizing
the number of database graphs compared to the
query graph. In the following sections we look at minimizing
the cost of each of these comparisons. We examine how
to represent conceptual graphs in a generalization hierarchy
to improve individual subsumption tests.
Woods [16] states about his algorithm "No deep insights
have been exploited to gain efficiency. For example, in clas-
sification, no advantage is taken of what might be learned
in the course of one subsumption test that might be redundant
with part of another subsumption test."
Garner and Tsui [4] proposed representing graphs as differences
between adjacent graphs in the generalization hier-
archy. Fig. 15 illustrates how they stored the graph "A
girl, Sue, eating pie fast" as the difference from the adjacent
generalization "A girl eating food". The difference c1 -> "Sue"
means replace the referent of the concept c1
in u with Sue. The difference c2 -> "->(Manner)->[Fast]"
means to connect a new binary relation (Manner) to the
concept c1 in u and a new concept [Fast]. The difference
c3 -> "Pie" means replace the type of the concept c3 in u
with the type Pie.
[Fig. 15: Representing graphs as differences from a generalization: "A girl, Sue, eating pie fast" is stored under "A girl eating food" with the differences listed above.]
A difference between Garner and Tsui's method and the
method outlined below is that the former method places
the difference between adjacent graphs on the incident arc,
whereas the latter places differences between a graph and
all its immediate generalizations in the node representing
the graph. The graph differences in Garner and Tsui's
method are the nodes, arcs, and restrictions in the specialization
that are not in the generalization. The graph
differences are treated as data, reconstructing graphs by
traversing the arcs, and hence adding the graph differences.
Reconstructed graphs are then compared with the query
using a general matching algorithm. This method does not
compile the graphs into matching instructions, nor does it
share common computation in queries.
Storing graphs as differences fulfills our aim of removing
redundant data from the database. Another aim of
our method is to share common computation. This can be
achieved by storing mappings between the adjacent graphs,
in conjunction with the differences. As we will see the mappings
do not have to be stored explicitly, but can be com-
posed. This allows us to fulfill another aim: to represent the
differences in such a way that they may be used as instructions
for a future conceptual graph unification machine. A
graph may be compared to a query using the mappings of
generalizations into the query, and instructions which perform
small parts of the general matching operation relative
to these mappings. The following details this alternative
to Garner and Tsui's method.
Consider the query u in Fig. 6 on the generalization hierarchy
in Fig. 7. In the discussion below the notation
π_v→u represents the mapping π: v → u, where v and u
are graphs. Let us assume that a subgraph morphism
π_b→u of the graph b in Fig. 7 has been found in the query
u. In the search for solutions to the query u, the search
method, outlined in previous sections, takes paths through
the generalization hierarchy that contain more specialized
generalizations of the query at each step. The graph d is
the only choice in paths from b. To traverse this path d
must be compared to u to see if it is a generalization of
u. Can a full subsumption test be avoided? Notice that
the only difference between d and b is that the concept c3
with type Girl is restricted from the generic form to the
individual Sue.
To compute a match the mapping π_d→u must be com-
puted. Assume that the mapping π_b→d is stored in the
database and the generalization b has been mapped into
the query u, π_b→u. For every concept and relation x in d:
if x is in b then (x, π_b→u x) is in π_d→u;
otherwise find a match y for x in u that does not violate
the rest of the match, and insert (x, y) into π_d→u.
Here the mapping π_d→u is equal to the mapping π_b→u, since
d and b contain the same nodes and the mapping π_b→d is the identity mapping.
The difference could then be represented as: if
referent(π_d→u c3) =
Sue, then d is a generalization of u. In general, this is only
possible if there is no symmetry in the graphs involved, that
is, there are unique mappings between the graphs involved.
If this is not the case, then the differences must be applied
to each of the possible mappings. For many of the domains
in which conceptual graphs are used the graphs contain
unique morphisms.
Thus differences between graphs can be used, if mappings
between adjacent graphs (π_b→d in the previous ex-
ample) and the current generalization b and the query u
are kept. It is not necessary to store the mapping
between each adjacent graph explicitly. The mappings
are composed when traversing the generalization hierarchy.
The canonical formation rules construct the mapping when
constructing the graphs. The copy rule sets up a mapping
of the whole graph that was copied. The restrict rule does
not affect the mapping. The join rule computes the union
of the mappings of the two graphs being joined, it then
maps one of the identical concepts to the other. The simplify
rule maps a duplicate relation onto another.
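In code, composing the mappings amounts to ordinary dictionary composition plus a collapse step for join and simplify, roughly as follows (hypothetical helpers consistent with the description above).

```python
def compose(pi_v_to_w, pi_w_to_q):
    """Map nodes of v directly into the query by composing v->w with w->q."""
    return {x: pi_w_to_q[y] for x, y in pi_v_to_w.items()}

def collapse(pi, kept, removed):
    """After a join or simplify, the removed node is identified with the kept one."""
    return {x: (kept if y == removed else y) for x, y in pi.items()}
```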
Conceptual graphs in a generalization hierarchy can be
replaced with sets of applications of the canonical formation
rules. The instances of the rules apply to the immediate
generalizations of the graph being represented. Fig. 16
illustrates this method for the generalization hierarchy in
Fig. 7.
This method has the potential to reduce the cost of graph
comparison by sharing computation already done through
mappings between adjacent conceptual graphs, and also
has the potential to save space in storing conceptual graphs.
V. Instructions
We examine how to use the canonical formation rules
in differences between adjacent graphs in the generalization
hierarchy. Here we will concentrate on the first phase
of topological search: searching the generalization space.
The following discussion assumes that only graphs in their
canonical form are compared and stored.
[Fig. 16: Encoding conceptual graphs in a generalization hierarchy with canonical formation instructions: b and c are built from the basis graphs B1, B2, B3 by restrict, copy and fuse instructions; d is restrictRef(b, c3, Sue); e is restrict(c, c7, Girl); and f is derived by fuse(d, e, c3, c7), join(x9, c1, c5) and simplify(x10, r2, r4).]
In the first phase the aim is to find subgraph morphisms
of database graphs in the query. In the second phase the
aim is to find subgraph morphisms of the query in the
database graphs. In the first phase the database graphs
could be thought of as reading from the query graph. In
the second phase the database graphs write to the query
graph constructing specialized solutions. These modes correspond
to the modes of reading and writing in Prolog compiler
unification instructions [1].
Here we give a specialized interpretation of the canonical
formation rules based on the mode of operation: read
or write. We only examine the read mode here. The
graphs are reconstructed by the instructions, however we
only show the operations that construct the mapping between
the database graphs and the query q.
• copy(u, w) - Find some subgraph morphism π_w→q,
where w is an exact copy of u, which has been recon-
structed. The general matcher is used to find π_w→q.
• restrict(u, c, t, w) - if type(π_u→q c) ≤ t then π_w→q := π_u→q.
For the database graph w to be a generalization of the
query graph q, q must have a subtype of the type of
the corresponding concept in u.
• restrictRef(u, c, i, w) - if referent(π_u→q c) = i then π_w→q := π_u→q.
This instruction only handles restriction to individual
markers, rather than more complex referents, such as
nested graphs and sets. For q ≤ u to be true the query
must have the same individual marker i as the one
in the corresponding concept in the database graph u.
• fuse(u, v, c, d, w) / join(u, c, d, w) - if π_u→q c = π_v→q d (respectively π_u→q c = π_u→q d) then π_w→q is the combined mapping.
Joining concepts c and d of database graphs u and v
respectively in read mode means that c and d must already
be pointing at the same concept in the query graph
q.
• simplify(u, r, s, w) - if π_u→q r = π_u→q s then π_w→q := π_u→q.
Simplifying two duplicate relations in a database graph
in read mode means that the two relations must be
mapped to the same relation in the query, since the
query graph cannot contain duplicates as it is a minimal
graph.
These instructions can be separated into more specialized
cases. For example, if the input and output graph
are the same, then a new mapping is not constructed,
rather modifications to particular entries in the mapping
are made.
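The read-mode interpretation above can be rendered as a small dispatch loop. The tuple encoding of instructions, the stored mapping table maps, and the type_leq helper are assumptions carried over from the earlier sketches, not the paper's implementation.

```python
def run_read_mode(program, q, maps, parents):
    """Execute a database graph's instructions against the query q.  Returns False
    as soon as a check fails; otherwise the mapping of the target graph into q is
    left in maps.  maps[g] maps concept/relation ids of g to ids of q."""
    for ins in program:
        op = ins[0]
        if op == "restrict":                       # restrict(u, c, t, w)
            _, u, c, t, w = ins
            if not type_leq(q["concepts"][maps[u][c]][0], t, parents):
                return False
            maps[w] = maps[u]
        elif op == "restrictRef":                  # restrictRef(u, c, i, w)
            _, u, c, i, w = ins
            if q["concepts"][maps[u][c]][1] != i:
                return False
            maps[w] = maps[u]
        elif op == "fuse":                         # fuse(u, v, c, d, w)
            _, u, v, c, d, w = ins
            if maps[u][c] != maps[v][d]:
                return False
            maps[w] = {**maps[u], **maps[v]}
        elif op == "join":                         # join(u, c, d, w)
            _, u, c, d, w = ins
            if maps[u][c] != maps[u][d]:
                return False
            maps[w] = maps[u]
        elif op == "simplify":                     # simplify(u, r, s, w)
            _, u, r, s, w = ins
            if maps[u][r] != maps[u][s]:
                return False
            maps[w] = maps[u]
    return True

# e.g. the graph d above would be stored as [("restrictRef", "b", "c3", "Sue", "d")]
```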
In Fig. 16, conceptual graphs have been replaced with
instructions. Compare this representation with the generalization
hierarchy in Fig. 7. Fig. 6 contains the query
graph u and the generalization hierarchy in Fig. 7 contains
the solution f. Let us consider what happens in each stage
of the topological search of the generalization hierarchy for
the query u. We will examine the process in the middle of
the search where subgraph morphisms of b and c in u have
been found: π_b→u and π_c→u.
Now we look at b and c's adjacent graphs for generalizations
of u. The adjacent graphs are d and e. The graph d is
represented by restrictRef(b, c3, Sue, d). This instruction
translates into "if referent(- b!u
Sue, d is a generalization of u and - d!u := - b!u .
The graph e is represented by restrict(c, c7, Girl, e). This
instruction is implemented as: if type(- c!u c7) - Girl then
Girl, e is a generalization of u, and - e!u := - c!u .
Now we examine the adjacent graphs of d and e. The
only one in this case is f. The graph f is represented by
three instructions. The first instruction, fuse(d, e, c3, c7,
x9), means: if π_d→u c3 = π_e→u c7 then
we calculate π_x9→u as the union of π_d→u and π_e→u.
The second instruction is join(x9, c1, c5, x10): since
π_x9→u c1 = π_x9→u c5, we obtain π_x10→u := π_x9→u.
The third instruction in f is simplify(x10, r2, r4, f): since
π_x10→u r2 = π_x10→u r4, f is a generalization of u, with
π_f→u := π_x10→u. Compare this result with the graphs u and f in
Fig. 6 and Fig. 7, respectively.
Fig. 17 shows an alternative compilation based on canonical
derivations applied to a single parent and joining simple
basis relations. This approach follows a new formalization
of conceptual graph theory by Mugnier and Chein
[11]. This approach has some similarities with Garner and
Tsui's method of representation as differences [4].
VI. Experiments
[Fig. 17: Some linear derivations of graphs in the hierarchy in Fig. 7, each graph derived from a single parent by restrict and fuse instructions over the basis graphs.]
[Fig. 18: The generalization hierarchy of Fig. 7 compressed using differences from the Largest Parent.]
[Fig. 19: The generalization hierarchy of Fig. 7 compressed using differences from All Parents.]
[Fig. 20: Results of compressing chess files (table; columns: Database, Original File, Lempel Ziv Compressed, Largest Parent, All Parents, All Parents without Parents, Lempel Ziv Compressed All Parents; rows: A, A with basis, B, B with basis).]
[Fig. 21: Schemas from the Morph chess seed database: patterns over Piece concepts.]
The compilation methods above are still in the design
phase. To examine the usefulness of such methods in conceptual
graph databases we wrote some programs which
compressed conceptual graph databases by representing a
graph u as the differences between u and (i) u's largest
parent, (ii) all u's parents. Method (i), which we shall call
the Largest Parent method, is a best case in Garner and
Tsui's method. Method (ii) corresponds to the compilation
method shown in Fig. 16 which we will call All Parents
method. The main problem with All Parents method is
that a join of all parents may need to be stored. In the
tests below we did not store such joins.
The files that were tested in Fig. 20 were from the Morph
adaptive chess playing system [9]. The file A is a seed
database of 3104 patterns of the form shown in the schemas
in Fig. 21, where a Piece could be a WhitePawn, Black-
Pawn, and so on. Here an arc between
concepts represents support or attack depending on the colour
of the piece. The file B contains 1778 chess patterns learnt
by Morph similar to Fig. 22. The files "A with basis" and
"B with basis" include 42 basis graphs in addition to the
graphs in files A and B, respectively.
The column "Database" of Fig. 20 lists the names of
the files. The column "Original File" lists the size of each
ascii file containing the conceptual graphs in a conceptual
graph linear notation [14]. The column "Lempel Ziv Com-
pressed" column gives the size of the files when compressed
using a UNIX compression utility. The column "Largest
WhiteRook(Rank=>0, File=>0)
WhitePawn(Rank=>1, File=>1)
Fig. 22: A conceptual graph of a chess pattern
Parent" shows the size of each file when compressed using
the Largest Parent method of representing conceptual
graphs. The column "All Parents" shows the size of each
file when compressed using the All Parents method of representing
conceptual graphs. The column "All Parents
without Parents" stores the same differences as the All
Parents method, but leaves out the list of parents which
the differences refer to. The column "Lempel Ziv Compressed
All Parents" shows the size of the file generated by
the All Parents method after compression using a UNIX
compression utility.
The main columns to compare are "Largest Parent" and
"All Parents". The "All Parents" method results in smaller
files, even though more parents are referred to. The cost of
referring to the parents is the difference between "All Par-
ents" and "All Parents without Parents" columns. For ex-
ample, for database A listing parents cost 30534 bytes. In
all cases, the "All Parents" method resulted in smaller files
than the files using the "Largest Parent" method. The All
Parents method resulted in a compression ratio of between
2.56:1 and 3.84:1 in the four knowledge bases we tested.
Potentially a similar reduction could also be achieved in
information retrieval times.
VII. Summary
Compilation of conceptual graphs can be achieved by
storing them as derivations from immediate generalizations
in a directed acyclic graph representing the generalization
hierarchy partial order over conceptual graphs. A graph
can be inserted into the generalization hierarchy by computing
its immediate neighbourhood in the hierarchy, then
attaching the newly inserted graph to graphs in the neigh-
bourhood. The neighbourhood is computed by a two phase
topological search.
The canonical formation rules distinguish conceptual
graphs from other semantic network formalisms. They enforce
semantic constraints on the canonical graphs. Algorithms
to process them must be developed.
Conceptual graphs are compiled into instructions which
are special cases of the formation rules. The instructions
operate on immediate generalizations, and construct a mapping
between the immediate generalizations and the graph,
and hence the query graph during search. Common computation
involved in matching database graphs to the query
graph is shared through these mappings. Further, there
is a potential for store to be saved by storing these dif-
ferences. Compression of knowledge using differences has
been illustrated on some sample databases.
Compilation is effected in three ways: removal of redundant
data, use of simple instructions which ignore redundant
checks when performing matching, and by sharing
common processing between graphs.
In future work, we will examine methods for handling
complex conceptual graphs for use in such domains as chem-
istry. Levinson [7] has recently developed a new tuple and
skeleton-based compression technique called UDS. UDS is
based on a new compact representation of conceptual graphs
which makes storage and retrieval more efficient. UDS can
be extended so that processing in a hierarchical search can
be shared. Early work suggests that storing possible mappings
in matrix form between parents and children in the
database and combining mappings between parents and a
query and children using matrix multiplication to get first
approximations in more specific matches to queries may
more adequately propagate binding information gathered
in search within a conceptual graph database.
Acknowledgements
I thank my supervisors Peter Robinson and Robert Levin-
son. I thank Fritz Lehmann, Guy Mineau, and John Staples
for comments on earlier drafts of this paper. Fritz
Lehmann inspired me to revise an earlier version of this
paper.
--R
"Compiling conceptual graphs,"
"Sorting conceptual graphs."
"A self-organizing dictionary for conceptual structures,"
"Pattern associativity and the retrieval of semantic networks,"
"UDS: A Universal Data Structure,"
"Multi-Level Hierarchical Re- trieval,"
"Adaptive Pattern Oriented Chess,"
"Induction on conceptual graphs: Finding common generalizations and compatible projections,"
"Characterization and Algorithmic Recognition of Canonical Conceptual Graphs,"
The Existential Graphs of Charles S. Peirce
"Classification in the KL-ONE knowledge representation system,"
Conceptual Structures: Information Processing in Mind and Machine.
"Knowledge graphs versus conceptual graphs,"
"Understanding subsumption and taxonomy: A framework for progress,"
--TR
--CTR
Ahmad Kayed , Robert M. Colomb, Extracting ontological concepts for tendering conceptual structures, Data & Knowledge Engineering, v.40 n.1, p.71-89, January 2002
Vilas Wuwongse , Ekawit Nantajeewarawat, Declarative Programs with Implicit Implications, IEEE Transactions on Knowledge and Data Engineering, v.14 n.4, p.836-849, July 2002
Gian Piero Zarri, Ontologies and reasoning techniques for (legal) intelligent information retrieval systems, Artificial Intelligence and Law, v.15 n.3, p.251-279, September 2007
Ahmad Kayed , Robert M. Colomb, Using BWW model to evaluate building ontologies in CGs formalism, Information Systems, v.30 n.5, p.379-398, July 2005 | associative retrieval;partial orders;conceptual graphs;compilation;hierarchical knowledge bases |
627645 | Structuring Knowledge In Vague Domains. | AbstractIn this paper, we propose a model for structuring knowledge in vague and continuous domains where similarity plays a role in coming up with plausible inferences. The model consists of two levels, one of which is an inference network with nodes representing concepts and links representing rules connecting concepts, and the other is a microfeature-based replica of the first level. Based on the interaction between the concept nodes and microfeature nodes in the model, inferences are facilitated and knowledge not explicitly encoded in a system can be deduced via mixed similarity matching and rule application. The model is able to take account of many important desiderata of plausible reasoning and produces sensible conclusions accordingly. Examples will be presented to illustrate the utility of the model in structuring knowledge to enable useful inferences to be carried out in several domains. |
--R
A Connectionist Scheme For Modeling Context
Symbolic Logic and Mechanical Theorem Proving
The Logic of Plausible Reasoning: A Core Theory
Mundane reasoning by parallel constraint satisfaction
An Introduction to Possibilistic and Fuzzy Logics
Neural Representation of Conceptual Knowledge
Recognition of Semantically Incorrect Rules
Connectionist Expert Systems
Integrating Knowledge-based Systems and Neural Networks for Robotic Skill Acquisition
The Adaptive Brain
In defence of logic
Marker Passing and Microfeature
Multilayer feedforward networks are universal
Backpropagation learning in expert networks
Frame selection in a connectionist model
Principle of Reasoning
The Society of Mind
Probabilistic Reasoning in Intelligent Systems
the PDP Research Group
A Mathematical Theory of Evidence
Designing inference engines based on a discrete neural network model
Rules and Connectionism
The Discrete Neuronal Models
The Discrete Neuronal Models and the Discrete Neuronal Models
Chunking and Connectionism
Neurally Inspired Massively Parallel Model of Rule-Based Reasoning
A connectionist model of commonsense reasoning incorporating rules and similarities.
Beyond associative memories
The Mathematics of Inheritance
Refinement of approximate domain theories by knowledge-based neural networks
Features of Similarity
Similarity and Analogical Reasoning
Massively Parallel Parsing
Fuzzy Sets
Fuzzy Logic
An Introduction to Expert Systems
--TR
--CTR
Z. Ghalwash, A Recency Inference Engine for Connectionist Knowledge Bases, Applied Intelligence, v.9 n.3, p.201-215, November-December 1998
Samuel W. K. Chan, Integrating Linguistic Primitives in Learning Context-Dependent Representation, IEEE Transactions on Knowledge and Data Engineering, v.13 n.2, p.157-175, March 2001 | knowledge-based systems;vagueness;artificial intelligence;reasoning;knowledge representation;neural networks |
627655 | Chain-Split Evaluation in Deductive Databases. | AbstractMany popularly studied recursions in deductive databases can be compiled into one or a set of highly regular chain generating paths, each of which consists of one or a set of connected predicates. Previous studies on chain-based query evaluation in deductive databases take a chain generating path as an inseparable unit in the evaluation. However, some recursions, especially many functional recursions whose compiled chain consists of infinitely evaluable function(s), should be evaluated by chain-split evaluation, which splits a chain generating path into two portions in the evaluation: an immediately evaluable portion and a delayed-evaluation portion. In this paper, the necessity of chain-split evaluation is examined from the points of view of both efficiency and finite evaluation, and three chain-split evaluation techniques: magic sets, buffered evaluation, and partial evaluation are developed. Our study shows that chain-split evaluation is a primitive recursive query evaluation technique for different kinds of recursions, and it can be implemented efficiently in deductive databases by extensions to the existing recursive query evaluation methods. | Introduction
Many popularly studied recursions in deductive databases can be compiled into one or a set of highly
regular chain forms [8, 9, 21]. Interesting recursive query evaluation techniques [2], such as transitive closure
algorithms [10], magic sets and counting [1], can be applied to the efficient evaluation of compiled chains in
deductive databases. However, it is interesting to observe that some recursions, especially many recursions
containing function symbols, may often be evaluated appropriately by a different evaluation technique: chain-
split evaluation.
Like many researchers [2, 21], we assume that a deductive database consists of three parts: (i) an extensional
database (EDB) (a set of data relations), (ii) an intensional database (IDB) (a set of Horn-clause rules), and (iii)
a set of integrity constraints (ICs).
Definition 1.1 A predicate s is said to imply a predicate r (s ⇒ r) if there is a Horn clause in IDB with
predicate r as the head and s in the body, or there is a predicate t such that s ⇒ t and t ⇒ r (transitivity). A
predicate r is recursive if r ⇒ r. If r ⇒ s and s ⇒ r, then r and s are mutually recursive and are at the same
deduction level. Otherwise, if r ⇒ s but not s ⇒ r, then r is at a lower deduction level than s.
Definition 1.2 A rule is linearly recursive if its body contains exactly one recursive predicate, and that predicate
is defined at the same deduction level as that of the head predicate. A rule is nested linearly recursive
if its body contains more than one recursive predicate but there is only one defined at the same deduction level as
that of the head predicate. A rule is nonlinearly recursive if it is recursive but it does not belong to the above
two categories.
Definition 1.3 A recursion is (single) linear if all of its recursive predicates are at the same deduction level
and every recursive predicate is defined by one linearly recursive rule and possibly some nonrecursive (exit) rules.
A recursion is multiple linear if all of its recursive predicates are at the same deduction level and every recursive
This work was supported in part by the Natural Sciences and Engineering Research Council of Canada under the grant OPG-
3723 and a research grant from the Centre for Systems Science of Simon Fraser University. A preliminary version of the paper
appeared in the Proceedings of the 8th International Conference on Data Engineering, Tempe, AZ, February 1992.
† The author is with the School of Computing Science, Simon Fraser University, Burnaby, B.C., Canada V5A 1S6
predicate is defined by one or more linearly recursive rules (but at least one is defined by multiple linearly recursive
rules) and possibly some nonrecursive rules. A recursion is nested linear if every recursive predicate in the
recursion is defined by one linearly or nested linearly recursive rule (but at least one is defined by a nested linearly
recursive rule) and possibly some nonrecursive rules. A recursion is nonlinear if it contains some nonlinearly
recursive rule(s). A recursion is function-free if it does not contain function symbols; otherwise, it is function-
bearing, or functional.
Example 1.1 The rule set {(1.1), (1.2)} defines a popular function-free linear recursion, sg, which indicates that X and Y are same-generation relatives if they are siblings or their parents are same-generation relatives. The notations adopted here are similar to Datalog [21].
sg(X, Y) ← sibling(X, Y). (1.1)
sg(X, Y) ← parent(X, X1), sg(X1, Y1), parent(Y, Y1). (1.2)
The recursion can be compiled into a highly regular compiled chain form [9] as shown in (1.3),
sg(X, Y) = ⋃_{i=0}^{∞} ( parent^i(X, Xi), sibling(Xi, Yi), parent^i(Y, Yi) ), (1.3)
where ⋃ denotes disjunction, and parent^i(X, Xi) denotes a sequence of i parent predicates: parent^i(X, Xi) = true (with Xi = X) if i = 0, and parent^i(X, Xi) = parent(X, X1), ..., parent(Xi−1, Xi) if i > 0. □
Definition 1.4 A chain of length k (k ≥ 1) is a sequence of k predicates with the following properties: (1) all predicates have the same name, say p, and the l-th p of the chain is denoted as p^(l); (2) there is at least one identical variable in every two consecutive predicates, and if i is the variable position in the first predicate, j the variable position in the second, and the variables at the two positions are identical, then the i-th variable of p^(l) is identical to the j-th variable of p^(l+1) for every l where 1 ≤ l ≤ k − 1. Each predicate p^(l) is called a chain predicate, or a chain generating path if it consists of a sequence of connected predicates (i.e., predicates which contain shared variables). A unit-length chain is trivially a chain generating path, and a 0-length chain is defined as a tautology.
Definition 1.5 A linear recursion is an n-chain recursion if for any positive integer K, there exists a k-th expansion of the recursion consisting of one chain (when n = 1) or n synchronous (of the same length) chains (when n > 1), each with the length greater than K, and possibly some other predicates which do not form a nontrivial chain. It is a single-chain recursion when n = 1, or a multi-chain recursion otherwise. A recursion is bounded if it is equivalent to a set of nonrecursive rules.
A compiled n-chain recursion can be rewritten into the form of a normalized linear recursion [9], which consists of a set of exit rules and one normalized recursive rule in the form of (1.4), where Xi and Yi (for 1 ≤ i ≤ n) are variable vectors, and each ci (for 1 ≤ i ≤ n) is a chain predicate. Notice that a chain predicate ci for some i may be null in the sense that there is no ci predicate, and Yi is an exit variable if ci is null; otherwise, it is a chain variable for chain predicate ci.
The rule set {(1.1), (1.2)} is in the normalized form. Recursive rules with complex variable connections can
be normalized by a compilation process [9]. Normalization greatly facilitates systematic analysis of recursions on
their binding propagation and other regularities.
Previous studies [8, 9] show that a linear recursion can be compiled into a bounded recursion or an n-chain
recursion, and many other kinds of recursions can also be compiled into chain forms. A compiled chain form
can be viewed alternatively as one or a set of normalized recursions. This study is focused on the chain-split
evaluation of compiled or normalized recursions.
1.1 Chain-split for efficient evaluation
Usually, a single-chain recursion is evaluated efficiently by a transitive closure algorithm [10], and a multi-chain
recursion by magic sets or counting [1, 2]. One may have wondered whether queries on multi-chain recursions can
be evaluated efficiently by merging multiple chain generating paths into one and then applying transitive closure
algorithms [11]. However, since such multiple paths do not share variables, merging them implies iterative
processing on the cross-product(s) of several relations, each corresponding to a path. It is terribly inefficient to
perform iterative evaluation on the cross-product of two or more database relations [14].
In contrast to merging multiple chains, one may split a chain into multiple chains in the evaluation. Such
a split implies that an n-chain recursion will be evaluated by a more sophisticated (n + 1)-chain
evaluation technique. Can chain-split improve the performance of query evaluation? We examine an example.
Example 1.2 Suppose the recursion scsg (same-country same-generation relatives) is defined by the rule set
f(1.5), (1.6), (1.7)g. The definition is similar to sg [2] except that the parents of each pair of scsg must be born
in the same country.
scsg(X, Y) ← parent(X, X1), same_country(X1, Y1), parent(Y, Y1), scsg(X1, Y1). (1.5)
scsg(X, Y) ← sibling(X, Y). (1.6)
same_country(X, Y) ← birth_country(X, W), birth_country(Y, W). (1.7)
merged_parents(X, Y, X1, Y1) ← parent(X, X1), same_country(X1, Y1), parent(Y, Y1). (1.8)
scsg(X, Y) = ⋃_{i=0}^{∞} ( merged_parents^i(X, Y, Xi, Yi), sibling(Xi, Yi) ). (1.9)
Since same country connects two parent-predicates in (1.5), the three predicates can be merged into one,
merged parents, as shown in (1.8), and the compilation derives a single-chain as shown in (1.9). Because the
same_country predicate provides only a very weak restriction, the relation merged_parents is not much smaller than the cross-product of two parent relations. Obviously, it would be more efficient to split merged_parents into two subchains
in the evaluation of query (1.10).
The magic sets method encounters the same problem on this recursion. Since same country links two parent-
predicates in the body of the recursive rule, the binding propagation merges all the nonrecursive predicates
into one [21], and the derivation of magic sets requires iterative computation on the cross-product-like relation,
merged_parents. This can be easily seen from the adorned rules (1.11) and (1.12) [2]. □
scsg^bf(X, Y) ← parent^bf(X, X1), same_country^bf(X1, Y1), parent^fb(Y, Y1), scsg^bb(X1, Y1). (1.11)
scsg^bb(X, Y) ← parent^bf(X, X1), same_country^bf(X1, Y1), parent^bb(Y, Y1), scsg^bb(X1, Y1). (1.12)
1.2 Chain-split for finite evaluation
For recursions containing functions or evaluable predicates, chain-split evaluation may play another important
role: transforming an infinitely evaluable program into a finitely evaluable one.
To facilitate the analysis of functional recursions, a function-predicate transformation is performed which
maps a function together with its functional variable to a predicate (called functional predicate), where the
functional variable is the variable which unifies the returned value(s) of the function. That is, each function of
arity n is transformed to a predicate of arity n + 1, with the last argument representing the functional variable. For example, Y = f(X1, ..., Xn) is transformed to f(X1, ..., Xn, Y). A similar transformation has also been discussed by other researchers [12, 15, 17].
Since the transformation maps a functional logical rule to a function-free one, the analysis of a functional
recursion can be performed in the framework of a function-free one. Notice that the transformation converts
constructors to predicates. Since constructors mainly serve as constraints in the unification process, and the
transformation merely delays such constraint solving (in unification), the transformation is theoretically sound.
However, a transformed functional predicate usually represents a potentially infinite relation constructible by the
corresponding term/list construction function, such as cons, etc., or computable by the corresponding computational
function, such as sum, etc. Such a relation cannot be represented by a finite EDB relation. Thus the
evaluation of a functional predicate still relies on its corresponding function definition.
To facilitate the compilation and analysis of logic programs, rules in different forms should be rectified [21].
The rules for a predicate p are rectified if all the functions in the rules are mapped to the corresponding functional
predicates by the function-predicate transformation, and all the heads of the rules are identical and of the form p(X1, ..., Xn).
Example 1.3 A functional linear recursion, append, is defined by the rule set (1.13) and (1.14), where [X|L1] denotes a list construction function, or a corresponding functional predicate, cons(X, L1, L), which represents that a resulting list L is formed by taking X as the head and L1 as the rest of the resulting list.
append([], V, V). (1.13)
append([X|L1], V, [X|L2]) ← append(L1, V, L2). (1.14)
The rule set can be rectified into {(1.15), (1.16)} and compiled into (1.17) [9], where cons is the functional predicate for the list construction function "[]". Notice that the rectified rule set is also the normalized rule set for this recursion [9].
append(U, V, W) ← U = [], W = V. (1.15)
append(U, V, W) ← cons(X1, U1, U), append(U1, V, W1), cons(X1, W1, W). (1.16)
append(U, V, W) = ⋃_{i=0}^{∞} ( cons^i(Xi, Ui, U), (Ui = [], Wi = V), cons^i(Xi, Wi, W) ), (1.17)
where cons^i(Xi, Ui, U) = true (with Ui = U) if i = 0, and cons^i(Xi, Ui, U) = cons(X1, U1, U), ..., cons(Xi, Ui, Ui−1) if i > 0, and similarly for cons^i(Xi, Wi, W).
Since two cons predicates are connected in the body of (1.16), they can
be merged into one, merged cons, as shown in (1.18).
merged_cons(X1, U1, U, W1, W) ← cons(X1, U1, U), cons(X1, W1, W). (1.18)
When both U and W are instantiated in a query, such as "? − append([a, b], V, [a, b, c])", the iterative evaluation on the merged_cons proceeds successfully. However, if one of U and W is not instantiated, the evaluation on the merged_cons cannot proceed since it will encounter infinitely evaluable predicates. Take query "? − append([a, b], [c], W)" as an example. In the evaluation of the first chain generating path, "cons(X1, U1, U), cons(X1, W1, W)", the first cons, "cons(X1, U1, U)", is finitely evaluable with the instantiation U = [a, b]. It derives "X1 = a" and "U1 = [b]". Unfortunately, the second cons, "cons(X1, W1, W)", is not finitely evaluable with the only instantiation X1 = a. However, if the chain predicate merged_cons is split into two sub-chains, "cons(X1, U1, U)" and "cons(X1, W1, W)", the first sub-chain can be evaluated first, the result can be passed via the body of the exit rule (1.15) to instantiate the second argument of the second sub-chain, and so on. Thus, the recursion is finitely evaluable by chain-split evaluation. □
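To make the finiteness argument concrete, the following is a minimal Python sketch (our own illustration, not part of the paper or of any deductive database system) of the cons functional predicate as a finite generator of bindings: with the whole list bound it yields at most one binding, whereas with only the head bound it would have to enumerate an infinite relation, which is exactly why the second cons must be delayed.

```python
# A minimal sketch (not from the paper) of the cons functional predicate as a
# generator of satisfying bindings.  With the list argument L bound, the
# predicate yields at most one binding; with L free it would have to enumerate
# an infinite relation, which is why chain-split delays its evaluation.

def cons(x, rest, whole):
    """cons(X, L1, L): L = [X | L1].  Arguments may be None (free)."""
    if whole is not None:                    # L bound: finitely evaluable (Z -> (X, Y))
        if not whole:
            return []                        # the empty list cannot be split
        head, tail = whole[0], list(whole[1:])
        if (x is None or x == head) and (rest is None or list(rest) == tail):
            return [(head, tail, list(whole))]
        return []
    if x is not None and rest is not None:   # (X, Y) -> Z: also finite
        return [(x, list(rest), [x] + list(rest))]
    raise ValueError("cons is not finitely evaluable with this binding pattern")

print(cons(None, None, ['a', 'b']))   # finite: [('a', ['b'], ['a', 'b'])]
print(cons('a', ['b'], None))         # finite: [('a', ['b'], ['a', 'b'])]
# cons('a', None, None) would raise: only X1 bound, as for the second cons
# in the query "? - append([a, b], [c], W)".
```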
Since chain-split evaluation may lead to efficient and/or finite query evaluation, it is worthwhile to study the
chain-split evaluation techniques. The remainder of the paper is organized as follows. The conditions under which a
query requires chain-split evaluation are examined in Section 2. The techniques for chain-split evaluation are
studied in Section 3. An extension to the chain-split evaluation techniques for complex classes of recursions is
examined in Section 4. Our discussion is summarized in Section 5.
2 When Should Chain-Split Evaluation Be Applied?
Section 1 shows that chain-split may sometimes lead to efficient and/or finite evaluation. It is interesting to examine under what conditions chain-split evaluation should be applied. Our discussion is based on the analysis of two different kinds of chain-split: efficiency-based chain-split and finiteness-based chain-split.
2.1 Efficiency-based chain-split
When a chain generating path contains neither functions nor evaluable predicates, chain-split evaluation should
be performed if the split may lead to more efficient query evaluation plans than evaluating all the components
of a chain together (i.e., chain-following). In general, such a decision should be made based on the quantitative
analysis of competitive query evaluation plans (such as chain-following vs. chain-split) based upon the size of
potential intermediate relations, the available accessing paths, cost estimation functions and database statistics
[13, 18].
The following quantitative measurements are introduced in our discussion.
Definition 2.1 The propagation ratio, α_{X→W}, in relation P(X, W) is defined as the ratio of the number of distinct values in the attribute W (denoted as nW) over that in the attribute X (denoted as nX) in the data relation P. That is,
α_{X→W} = nW / nX. (2.1)
The join expansion ratio, β_{XY}, for a join expression p(X, W) ⋈ q(W, Y), where W is the join attribute (the variable vector shared by both predicates p and q), is the potential number of distinct ⟨X, Y⟩ pairs which can be generated from each (distinct) W in the join.
In general, we have
β_{XY} = α_{W→X} × α_{W→Y}. (2.2)
The formula (2.2) is derived based on the following reasoning: one distinct W value corresponds on average to α_{W→X} distinct X's in relation P and α_{W→Y} distinct Y's in relation Q, and W is the join attribute of relations P and Q. The join of the two relations pairs all the distinct X's and Y's according to the definition of join. Thus, the potential number of distinct ⟨X, Y⟩ pairs which can be generated from each W in the join should be α_{W→X} × α_{W→Y}. Notice that this does not imply that the number of tuples of the relation π_{XY}(P ⋈ Q) will be nW × β_{XY}, because different W's may share the same ⟨X, Y⟩ pairs. Nevertheless, β_{XY} is a good indicator of the approximate size of the join relation.
Example 2.1 The predicate same_country(X, Y) is defined in Example 1.2 as below,
same_country(X, Y) ← birth_country(X, W), birth_country(Y, W).
Suppose the corresponding data relation for birth_country(X, W) is B(X, W). Let nX be 100,000 and nW be 50. The propagation ratio α_{W→X} = nX / nW = 2,000. The join expansion ratio β_{XY} = α_{W→X} × α_{W→Y} = 2,000 × 2,000 = 4,000,000. This indicates that the join of the two predicates "birth_country(X, W)" and "birth_country(Y, W)" may be expected to generate about 4 million tuples for each distinct W. One cannot expect that such weak binding propagation may lead to efficient processing. □
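The two ratios of Definition 2.1 are simple to compute from base relations. The following Python sketch (ours; the relation, attribute positions, and data are illustrative) reproduces the style of the calculation in Example 2.1 on a toy birth_country relation.

```python
# A small sketch (not from the paper) computing the propagation ratio and the
# join expansion ratio of Definition 2.1 over relations stored as tuple lists.

def propagation_ratio(relation, src, dst):
    """alpha_{src->dst} = (# distinct dst values) / (# distinct src values)."""
    n_src = len({t[src] for t in relation})
    n_dst = len({t[dst] for t in relation})
    return n_dst / n_src

def join_expansion_ratio(p, q, join_p, join_q, x, y):
    """beta_{XY} for p join q on W: alpha_{W->X} * alpha_{W->Y}  (formula (2.2))."""
    return propagation_ratio(p, join_p, x) * propagation_ratio(q, join_q, y)

# birth_country(Person, Country): 6 persons, 2 countries -> alpha_{W->X} = 3.
B = [("ann", "ca"), ("bob", "ca"), ("cory", "ca"),
     ("dana", "us"), ("eli", "us"), ("fay", "us")]
print(propagation_ratio(B, src=1, dst=0))        # 3.0
print(join_expansion_ratio(B, B, 1, 1, 0, 0))    # 9.0 distinct pairs per country
```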
In general, suppose that in the compiled form of an n-chain recursion (2.3), one chain generating path is in the form of "p(X0, W), q(W, Y0)", where the two predicates p and q are connected via a set of predicates (sharing the variable W), and each chain generating path has a pair of variables, such as X_{i−1} and X_i, linking to the corresponding variables of the consecutive chain generating paths in the chain.
As a notational convention, P represents the relation for the predicate p, and |P| the size of a relation P, measured by the number of tuples in the relation, etc. We examine when and how the chain generating path should be split in the evaluation.
For the efficiency-based chain-split, we have the following heuristic.
Heuristic (efficiency-based chain-split). The following evaluation strategy should be adopted in the evaluation of a chain generating path, "p(X0, W), q(W, Y0)", in the compiled form (2.3).
• Case 1: No chain-split evaluation should be performed if
1. the chain is being evaluated after the evaluation of the exit portion (the body of the exit rule);
2. the instantiations of X0 and Y0 are both highly selective; or
3. β_{X0 Y0} ≤ 1.
• Case 2: Otherwise, chain-split evaluation should be performed if X0 or Y0 is highly selective, and β_{X0 Y0} ≫ 1.
• Case 3: Otherwise, perform a detailed cost analysis to determine whether the chain-split benefits the evaluation.
Rationale. Chain-split evaluation splits the chain into two (connected) subchains, with one evaluated first and the
other buffered until the evaluation of the exit portion passes more bindings to the buffered chain. Obviously, no
chain-split should be performed if the chain is a down-chain (i.e., the chain is being evaluated after the evaluation
of the exit portion) [2].
Suppose the evaluation starts at a chain with the path "(p, q)", with X0 instantiated, and proceeds towards the exit portion e and then the other chains in the compiled recursion. For efficient evaluation, it is important to examine the size of the chain relation, "P(X0, W) ⋈ Q(W, Y0)". When the size of (i.e., the number of tuples in) the chain relation is small (Case 1), splitting the chain cannot be beneficial; similarly when both X0 and Y0 are highly selective. If the evaluation of the entire chain generating path generates a very large relation (when only one of X0 and Y0 is highly selective and β_{X0 Y0} ≫ 1: Case 2), such as the merged_parents relation in Example 1.2, chain-split evaluation should be performed because the evaluation of the split chain, such as parent(X, X1) alone, will lead to relatively efficient evaluation. Otherwise, it is not obvious which method (chain-split or chain-following) is more efficient (Case 3), and a detailed cost estimation should be performed to compare the approximate size and cost for the evaluation of "P ⋈ Q" as a whole vs. that of the split subchains evaluated separately. This can be accomplished by a quantitative analysis of the two expressions with the incorporation of the available accessing structures and database statistics, etc. [13]. □
The heuristic indicates that it is easy to judge in some obvious cases whether a chain-split evaluation should be
applied based on the join expansion ratio and the selectivity of the provided query constants. However, detailed
quantitative analysis should be performed for most non-obvious cases. Such an analysis is similar to the query
plan generation and access path selection developed in the studies of relational and deductive query processing
[21, 13], which is not to be presented in detail in this study.
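As a rough illustration, the heuristic can be encoded as a three-way decision, as in the Python sketch below (our own encoding, not the paper's): the notion of "highly selective" is passed in as a boolean, and the numeric threshold standing in for "β ≫ 1" is an assumed tuning parameter.

```python
# One possible encoding (ours) of the efficiency-based chain-split heuristic as
# a three-way decision.  The selectivity tests and the detailed cost analysis
# of Case 3 are abstracted away.

def chain_split_decision(is_down_chain, x0_selective, y0_selective, beta,
                         split_threshold=100.0, costs=None):
    """costs: optional (cost_chain_following, cost_chain_split) estimates,
    consulted only in the non-obvious Case 3."""
    # Case 1: no chain-split evaluation.
    if is_down_chain or (x0_selective and y0_selective) or beta <= 1:
        return "chain-following"
    # Case 2: exactly one selective end and a strongly expanding join (beta >> 1).
    if (x0_selective or y0_selective) and beta >= split_threshold:
        return "chain-split"
    # Case 3: fall back to a detailed cost analysis.
    if costs is not None:
        return "chain-split" if costs[1] < costs[0] else "chain-following"
    return "needs cost analysis"

# merged_parents in Example 1.2: X0 (the query constant john) is selective,
# Y0 is not, and beta > 4,000,000  ->  chain-split.
print(chain_split_decision(False, True, False, 4_000_000.0))
```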
2.2 Finiteness-based chain-split
In a compiled functional recursion, a chain generating path may contain functions or evaluable predicates defined
on infinite domains. To ensure that the evaluation generates all the answers and terminates, three issues should
be examined: (1) finite evaluability, that is, the evaluation of every i-th formula in the compiled form generates
finite intermediate relations, (2) chain-level finite evaluability, that is, the evaluation of a chain generating
path generates finite intermediate relations, and (3) termination, that is, the evaluation generates all the answers
and terminates at a finite number of iterations. The finiteness-based chain-split is based on the analysis of the
first two issues.
The justification of finite evaluability relies on both query information and finiteness constraints. A finiteness constraint X → Y in a predicate r implies that each value of attribute X corresponds to a finite set of Y values in r [6]. The finiteness constraint is strictly weaker than the functional dependency studied in database theory [21]. It holds trivially for all finite predicates. Since all the EDB relations are finite, all the arguments in EDB relations satisfy the finiteness constraint. In a functional predicate f(X1, ..., Xn, V), if all the domains for the arguments X1, ..., Xn are finite, V must be finite no matter whether f is a single- or a multiple-valued function, that is, (X1, ..., Xn) → V.
Specific finiteness constraints should be explored for specific functions. In many cases, one argument of a
function can be computed from the values of the other arguments and the value of the function. For example, in the
functional predicate sum(X;Y;Z), any argument can be finitely computed if the other two arguments are finite.
Such a relationship can be represented by a set of finiteness constraints, such as "(X, Z) → Y" and "(Y, Z) → X". An interesting finiteness constraint, "Z → (X, Y)", holds in the functional predicate "cons(X, Y, Z)", which indicates that if the list Z is finite, there is only a finite number of choices of X and Y.
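For later use in the propagation algorithms, such constraints can be recorded as data. The following Python fragment (an illustrative encoding of ours, with made-up names) lists the finiteness constraints of sum and cons as (source-arguments, derived-arguments) pairs.

```python
# Illustrative encoding (not from the paper) of finiteness constraints as
# (source_arguments, derived_arguments) pairs per functional predicate.
# Argument positions are named for readability.

FINITENESS_CONSTRAINTS = {
    # sum(X, Y, Z): Z = X + Y; any argument is finite given the other two.
    "sum":  [({"X", "Y"}, {"Z"}), ({"X", "Z"}, {"Y"}), ({"Y", "Z"}, {"X"})],
    # cons(X, Y, Z): Z = [X | Y]; both directions hold.
    "cons": [({"X", "Y"}, {"Z"}), ({"Z"}, {"X", "Y"})],
}
```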
Since query constants may bind some infinite domains of variables to finite ones, the analysis of finite evalua-
bility should incorporate query instantiation information. Similar to the notations used in the magic sets transformation
[2, 21], a superscript b or f is used to adorn a variable to indicate that the variable is bound (finite) or free, and a string of b's and f's is used to adorn a predicate to indicate the bindings of its corresponding
arguments.
Algorithm 2.1 Testing the finite evaluability of a query in an n-chain recursion.
Input: (1) An n-chain recursion consisting of an n-chain recursive rule and a set of exit rules, (2) a set of finiteness
constraints, and (3) query instantiation information.
Output: An assertion of whether the query is finitely evaluable.
• Initialization: A variable is finite if it is in an EDB predicate or is equivalent to one or a set of constants.
• Test the finite evaluability of (1) the exit rule set, and (2) the first expanded exit rule set (the rule set obtained by unifying the n-chain recursive rule with the exit rule set). This is done by pushing the query binding information into the rules being tested and propagating the finiteness bindings iteratively based on the following two finiteness propagation rules:
1. if there is a finiteness constraint "(X1, ..., Xk) → Y" for a predicate in the rule and X1, ..., Xk are all finite, then Y is finite;
2. if a variable is equated (unified) with a finite variable or a constant, then it is finite.
• Return yes if every variable in the two sets of rules being tested is finite after the finiteness binding propagation, or no otherwise.
Remark 2.1 Algorithm 2.1 correctly tests the finite evaluability of an n-chain recursion in O(k²) time in the worst case, where k is the number of predicates in the recursion.
Rationale. By initialization and query constant propagation, the variables in the EDB predicates or those equivalent
to one or a set of constants (including query constants and constants in the body of the rule) are finite.
Propagate the finiteness bindings in the body of the (recursive or exit) rule according to the two finiteness propagation
rules in the algorithm. If every variable in a predicate p i is finite by such a propagation, p i is removed
from the list of predicates to be tested. At each iteration, at least one such predicate will be removed from the
list of predicates to be tested. Otherwise, the rule is not finitely evaluable. Since there are initially k predicates
in the body of the rule, the second iteration will need to test at most k − 1 predicates, and so on; the total number of predicates to be tested in the worst case is Σ_{i=0}^{k−1} (k − i) = k(k + 1)/2. Thus the worst-case time complexity of the algorithm is O(k²). Notice that when both the recursive rules and the exit rules are finitely evaluable, the recursion is finitely evaluable by induction (since the i-th iteration may treat the facts derived in the previous (i − 1) iterations as a finite base relation in its derivation). □
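The core of Algorithm 2.1, the fixpoint propagation of finiteness bindings over a rule body, can be sketched in Python as follows (our own encoding of rules and constraints, not the paper's; the equality-propagation rule is represented as a pair of constraints on the built-in predicate "="). The example run tests the first expanded exit rule of append under the binding pattern ffb discussed in Example 2.2 below.

```python
# Sketch of the finiteness-binding propagation of Algorithm 2.1 (our own
# encoding).  A rule body is a list of atoms; each atom carries the finiteness
# constraints that hold for it, keyed by argument name.

def finitely_evaluable(body, initially_finite):
    """body: list of (pred_name, {arg_name: var}, constraints) triples, where
    constraints is a list of (source_args, derived_args) pairs.
    initially_finite: variables bound by the query, by EDB predicates, or
    equated to constants."""
    finite = set(initially_finite)
    changed = True
    while changed:                           # iterate to a fixpoint
        changed = False
        for _pred, args, constraints in body:
            for sources, derived in constraints:
                if all(args[a] in finite for a in sources):
                    new = {args[a] for a in derived} - finite
                    if new:
                        finite |= new
                        changed = True
    all_vars = {v for _p, args, _c in body for v in args.values()}
    return all_vars <= finite

# First expanded exit rule of append for the query "? - append(U, V, [a, b])":
CONS = [({"X", "Y"}, {"Z"}), ({"Z"}, {"X", "Y"})]
EQ = [({"X"}, {"Y"}), ({"Y"}, {"X"})]
body = [
    ("cons", {"X": "X1", "Y": "U1", "Z": "U"}, CONS),
    ("=",    {"X": "U1", "Y": "NIL"}, EQ),     # U1 = []  (NIL stands for the constant [])
    ("=",    {"X": "W1", "Y": "V"}, EQ),       # W1 = V
    ("cons", {"X": "X1", "Y": "W1", "Z": "W"}, CONS),
]
print(finitely_evaluable(body, {"W", "NIL"}))  # True: binding pattern ffb
```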
Algorithm 2.2 Finiteness-based chain-split for a chain generating path in an n-chain recursion.
Input: (1) A chain generating path in a compiled n-chain recursion, (2) a set of finiteness constraints, and (3)
query instantiation information.
Output: A finiteness-based chain-split evaluation plan for a compiled chain in an n-chain recursion.
• Application of Algorithm 2.1: The query is not finitely evaluable if Algorithm 2.1 returns no. Otherwise,
proceed to the following steps.
• Initialization: A variable is finite if (i) it is in an EDB predicate, or (ii) it is equivalent to one or a set of
constants.
• Propagation of the finiteness bindings on the chain generating path according to the same two finiteness
propagation rules as in Algorithm 2.1. If every variable in the chain generating path is finite after the
finiteness binding propagation, it is chain-level finitely evaluable. Otherwise, the path is split into two,
A-portion and B-portion. The former consists of the set of predicates in which every variable is finite; and
the latter consists of the remaining set of predicates in the chain generating path. The chain-split evaluation
should be performed by first evaluating the sub-chain formed by the A-portion, and then the B-portions
after evaluating the exit portion.
Remark 2.2 Algorithm 2.2 determines correctly whether a chain-split evaluation should be performed based on
finite evaluability and, if it should, how the chain-generating path should split.
Rationale. If a query is not finitely evaluable, no iterative evaluation should be performed. Thus, step 1 is
necessary. When a query is finitely evaluable but a chain generating path is not, the chain generating path should
be split into two portions: the immediately evaluable portion and the buffered portion. After the evaluation of
the evaluable portion of the chain and the exit portion, the binding so obtained must make the buffered portion
finitely evaluable (otherwise, the query is not finitely evaluable). Thus, we have the above algorithm. 2
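A possible rendering of the partition step of Algorithm 2.2 is sketched below in the same encoding (ours, not the paper's): the atoms whose variables all become finite at the fixpoint form the A-portion, and the remaining atoms form the B-portion.

```python
# Possible partition step of Algorithm 2.2 (our own encoding): split a chain
# generating path into the immediately evaluable A-portion and the buffered
# B-portion, using the same fixpoint of finiteness bindings as in the sketch
# of Algorithm 2.1 above.

def propagate(path, finite):
    finite, changed = set(finite), True
    while changed:
        changed = False
        for _pred, args, constraints in path:
            for sources, derived in constraints:
                if all(args[a] in finite for a in sources):
                    new = {args[a] for a in derived} - finite
                    if new:
                        finite |= new
                        changed = True
    return finite

def split_chain(path, initially_finite):
    finite = propagate(path, initially_finite)
    a_portion = [atom for atom in path if set(atom[1].values()) <= finite]
    b_portion = [atom for atom in path if not set(atom[1].values()) <= finite]
    return a_portion, b_portion

CONS = [({"X", "Y"}, {"Z"}), ({"Z"}, {"X", "Y"})]
path = [("cons", {"X": "X1", "Y": "U1", "Z": "U"}, CONS),   # U-predicate
        ("cons", {"X": "X1", "Y": "W1", "Z": "W"}, CONS)]   # W-predicate
a_part, b_part = split_chain(path, {"W"})
print([atom[1]["Z"] for atom in a_part], [atom[1]["Z"] for atom in b_part])  # ['W'] ['U']
```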
Example 2.2 For the predicate append(U, V, W), there are 2³ = 8 possible query binding patterns: bbb, bbf,
bfb, bff , fbb, fbf , ffb, and fff . Among these eight patterns, bff , fbf and fff are not finitely evaluable; bbf ,
fbb, and ffb require chain-split; and the remaining two do not require chain-split. One such case is presented
here.
The query with the binding pattern ffb, such as "? − append(U, V, [a, b]).", is finitely evaluable because all of the variables in the exit rule and the first expanded exit rule are adorned with b after binding propagation. The adornment transformation by the binding propagation in the first expanded exit rule is presented in (2.4), where the notation for the adornment "ffb → bbb" indicates that the initial adornment ffb is changed to bbb by the binding propagation.
append^{ffb→bbb}(U, V, W) ← cons(X1, U1, U), U1 = [], W1 = V, cons(X1, W1, W). (2.4)
The binding propagation proceeds as follows:
1. U1 and W are adorned with b according to the instantiations in the exit rule and the query;
2. X1 and W1 are adorned with b because W^b and there exists a finiteness constraint "W → (X1, W1)" for cons(X1, W1, W);
3. V is adorned with b since V = W1 and W1^b; and finally,
4. U is adorned with b because X1^b, U1^b, and there exists a finiteness constraint "(X1, U1) → U" for cons(X1, U1, U).
Since every variable in the rule is adorned with b after the binding propagation, the query is finitely evaluable.
Furthermore, Algorithm 2.2 asserts that chain-split evaluation should be performed on the chain generating path, "cons(X1, U1, U), cons(X1, W1, W)". This is because W^b makes "cons(X1, W1, W)" finitely evaluable but not "cons(X1, U1, U)", as shown below,
cons^ffb(X1, W1, W), cons^bff(X1, U1, U).
Therefore, the chain should be split into two, with "cons(X1, W1, W)" evaluated first and the other cons-predicate delayed until the first sub-chain and the exit portion have been evaluated. The adorned normalized linear recursive rule can be written in the chain-split form as follows.
append^ffb(U, V, W) ← cons^ffb(X1, W1, W), append^ffb(U1, V, W1), cons^bbf(X1, U1, U).
The rewritten rule indicates that the double-cons chain should be split into two subchains, with one subchain represented by "cons(X1, W1, W)" evaluated first and the other, "cons(X1, U1, U)", delayed until the exit rule is evaluated. □
3 Chain-Split Evaluation Techniques
There are two typical evaluation methods, magic sets and counting [2], in the evaluation of n-chain recursions
without chain-split. With appropriate modifications, these methods are applicable to chain-split evaluation.
3.1 Efficiency-based chain-split magic sets evaluation
Example 1.2 shows that undesirably large magic sets could be derived by strictly enforcing the binding propagation
rules without consideration of the size of intermediate relations [21]. Since the binding propagation rules do not
distinguish strong linkages (those effectively reducing the size of relevant sets) from weak ones (those involving
huge, cross-product like relations), some bindings, like the one in "same country bf (X; Y )", can still be passed to
the next subgoal in the body of the rule via a weak linkage. Obviously, if a restriction is enforced to confine the
passing of bindings to be via strong linkages only, effective magic sets can still be derived for efficient semi-naive
evaluation. This is the idea of efficiency-based chain-split magic sets evaluation.
Example 3.1 We re-examine the magic sets evaluation of the query "? − scsg(john, Y)" in Example 1.2. For the merged_parents chain, the binding X^b is propagated as,
parent^bf(X, X1), same_country^bf(X1, Y1), parent^fb(Y, Y1), scsg^bb(X1, Y1). (3.1)
Suppose on average a person has 2 parents and less than 5 children, and more than 2,000 persons share the same country in the database. We have,
1. for parent^bf(X, X1), each X value propagates to 2 X1 values;
2. for same_country^bf(X1, Y1), each X1 value propagates to more than 2,000 Y1 values;
3. for parent^fb(Y, Y1), each Y1 value propagates to less than 5 Y values.
Clearly, β_{X1 Y1} > 4,000,000 indicates a weak linkage. Thus, the binding propagation from X1 to Y1 should be prohibited via such a linkage. The merged_parents predicate should be split into two subchains, (i) "parent(X, X1)", and (ii) "same_country(X1, Y1), parent(Y, Y1)", with the first one evaluated first and the second one delayed until the exit rule is evaluated. Such binding passing generates,
parent^bf(X, X1), scsg^bf(X1, Y1), same_country^bb(X1, Y1), parent^fb(Y, Y1). (3.2)
This binding propagation derives the same magic sets as the sg recursion, on which the semi-naive evaluation can be performed efficiently. □
The join expansion ratio can be used as a simple judgement of whether a particular binding should be
propagated to another subgoal in the binding propagation. A relatively large number, such as 100, can be set as
a chain-split threshold. If the join expansion ratio is greater than this threshold, the binding propagation cannot
proceed. On the other hand, a relatively small number, such as 10, can be set as a chain-following threshold. If
the join expansion ratio is smaller than this threshold, the binding propagation proceeds. These two thresholds
can be tuned based on experimental results and system behavior. However, when the join expansion ratio is
greater than the chain-following threshold but less than the chain-split threshold, it is still necessary to perform
a detailed quantitative analysis based on chain characteristics and database statistics and compare the relative
costs of chain-following vs. chain-split in order to make an appropriate decision. Thus, we have,
Algorithm 3.1 Efficiency-based chain-split magic sets evaluation of a function-free linear recursion.
Input: A query and a compiled function-free linear recursion.
Output: An efficiency-based chain-split magic sets query evaluation plan.
• In the derivation of magic sets, the binding propagation rule [1] is modified as follows: if the join expansion ratio for ⟨X, Y⟩ is above the chain-split threshold, the binding will not be propagated from X to Y; if it is below the chain-following threshold, the binding will be propagated from X to Y; otherwise, a detailed quantitative analysis is performed to determine whether a chain-split is beneficial.
• Based on the modified binding propagation rules, the magic set(s) are derived, and the semi-naive evaluation [1] is performed on the sets of relevant facts. □
Based on the reasoning presented before the example, it is easy to see that Algorithm 3.1 derives a more
efficient query evaluation plan than the method which relies on blind binding passing without distinction of
strong linkages from weak ones.
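The threshold test at the heart of Algorithm 3.1 can be sketched as follows (our own encoding; the two threshold values are the illustrative ones suggested in the text, and the surrounding magic-set rewriting and semi-naive evaluation are not shown).

```python
# Sketch (ours) of the modified binding-propagation test of Algorithm 3.1:
# a binding is passed across a linkage only when the join expansion ratio
# says the linkage is strong enough.

CHAIN_SPLIT_THRESHOLD = 100.0      # tunable, as suggested in the text
CHAIN_FOLLOW_THRESHOLD = 10.0

def propagate_binding(beta, detailed_cost_analysis=None):
    """Return True if the binding should be passed across this linkage."""
    if beta > CHAIN_SPLIT_THRESHOLD:
        return False                        # weak linkage: block propagation
    if beta < CHAIN_FOLLOW_THRESHOLD:
        return True                         # strong linkage: propagate
    # grey zone: defer to a quantitative comparison of the two plans
    return detailed_cost_analysis() if detailed_cost_analysis else False

# same_country linkage from Example 3.1 (beta > 4,000,000): blocked, so the
# magic set is built from the parent chain alone, just as for sg.
print(propagate_binding(4_000_000.0))       # False
print(propagate_binding(2.0))               # True
```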
3.2 Buffered chain-split evaluation
A chain-split evaluation can be implemented by another technique: buffered chain-split evaluation, which
splits a chain generating path into two portions in the evaluation: (1) A-portion, which is the set of predicates
being evaluated, and (2) B-portion, which is the set of predicates being buffered. The buffered portion will not
be evaluated until the exit portion (the body of the exit rule) is evaluated. The variable values shared between
the A- and B- portions are buffered in the evaluation of the A-portion for later use in the evaluation of the
corresponding B-portion.
Figure 1: Chain-following vs. chain-split evaluation. (a) Chain-merged processing; (b) chain-split processing.
Fig. 1 shows the distinction between a) chain-following evaluation, and b) buffered chain-split evaluation.
In the chain-following evaluation, A and B are treated as one (merged) predicate. In the buffered chain-split
evaluation, A-portion is first evaluated, with the shared value(s) buffered. After the evaluation of the exit portion
E, the B-portion obtains sufficient binding information. Thus, the evaluation proceeds in a way similar to the
evaluation of a regular multi-chain recursion except that the corresponding buffered values are patched to the
corresponding variables in the evaluation of the buffered-portion. Therefore, we have the name, buffered chain-split
evaluation.
Example 3.2 According to the discussion in Example 2.2, "? − append(U, V, [a, b])" should be evaluated by chain-split. The chain generating path "cons(X1, U1, U), cons(X1, W1, W)" is partitioned into two portions: the U-predicate "cons(X1, U1, U)" and the W-predicate "cons(X1, W1, W)". As shown in Fig. 2, when i = 0, the evaluation essentially passes through the exit portion and derives the first set of answers: U = [], V = [a, b]. When i = 1, the U-predicate is not finitely evaluable. The evaluation proceeds along the W-predicate only, which derives "W1 = [b]" and "X1 = a" (X1 is buffered), and W1 is passed to the exit portion, making "U1 = []" and "V = [b]". Then the U-predicate is evaluable since X1 = a and U1 = [] are available. It derives "U = [a]". Thus, the second set of answers is U = [a], V = [b]. Similarly, the evaluation may proceed on the W-predicate further, which derives "W2 = []" and "X2 = b" (X2 is buffered), and W2 is passed to the exit portion, making "U2 = []" and "V = []". Then the U-predicate is evaluable, which derives "U1 = [b]" and "U = [a, b]". Thus, the third set of answers is U = [a, b], V = [].
Figure 2: Evaluation of "? − append(U, V, [a, b])".
In general, the following algorithm presents the buffered chain-split evaluation of a single-chain recursion,
where buffering is based on chain-level finite evaluability or evaluation efficiency. The algorithm can be easily
generalized to multi-chain recursions.
Algorithm 3.2 Buffered chain-split evaluation of a single-chain recursion.
Input: A query and a compiled functional single-chain recursion.
Output: A query evaluation plan which applies the buffered chain-split evaluation.
Suppose the chain generating path is partitioned into two portions according to the available query bindings: a being-evaluated portion A and a buffered portion B. The partition can be based on chain-level finite evaluability or evaluation efficiency. Suppose in the i-th chain generating path of the compiled form, A and B share a variable X_i, A shares a variable U_i with the (i + 1)-st A, and B shares a variable W_i with the (i + 1)-st B.
• First, suppose the query instantiates U_0. At the i-th iteration, based on the available binding U_{i−1}, A is evaluated, which derives U_i and buffers the corresponding X_i value. The iteration terminates when it satisfies the termination condition (e.g., when the list shrinks to empty or when the cyclic counting method determines its termination condition). Suppose that it terminates at the k-th iteration.
• Evaluate the exit portion of the compiled form.
• Pass the bindings obtained in the processing of the exit portion to B. Based on the available binding W_i and the buffered X_i, B is evaluated, which derives W_{i−1} at the (k − i)-th iteration. The evaluation terminates at the k-th iteration or when there is no W_{i−1} derivable at an iteration. □
Remark 3.1 The buffered chain-split evaluation performed by Algorithm 3.2 correctly evaluates a compiled single-chain
recursion.
Rationale. The algorithm is similar to counting [1] except that the values of variable X i 's are buffered in the
processing of the being evaluated portion of a chain generating path and reused in the processing of its buffered
portion. Notice that if there were no X_i linking the two portions in the recursion, there would be two chains in
the compiled recursion to which counting applies. Since the being evaluated portion is linked to the corresponding
buffered portion via X i in the chain generating path, it is necessary to buffer X i and reuse it in the evaluation
of the buffered portion. After the evaluation of the exit portion, the buffered portion must be finitely evaluable
based on the finite evaluability of the recursion. Therefore, the chain-split evaluation derives correct and complete
answers in the query processing. 2
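Specialized to the append query of Example 3.2, the buffered chain-split evaluation of Algorithm 3.2 can be sketched in Python as follows (our own illustration; the generic algorithm is of course not tied to lists).

```python
# Concrete sketch (ours) of the buffered chain-split evaluation of
# "? - append(U, V, [a, b])" from Example 3.2: the W-predicate is evaluated
# downwards while the X_i values are buffered, the exit rule fires at every
# level, and the buffered U-predicate is then evaluated upwards.

def append_chain_split(w):
    answers = []
    buffered_x, w_i = [], list(w)
    while True:
        # exit portion at level i: U_i = [], V = W_i
        u, v = [], list(w_i)
        # buffered U-predicate, patched with the X values in reverse order
        for x in reversed(buffered_x):
            u = [x] + u                    # cons(X_i, U_i, U_{i-1})
        answers.append((u, v))
        if not w_i:                        # termination: the list shrank to []
            return answers
        # W-predicate at the next level: cons(X_{i+1}, W_{i+1}, W_i)
        buffered_x.append(w_i[0])
        w_i = w_i[1:]

for u, v in append_chain_split(["a", "b"]):
    print(u, v)
# ([], ['a', 'b']), (['a'], ['b']), (['a', 'b'], [])
```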
Notice that the termination of buffered chain-split evaluation should be judged carefully. For a function-free
recursion, the evaluation terminates easily on acyclic data. For cyclic data, the method can be extended in a
way similar to cyclic counting algorithms (such as [5]). For a functional recursion, termination is often based on
the monotonicity of certain arguments [6]. The partial evaluation method also contributes to the termination of
chain-split evaluation which will be discussed in the next subsection.
3.3 Chain-split partial evaluation
In the buffered chain-split evaluation, every intermediate value shared between the split portions of a chain is
buffered. Along the derivation path of a split chain, there will be a sequence of buffered values associated with
each derived intermediate value. In the evaluation of the buffered portion, patching is performed by popping the
buffered value in reverse sequence. When the derivation sequence grows, such buffering and patching could be
quite costly.
As an improvement to the simple buffering scheme, partial evaluation can be performed on the buffered
values for many functional recursions as follows: Instead of storing a sequence of buffered values, the sequence
of buffered values should be evaluated as much as possible, and the partially evaluated values should be carried
along the evaluation path. Such partial evaluation reduces the complexity of patching the buffered values and
often facilitates the pushing of query constraints and the judgement of termination.
Example 3.3 A recursion travel, defined by the rule set {(3.2), (3.3)}, represents flights or a sequence of connected flights, leaving a departure city Dep at DTime, arriving at a destination city Arr at ATime, with a total fare equal to Fare.
travel([Fno], Dep, DTime, Arr, ATime, Fare) ←
    flight(Fno, Dep, DTime, Arr, ATime, Fare). (3.2)
travel([Fno|L], Dep, DTime, Arr, ATime, Fare) ←
    flight(Fno, Dep, DTime, A1, AT1, F1), travel(L, A1, DT1, Arr, ATime, F2), Fare = F1 + F2. (3.3)
The rule set is rectified into {(3.4), (3.5)}, where sum is a functional predicate for the arithmetic
"+", and cons is a functional predicate for the corresponding list construction function. According to [9], the
rectified rule set is in the normalized form, and its compiled form is (3.6), which consists of one chain with three
connected predicates, flight, sum, and cons.
Suppose a query is to find the sequences of connected flights from Vancouver to Ottawa, departing between 8 and 9 am, with a total fare of no more than 600.
? − travel(FnoList, vancouver, DTime, ottawa, ATime, Fare), DTime ≥ 8, DTime ≤ 9, Fare ≤ 600. (3.7)
It is difficult to apply the magic sets method in the evaluation because the query involves a functional recursion
and the semi-naive evaluation cannot terminate on such recursions (since Fare and the length of FnoList keep
growing).
The chain-based evaluation can be performed as follows. Since the query provides more selective information
at the departure end rather than at the arrival end, the processing should start at the departure end. Then
the departure airport "vancouver" is treated as a query constant, and similarly, the departure time constraint
"Dtime - 8; Dtime - 9" is a query constraint to be pushed at the departure end. The other constraints,
and "F are - 600", will be pushed during the query processing based on the constraint-
pushing principles [6].
The propagation of the binding "departure = vancouver" in the normalized recursive rule is shown below.
travel^fbffff(L, D, DT, A, AT, F) ←
    flight^fbffff(Fno, D, DT, A1, AT1, F1), travel^fbffff(L1, A1, DT1, A, AT, F2),
    sum^bbf(F1, F2, F), cons^bbf(Fno, L1, L).
When the evaluation starts at the departure end, the two functional predicates sum and cons are not finitely evaluable, because S_i is uninstantiated in the sequence of functional predicates sum(F_i, S_i, S_{i−1}), and L_i is uninstantiated in the sequence of functional predicates cons(Fno_i, L_i, L_{i−1}).
The buffered chain-split evaluation may proceed by buffering a sequence of F_i and Fno_i values. That is, when i = 1, F_1 and Fno_1 are buffered (for each generated tuple); when i = 2, the corresponding F_2 and Fno_2 are buffered, and so on. After it reaches Ottawa, we have S_k = 0 and L_k = []. Then the corresponding buffered values are patched in the evaluation of sum(F_i, S_i, S_{i−1}) and cons(Fno_i, L_i, L_{i−1}).
However, it is preferable to partially compute the buffered values in the evaluation. When i = 1, the buffered predicates are sum(F_1, S_1, S_0) and cons(Fno_1, L_1, L_0), where F_1 and Fno_1 are the instantiated values. When i = 2, the buffered predicates are sum(F_2, S_2, S_1) and cons(Fno_2, L_2, L_1); that is, S_0 = F_1 + F_2 + S_2 and L_0 = [Fno_1, Fno_2 | L_2], where F_1 + F_2 and [Fno_1, Fno_2] are the instantiated values. In general, S_0 = F_1 + ... + F_i + S_i and L_0 = [Fno_1, ..., Fno_i | L_i] are partially evaluable. When the flight relation is being evaluated, the partial sums F_1 + ... + F_i and the partial lists [Fno_1, ..., Fno_i] are computed. The evaluation of the buffered portion is trivial when it reaches Ottawa: at this point, S_i = 0 and L_i = [], and thus S_0 = F_1 + ... + F_k and L_0 = [Fno_1, ..., Fno_k].
Furthermore, since S and length(L) are monotonic functions, they can be used in the determination of termination and constraint pushing [6]. When S > 600, the continued search following this intermediate tuple will be hopeless, and such an intermediate tuple should be pruned from the intermediate result buffer. That is, the constraint Fare ≤ 600 can be transformed into S ≤ 600, because Fare ≥ S, and be pushed into the iteration. □
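The partial-evaluation idea of Example 3.3 can be sketched as follows (our own illustration: the flight data, the fare bound, and the added check that a connection departs after the previous arrival are assumptions, not taken from the paper).

```python
# Sketch (ours) of chain-split partial evaluation for the travel recursion of
# Example 3.3: instead of buffering every F_i and Fno_i, the partial fare sum
# and the partial flight list are carried along each derivation path, and the
# pushed constraint S <= 600 prunes hopeless intermediate tuples.

FLIGHTS = [  # (fno, dep, dtime, arr, atime, fare) -- made-up data
    ("AC1", "vancouver", 8, "calgary", 10, 250),
    ("AC2", "calgary", 11, "ottawa", 15, 300),
    ("AC3", "vancouver", 9, "ottawa", 16, 700),
]

def travel(dep, arr, max_fare, dtime_lo=8, dtime_hi=9):
    frontier = [([], dep, None, 0)]   # (fno_list, city, arrival_time, partial_fare)
    answers = []
    while frontier:
        fnos, city, atime, s = frontier.pop()
        for fno, d, dt, a, at, f in FLIGHTS:
            if d != city or s + f > max_fare:        # constraint pushing: S <= 600
                continue
            if atime is None and not (dtime_lo <= dt <= dtime_hi):
                continue                             # departure-time constraint
            if atime is not None and dt <= atime:
                continue                             # connection departs after arrival
            state = (fnos + [fno], a, at, s + f)     # partially evaluated list and sum
            (answers if a == arr else frontier).append(state)
    return [(fl, fare) for fl, _city, _t, fare in answers]

print(travel("vancouver", "ottawa", 600))   # [(['AC1', 'AC2'], 550)]
```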
Algorithm 3.3 Chain-split partial evaluation of a compiled functional single-chain recursion.
Input: A compiled functional single-chain recursion, a set of integrity constraints, a query predicate, and a set
of query constraints.
Output: A query evaluation plan which incorporates query constraints and implements chain-split partial evaluation
• Test whether the query is finitely evaluable and terminable. If it is not, stop and inform the user.
• Determine the start end of the chain processing based on the relative selectivity of the query constraints at
both ends of the compiled chain. Apply the query constraints belonging to this end as query instantiations
to reduce the size of the initial set.
• For the compiled chain, determine whether the chain-split evaluation should be performed based on the
chain-level finite evaluability and evaluation efficiency. If the chain-split evaluation should be performed,
determine (i) the partition of the being-evaluated portion and the buffered portion, and (ii) the partial
evaluation plan, if possible, for the buffered portion. The partial evaluation is performed by evaluating the
buffered portion partially. That is, evaluate them as much as possible using the instantiated values and
leave only the uninstantiated portion buffered and carried to the next stage.
• Instantiate the termination constraints based on the monotonicity constraints and the remaining query
constraints. Push the termination constraints into the chain for iterative chain evaluation [6]. 2
Remark 3.2 Algorithm 3.3 correctly incorporates query constraints and implements chain-split partial evaluation
in the evaluation of compiled functional single-chain recursions.
Rationale. Step 1 is necessary since a query must be finitely evaluable and terminate. Step 2 is necessary and
correct since the most selective information should be pushed into the compiled chain for initial processing [3].
Step 3 is correct since if the chain-split evaluation is to be performed, partial evaluation should be explored. Step
4 is correct based on the study of constraint-based query processing in deductive databases [6]. 2
A similar algorithm can be derived for constraint-enforced chain-split partial evaluation of multi-chain recursions.
4 Chain-Split Evaluation of Complex Logic Programs
Chain-split evaluation is not confined to (single) linear recursions. Since similar binding propagation rules may
suffer from the same kind of inefficiency and/or infinite evaluation problems in complex classes of logic programs,
chain-split should be applied to such programs as well. In this section, chain-split evaluation in complex classes of
recursive programs is examined, which demonstrates that chain-split and chain-following are two basic recursive
query evaluation techniques.
4.1 Evaluation of nested linear recursions
According to the definition of nested linear recursion, if every lower level IDB predicate in a nested linear recursion
is treated like an EDB predicate, the recursion at each level can still be viewed as a (single) linear recursion.
Thus, the recursion at each level can be normalized independently, and query analysis can be performed on each
normalized recursion.
Example 4.1 The insertion sort recursion, isort, defined by the following program [20] is a nested linear recursion because the predicate insert in the body of the recursive rule (4.1) is in turn defined by a linear recursion.
isort([X|Xs], Ys) ← isort(Xs, Zs), insert(X, Zs, Ys). (4.1)
isort([], []). (4.2)
insert(X, [], [X]). (4.3)
insert(X, [Y|Ys], [X, Y|Ys]) ← X ≤ Y. (4.4)
insert(X, [Y|Ys], [Y|Zs]) ← X > Y, insert(X, Ys, Zs). (4.5)
It can be rectified into the following program, in which every recursive rule is normalized [9].
isort(XXs, Ys) ← cons(X, Xs, XXs), insert(X, Zs, Ys), isort(Xs, Zs). (4.6)
isort(XXs, Ys) ← XXs = [], Ys = []. (4.7)
insert(X, YYs, YZs) ← YYs = [], cons(X, [], YZs). (4.8)
insert(X, YYs, YZs) ← cons(Y, Ys, YYs), X ≤ Y, cons(X, YYs, YZs). (4.9)
insert(X, YYs, YZs) ← cons(Y, Ys, YYs), X > Y, insert(X, Ys, Zs), cons(Y, Zs, YZs). (4.10)
Treating insert like an EDB predicate, the recursion isort, defined by (4.6) and (4.7), is a normalized single-chain recursion. The recursion insert, defined by {(4.8)–(4.10)}, is also a normalized single-chain recursion.
Query analysis can be performed on the normalized recursion at each level. Taking the query "? − isort([5, 7, 1], Ys)." as an example, the analysis proceeds as follows.
The adorned query predicate is isort^bf. The query binding propagation leads to the following adorned program, where the notation "Ys =^fb []" indicates that in the built-in predicate "=", the first argument Ys is free and the second one [] is bound.
isort^bf(XXs, Ys) ← cons^ffb(X, Xs, XXs), isort^bf(Xs, Zs), insert^bbf(X, Zs, Ys). (4.11)
isort^bf(XXs, Ys) ← XXs =^bb [], Ys =^fb []. (4.12)
insert^bbf(X, YYs, YZs) ← YYs =^bb [], cons^bbf(X, [], YZs). (4.13)
insert^bbf(X, YYs, YZs) ← cons^ffb(Y, Ys, YYs), X ≤^bb Y, cons^bbf(X, YYs, YZs). (4.14)
insert^bbf(X, YYs, YZs) ← cons^ffb(Y, Ys, YYs), X >^bb Y, insert^bbf(X, Ys, Zs), cons^bbf(Y, Zs, YZs). (4.15)
In comparison with the normalized but not adorned program, some predicates in the adorned program are
reordered based on the analysis of finite evaluability. For example, the two predicates isort and insert in the
normalized rule (4.6) are swapped in the adorned rule (4.11). This is because the query binding propagation
following the original ordering will lead to a nonfinitely evaluable adorned predicate insert bff . The predicate
ordering in (4.11) makes every predicate finitely evaluable. Since the two predicates, "cons(X; Xs; XXs)" and
"insert(X; Zs; Y s)", in the chain generating path share a variable X, chain-split evaluation should be performed
on the recursion isort bf . Similarly, chain-split evaluation should be performed on the recursion insert bbf .
The evaluation of query "? − isort([5, 7, 1], Ys)." proceeds as follows. The evaluation of (4.11) leads to "X = 5" (which is buffered) and "Xs = [7, 1]", and then a call "isort([7, 1], Zs)", which in turn leads to "X = 7" (which is buffered) and "Xs = [1]", and a call "isort([1], Zs')". This leads to "X = 1" (which is also buffered) and "Xs = []", and a call "isort([], Zs'')". This call executes (4.12) and results in "Zs'' = []". The evaluation then executes a sequence of calls (with the buffered values popped in the reverse sequence), that is, "insert(1, [], Zs'), insert(7, Zs', Zs), insert(5, Zs, Ys)". The evaluation of this sequence of calls is performed as follows. First, "insert(1, [], Zs')" results in "Zs' = [1]" since it can only execute (4.13). Second, "insert(7, [1], Zs)" leads to "Zs = [1, 7]" since it executes (4.15), which in turn calls "insert(7, [], Zs1)" and then executes the rule (4.13). Third, "insert(5, [1, 7], Ys)" calls "insert(5, [7], Zs2), cons(1, Zs2, Ys)". This leads to the final answer, "Ys = [1, 5, 7]".
This example demonstrates that chain-split evaluation is a popular technique in the evaluation of nested linear
recursions. Similarly, it can be shown that chain-split evaluation is a primitive query evaluation technique for
multiple linear recursions.
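The two-phase evaluation of Example 4.1 can be mirrored in a short Python sketch (ours): the head elements are buffered on the way down the isort chain, and the delayed insert calls are executed on the way back, in reverse order of buffering.

```python
# Sketch (ours) of the chain-split evaluation of "? - isort([5, 7, 1], Ys)"
# from Example 4.1: the head elements X are buffered on the way down the
# isort chain; the buffered insert calls are then evaluated in reverse order.

def insert(x, ys):                 # rules (4.8)-(4.10), evaluated with x, ys bound
    if not ys:
        return [x]
    y, rest = ys[0], ys[1:]
    return [x] + ys if x <= y else [y] + insert(x, rest)

def isort_chain_split(xxs):
    buffered, xs = [], list(xxs)
    while xs:                      # downward phase: cons(X, Xs, XXs), buffer X
        buffered.append(xs[0])
        xs = xs[1:]
    ys = []                        # exit rule (4.7): isort([], [])
    for x in reversed(buffered):   # upward phase: delayed insert(X, Zs, Ys)
        ys = insert(x, ys)
    return ys

print(isort_chain_split([5, 7, 1]))   # [1, 5, 7]
```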
4.2 Evaluation of nonlinear recursions
Finally, we demonstrate that chain-split evaluation should also be a primitive query evaluation technique for
nonlinear recursions. Since many nonlinear recursions cannot be compiled into highly regular chain forms, "chain-
split" is a misnomer. However, the split of a set of connected EDB and/or lower-level IDB predicates in the
evaluation of a nonlinear recursion shares the same spirit as the chain-split evaluation in a linear recursion. Thus,
it can still be called "chain-split evaluation". One such example is examined in this subsection.
Example 4.2 The quick sort recursion, qsort, defined by the following program [20] is a nonlinear recursion because the recursive rule (4.16) is a nonlinear recursive rule.
qsort([X|Xs], Ys) ← partition(Xs, X, Littles, Bigs), qsort(Littles, Ls), qsort(Bigs, Bs), append(Ls, [X|Bs], Ys). (4.16)
qsort([], []). (4.17)
partition([X|Xs], Y, [X|Ls], Bs) ← X ≤ Y, partition(Xs, Y, Ls, Bs). (4.18)
partition([X|Xs], Y, Ls, [X|Bs]) ← X > Y, partition(Xs, Y, Ls, Bs). (4.19)
partition([], Y, [], []). (4.20)
It is rectified into the following program.
qsort(XXs, Ys) ← cons(X, Xs, XXs), partition(Xs, X, Littles, Bigs),
    qsort(Littles, Ls), qsort(Bigs, Bs), append(Ls, XBs, Ys), cons(X, Bs, XBs). (4.21)
qsort(XXs, Ys) ← XXs = [], Ys = []. (4.22)
partition(XXs, Y, XLs, XBs) ← cons(X, Xs, XXs), X ≤ Y, partition(Xs, Y, Ls, XBs), cons(X, Ls, XLs). (4.23)
partition(XXs, Y, XLs, XBs) ← cons(X, Xs, XXs), X > Y, partition(Xs, Y, XLs, Bs), cons(X, Bs, XBs). (4.24)
partition(XXs, Y, XLs, XBs) ← XXs = [], XLs = [], XBs = []. (4.25)
Treating the lower-level predicates partition and append as EDB predicates, the recursion qsort, (4.21) and (4.22), is a nonlinear recursion; whereas the lower-level recursion partition is a multiple linear recursion, and append is a (single) linear recursion. Query analysis can be performed at each level of the recursions.
Taking the query "? − qsort([4, 9, 5], Ys)." as an example, the analysis proceeds as follows.
The adorned query predicate is qsort^bf. The query binding propagation leads to the following adorned program.
qsort^bf(XXs, Ys) ← cons^ffb(X, Xs, XXs), partition^bbff(Xs, X, Littles, Bigs),
    qsort^bf(Littles, Ls), qsort^bf(Bigs, Bs),
    cons^bbf(X, Bs, XBs), append^bbf(Ls, XBs, Ys). (4.26)
qsort^bf(XXs, Ys) ← XXs =^bb [], Ys =^fb []. (4.27)
partition^bbff(XXs, Y, XLs, XBs) ← cons^ffb(X, Xs, XXs), X ≤^bb Y,
    partition^bbff(Xs, Y, Ls, XBs), cons^bbf(X, Ls, XLs). (4.28)
partition^bbff(XXs, Y, XLs, XBs) ← cons^ffb(X, Xs, XXs), X >^bb Y,
    partition^bbff(Xs, Y, XLs, Bs), cons^bbf(X, Bs, XBs). (4.29)
partition^bbff(XXs, Y, XLs, XBs) ← XXs =^bb [], XLs =^fb [], XBs =^fb []. (4.30)
Notice that all of the transformed primitive predicates and the low-level IDB predicates in the rectified rule
(4.21) are connected together by shared variables. However, this set of connected predicates are split into two
portions in the adorned rule (4.26) to facilitate finite evaluation. Similar chain-split is performed in the adorned
rules (4.28) and (4.29) for the recursion partition. Chain-split is also performed in the evaluation of append bbf ,
which is similar to the process demonstrated in Example 3.2.
The evaluation of query "? − qsort([4, 9, 5], Ys)." proceeds as follows.
• The evaluation of (4.26) leads to "X = 4" and "Xs = [9, 5]", and the remaining goal becomes
partition([9, 5], 4, Littles, Bigs), qsort(Littles, Ls), qsort(Bigs, Bs), cons(4, Bs, XBs), append(Ls, XBs, Ys). (4.31)
• The evaluation of "partition([9, 5], 4, Littles, Bigs)" leads to "9 > 4", "Bigs = [9|Bs1]", and a call "partition([5], 4, Littles, Bs1)".
• This leads to the evaluation of "partition^bbff([5], 4, XLs, Bs1)": since "5 > 4", it derives "Bs1 = [5|Bs0]" and a call "partition([], 4, XLs, Bs0)".
• The evaluation of "partition([], 4, XLs, Bs0)", applying rule (4.30), derives "XLs = []" and "Bs0 = []". It in turn derives "Littles = []" and "Bigs = [9, 5]" (4.32). (4.31) now becomes
qsort([], Ls), qsort([9, 5], Bs), cons(4, Bs, XBs), append(Ls, XBs, Ys).
• The evaluation of "qsort([], Ls)", applying the rule (4.27), derives "Ls = []", and the evaluation of "qsort([9, 5], Bs)" leads to "Bs = [5, 9]" by a similar process applying the rule (4.26). Finally, "cons(4, [5, 9], [4, 5, 9]), append([], [4, 5, 9], Ys)" leads to "Ys = [4, 5, 9]".
From this example, it is not difficult to see that chain-split is a primitive and frequently-applied evaluation
technique in the processing of many nonlinear recursions as well. To illustrate further the importance of chain-
split evaluation in the evaluation of nonlinear recursions, we examine a nonlinear recursion in the form of {(4.35), (4.36)}.
Let the query be in the form of p^{b...f}. Usually, the binding on X can be passed from predicate a1 to a2 via
the variable W . However, if the binding passing via W is weak or leads to an infinitely evaluable predicate a 2 ,
chain-split should be performed by buffering the intermediate value of W , delaying the evaluation of a 2 by first
evaluating the first recursive predicate p in the body. After the evaluation of this p, a 2 can be evaluated efficiently
or finitely with the availability of an additional binding W 1 . A similar chain-split process can be performed for
the connected predicates b 1 and b 2 with respect to the second p in the body. Thus, chain-split is a commonly
used technique in the evaluation of nonlinear recursions.
5 Conclusions
An interesting recursive query evaluation technique, chain-split evaluation, is investigated in this study. Chain-
split evaluation splits a chain generating path (a set of connected EDB and/or lower-level IDB predicates) into
two portions in the evaluation: an immediately evaluable portion and a delayed-evaluation portion. Chain-split
evaluation should be applied when the split reduces the size of intermediate relations and/or transforms an
infinitely evaluable subprogram into a finitely evaluable one.
Our study demonstrates that chain-split evaluation is an important query evaluation technique. It is especially
useful for many functional recursions whose compiled chains consist of infinitely evaluable functions. The necessity
of chain-split evaluation and the judgement of when a chain needs to be split, based on chain-level finite evaluability
and/or evaluation efficiency are studied. Three chain-split evaluation techniques: magic sets, buffered evaluation
and partial evaluation, are developed. The magic sets chain-split evaluation technique blocks the binding propagation
via unpromising paths during the magic rule rewriting, which leads to the derivation of efficient magic sets.
The buffered chain-split evaluation buffers the shared values in the evaluation of one split subchain and patches
back the buffered values in a later evaluation. Partial evaluation is a refinement of the buffered evaluation: it evaluates as many buffered functional predicates as possible to reduce the cost of maintaining the sequences of buffered values and to facilitate the termination judgement and constraint pushing.
A set of frequently encountered, interesting examples is analyzed in our study. The analysis demonstrates that chain-split evaluation and chain-following evaluation together form two primitive techniques in the evaluation of different classes of recursions. Furthermore, the evaluation should be integrated with existence
checking and constraint-based query evaluation techniques [6] to achieve high performance in the evaluation of
sophisticated logic programs.
To the best of our knowledge, no detailed study on chain-split evaluation was performed in previous deductive
database research [4, 16, 21, 23, 22]. Many deductive database system projects, such as LDL [4], EKS-V1
[23], CORAL [16, 19], etc. have been focused on the evaluation of function-free recursions; whereas chain-split
evaluation is frequently encountered in functional recursions, as demonstrated in this study. Recent studies
[4, 16, 19] have extended the Datalog data model to handle function symbols to a limited extent, however, based
on our knowledge, no chain-split evaluation has been performed in those projects.
Our analysis demonstrates that a large set of logic programs with different classes of recursions can be implemented
efficiently using a compilation-based query analysis and optimization technique originated from the
deductive database research. In comparison with other logic programming implementation techniques, the deductive
database approach derives efficient query evaluation plans based on compilation, normalization, program
transformation and query analysis. The effectiveness and completeness of query evaluation in deductive databases
is independent of predicate ordering in rules, independent of the ordering of rules and facts in logic programs,
and independent of different query forms. Such flexibility in the analysis of logic programs leads to powerful and
efficient query evaluation mechanisms for both data-intensive and logic-intensive programs and may represent an
interesting direction towards fully declarative programming of logic programs.
We are currently implementing a sophisticated query analyzer and query evaluator as a part of the LogicBase
project [7]. The LogicBase deductive database system consists of two major components: a rule compiler and a
query evaluator. The former classifies different kinds of recursions and compiles linear and nested linear recursions
into their normalized forms [9]; whereas the latter integrates chain-following, chain-split and constraint-based
evaluation techniques in deductive query evaluation. A preliminary version of the LogicBase system has been
implemented in the UNIX system using LEX, YACC and C, and has been successfully tested on many interesting
recursions, such as append, travel, isort, nqueens, etc. Queries with different input/output mode combinations
can be evaluated correctly and efficiently on such recursions, independent of predicate ordering or rule ordering
in logic programs.
Our current implementation of chain-based query evaluation in LogicBase is confined to logic programs consisting of linear and nested linear recursions. A scan of the example programs in most logic programming textbooks
will discover that a majority of frequently-used logic programs belong to this category. Many sophisticated logic
programs beyond linear and nested linear recursions cannot be compiled into highly regular chain forms. How-
ever, similar chain-based evaluation techniques may still apply as demonstrated in our analysis of the quick sort
program. More systematic study should be performed on the analysis and evaluation of such complex recursive
programs, which may lead to a general and efficient query analysis and evaluation technique for deductive
database and logic programming systems.
Acknowledgement
The author would like to express his thanks to Ling Liu and Zhaohui Xie for their implementation of the method
in the LogicBase project and anonymous referees for their constructive comments which improved the quality of
the paper.
--R
Magic sets and other strange ways to implement logic programs.
An amateur's introduction to recursive query processing strategies.
Bounds on the propagation of selection into logic programs.
The LDL system prototype.
A counting algorithm for a cyclic binary query.
A system prototype for deductive query evaluation.
Asynchronous chain recursions.
Automatic generation of compiled forms for linear recursions.
Efficient transitive closure algorithms.
A study of transitive closure as a recursion mechanism.
A framework for testing safety and effective computability of extended datalog.
Optimization in a logic based language for knowledge and data intensive applications.
Safety of recursive Horn clauses with infinite relations.
On testing effective computability of magic programs.
Access path selection in a relational database management system.
Pushing constraint selections.
The Art of Prolog.
Principles of Database and Knowledge-Base Systems
An introduction to the ADITI deductive database system.
In AAAI-90 Workshop on Knowledge Base Management Systems
--TR
--CTR
Yangjun Chen, On the Graph Traversal and Linear Binary-Chain Programs, IEEE Transactions on Knowledge and Data Engineering, v.15 n.3, p.573-596, March | query optimization;deductive database;recursive query evaluation;query analysis;logic programming;query processing |
627662 | Unified Integration of Explicit Knowledge and Learning by Example in Recurrent Networks. | We propose a novel unified approach for integrating explicit knowledge and learning by example in recurrent networks. The explicit knowledge is represented by automaton rules, which are directly injected into the connections of a network. This can be accomplished by using a technique based on linear programming, instead of learning from random initial weights. Learning is conceived as a refinement process and is mainly responsible for uncertain information management. We present preliminary results for problems of automatic speech recognition. | Introduction
The resurgence of interest in connectionist models has led several researchers to investigate
their application to the building of "intelligent systems". Unlike symbolic models proposed
in artificial intelligence, learning plays a central role in connectionist models. Many
successful applications have mainly concerned perceptual tasks (see e.g. [2, 6, 10, 20]),
where discovering explicit rules does not seem either natural, or easy. Connectionist models
appear better suited for low level tasks than symbolic ones. Like humans, they rely
on learning for perceptual tasks. On the other hand, the learning by example paradigm
cannot be pushed too far when emulating arbitrary intelligent behavior. In many cases an
intelligent behavior follows explicit rules. As a matter of fact, any machine conceived for
these situations should not ignore this knowledge.
In order to prove the potential power of the learning by example paradigm, for problems
of learning sequences, some researchers have recently shown that explicit rules can
also be discovered by learning from tabula rasa configurations. In particular Cleeremans
[3], Elman [5], and Williams [22] have demonstrated that a fully connected recurrent network
is capable of learning small automata. These investigations are very interesting,
since they show that connectionist models can learn rules relying only on the presentation
of examples. However, a closer investigation shows that we cannot push the learning by
example paradigm too far for arbitrary problems. Complex tasks may give rise to local
minima, and ordinary gradient descent learning algorithms are likely to fail in these cases.
At least for feedforward nets, an analysis of this problem has been carried out which allows
us to understand the success of Backpropagation [18] in several problems of pattern
recognition [11]. However, using the same theory, it can be easily proven that simple
examples exist in which the learning algorithm fails to discover the optimal solution.
The integration of explicit knowledge and learning by example appears to be a natural
way of evolving intelligent systems based on connectionist models. Our hypothesis is that
for a model to be effective, this integration should be uniform. As a consequence, explicit
and learned rules should be represented in the same way by the weight connections of a
neural network.
In this paper we address the problem of learning sequences and we assume that the
explicit knowledge on such problem is available in terms of automaton rules. Our basic
assumption for implementing automata is that of relying on their state equations. In so
doing, all the efforts are focussed on finding out a method for injecting automaton rules
into the connections of a recurrent network. In section II we demonstrate that automaton
states can be coded with neuron activities and, in section III, that each automaton rule
can be realized in terms of constraints on the weights. In particular, all the automaton
rules can be translated into a set of inequalities according to the linear programming
framework.
On the basis of these remarks, in section IV we propose a unified approach for integrating
explicit knowledge and learning by example paradigm in recurrent neural networks.
We propose an architecture composed of two cooperating subnets. The first one is designed
in order to inject the available explicit knowledge, whereas the second one is mainly
responsible for handling uncertain information.
The effectiveness of the proposed model is currently under evaluation for problems of
automatic speech recognition. In section V we report preliminary results for a problem
of isolated word recognition. The chosen test, based on our own Italian speech database,
is quite difficult, since all the words are composed of nasals and vowels. The purpose
of the experiments is mainly that of providing material to discuss on the behavior of
the proposed model in practice. Unlike many suggested solutions proposed in literature,
which are based only on learning by example [4, 13, 17], the proposed model is likely to
scale up very well when increasing the lexicon.
II Information latching
Let N and U be the set of neurons and external network inputs, respectively. Each
neuron receives inputs from N ∪ U. The recurrent network model we consider is based
on the following equations:
x_i(t) = f(a_i(t)),   a_i(t) = Σ_{j ∈ N ∪ U} w_ij x_j(t − 1),   (1)
where f is a sigmoidal squashing function, I_i(t) denotes the contribution of the external
inputs to the activation of neuron i, and X(t) = [x_1(t), ..., x_n(t)] is the network status,
which contains all the outputs of the neurons. We also denote with W_i the vector of weights
towards neuron i. When feeding the network with a sequence of inputs, the status X(t)
represents a codification of the information extracted from that sequence.
Let us investigate the possibility of latching the information of a given state. As we
will show in the next section, this is very useful in order to investigate the automaton
realization.
Definition 1.
We say that a given dynamic hidden neuron latches the information at t_0, represented by
the sign of its activation a_i(t_0), if the following inequalities hold:
a_i(t) > 0, ∀t ≥ t_0, if a_i(t_0) > 0;
a_i(t) < 0, ∀t ≥ t_0, if a_i(t_0) < 0.   (2)
The concept of information latching has been introduced in [8] for discussing the properties
of Local Feedback Multi-Layered Networks. This definition suggests the interpretation
of the neuron output as a boolean status, in that only the sign x^b_i(t) of the output is
relevant.
Figure 1: Graphical interpretation of information latching.
Henceforth, when referring to state and state transition in the network, we will assume
tacitly that x^b_i(t) is involved and not the actual output x_i(t).
Theorem 1.
Given a generic neuron i, the following facts hold:
1. if I_i(t) = 0 for all t, latching occurs provided that w_ii > 1/f'(0);
2. if w_ii > 2, the latching condition also holds if |I_i(t)| < I*_i, I*_i being a positive
threshold that depends on w_ii;
3. if w_ii > 2, a state transition occurs in a finite number of steps if
I_i(t) > I*_i (low to high transition),
I_i(t) < −I*_i (high to low transition).
Proof
Let us consider the free evolution of the generic neuron i ∈ N, which follows
the equation:
a_i(t + 1) = w_ii f(a_i(t)).   (3)
Because of the hypothesis w_ii > 2, equation (3) has three equilibrium points (see line (1)
in fig. 1). One of them corresponds to a_i = 0. Both the other points are asymptotically
stable. We prove this fact for the positive point ā_i, which satisfies
ā_i = w_ii f(ā_i).
Let us define the Lyapunov function ([14] pp. 166-221) V (a_i) as the squared distance of a_i
from ā_i. Because of the hypothesis w_ii > 2, the increment ΔV can be written as a product
of two factors. If a_i > ā_i, the first factor is positive and consequently ΔV < 0; if a_i < ā_i,
the first factor is negative and we have ΔV < 0 again. Hence the stability of ā_i for
each a_i ∈ (0, +∞). A similar proof can be provided for the stability of the other non-null
solution of (3). The equilibrium point a_i = 0 is unstable. In fact, starting from any point
in the neighborhood of zero, the state trajectory goes to one of the two stable points ±ā_i,
according to the initial sign.
The same proof is also valid if the neuron receives a constant input I_0 such that
|I_0| < I*_i. This has the effect of translating the input line in fig. 1. When |I_0| = I*_i such line
becomes tangent to the curve f(a_i). This situation corresponds to a degeneration of two
equilibrium points. A straightforward analysis allows us to check the relationship given
for I*_i in the second statement of the theorem. □
Now let us consider the effect of adding a time-variant forcing term I_i(t) bounded in
module by a constant I_0 such that I_0 < I*_i. As previously done, let us limit the
analysis to the positive solution. From the previous discussion, it follows that the system
has a stable equilibrium point ᾱ_i. We can easily prove that the activation a_i(t) of the
system
a_i(t + 1) = w_ii f(a_i(t)) + I_i(t)
satisfies the inequality
a_i(t) ≥ α_i(t),   (9)
where α_i(t) denotes the activation obtained when the forcing term is replaced by the
constant −I_0.
By assuming a null initial state, eq. (9) is obviously valid for t = 0; let us suppose that
it is valid at t; then the monotonicity of f and the bound on I_i(t) yield a_i(t + 1) ≥ α_i(t + 1).
Because of the previous considerations on the stability of ᾱ_i (see eq. (7)), the activation
α_i(t), and therefore a_i(t), cannot change their sign, and then information latching occurs.
Just remember thatf 0 (a i
Table 1: Relationship between transient duration L and neuron input I_i, for different
values of w_ii.
In order to prove the third theorem's statement, let us consider the case of a neuron
latched in the high state. When an input I_i < −I*_i is applied, the input line has only one
intersection with the curve f(a_i) (see fig. 1, line (2)). Therefore, a_i(t)'s evolution follows
the attractive trajectory towards the unique equilibrium point (see dotted lines in fig. 1),
which corresponds to a low state boolean value. A similar proof can be given for the low
to high transition. □
This theorem indicates under which conditions information latching occurs. It makes
it clear that the more the local weights increase, the more the latching is related to
saturated configurations. Moreover, Theorem 1's second statement defines the conditions
under which the current state is latched. It indicates the limit condition which guarantees
information latching, and consequently the state transitions. If we increase w_ii then I*_i
increases, thus indicating more robustness in latching information.
The value of w_ii also affects the transient duration L when a state transition occurs:
the greater is w_ii, the longer is the transient. This behavior is summarized in table 1 for
the case of the low to high transition. The table was created assuming a neuron initially
latched in the low state; the transient duration is evaluated for a constant input I_i.
For each column, all the I_i values belonging to the interval with extremes given by two
subsequent row values determine the number of steps specified by the first row value.
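As a concrete illustration of this behavior, the following small Python sketch iterates the
self-loop dynamics a_i(t+1) = w_ii f(a_i(t)) + I_i(t) for a single neuron. The choice
f(a) = tanh(a/2) (so that f'(0) = 1/2, consistent with the w_ii > 2 condition), the value of
w_ii and the input sequences are illustrative assumptions and are not taken from the paper.
import math

def f(a):
    # Symmetric sigmoid with f'(0) = 1/2 (an assumed choice consistent with w_ii > 2).
    return math.tanh(a / 2.0)

def simulate(w_ii, inputs, a0=0.1):
    # Iterate a_i(t+1) = w_ii * f(a_i(t)) + I_i(t) and return the activation trace.
    a, trace = a0, [a0]
    for I in inputs:
        a = w_ii * f(a) + I
        trace.append(a)
    return trace

w = 3.0
print([round(a, 2) for a in simulate(w, [0.0] * 8)])               # free evolution: latches high
print([round(a, 2) for a in simulate(w, [0.3, -0.3] * 4)])         # small bounded input: sign is kept
print([round(a, 2) for a in simulate(w, [0.0] * 8 + [-3.0] * 3)])  # strong negative input: transition
With w_ii = 3 the activation converges towards the positive equilibrium, keeps its sign under
small bounded inputs, and flips to the negative equilibrium within a few steps once the input
exceeds the threshold, mirroring the three statements of Theorem 1.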
III K algorithm: learning by linear programming
As pointed out in the introduction, an "intelligent behavior" often follows rules that can
be explicit to some extent. In order to limit the complexity of the learning phase, any
intelligent system should exploit these rules. As we will show in section V, the input
information is sometimes represented by a continuous signal. However, in these cases, we
can derive a symbolic representation of that information by means of an input quanti-
zation. The string obtained in such way may contain subsequent repetitions of symbols
(e.g. "nnuuummaaa"). In problems of automatic speech recognition these repetitions are
related to the phoneme duration. Since we consider uncertain information, the number of
repetitions can help detecting low level errors. We assume that a sequence fU(t)g belongs
to a certain class c if it is accepted by the particular automaton A c representing class c.
In order to understand how such automaton operates, think of this machine as a
cascade of two blocks. The first one has the task of modeling the duration, and provides a
sort of filtering of the input sequence. It produces an instance of a symbol provided that
it is repeated at least for a given number of steps (e.g. if that number of steps is 2, then
processing of "nnmnuumummaaa" would produce "numa"). The second block is simply
a Finite State Automaton (FSA), and represents the basic knowledge we assume on the
problem. We notice that, unlike the above mentioned FSA, the cascade of the two blocks
may be regarded as a nondeterministic automaton [19].
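The duration-modeling block described above can be pictured with a few lines of Python;
the run-length threshold of 2 matches the example in the text, while everything else
(function name, string handling) is just an illustrative sketch.
from itertools import groupby

def duration_filter(symbols, min_repeats=2):
    # Keep one instance of a symbol only if it is repeated at least min_repeats times.
    return "".join(s for s, run in groupby(symbols) if len(list(run)) >= min_repeats)

print(duration_filter("nnmnuumummaaa"))   # prints "numa"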
In this section we demonstrate how such automaton rules can be realized by net-work
(1) in terms of weight boundaries. This turns out to be very useful for integrating
these rules with learning by example [7].
The first step is to choose a proper coding of the automaton states by means of the
boolean states x b
i (t) of neurons in the recurrent network (1). We assume that, for each pair
of present and next-state of the FSA, the Hamming distance between the corresponding
codifications is one. Thereafter, for each neuron i it is straightforward to derive the set
R_i of neuron switching rules from the automaton rules. For each rule r ∈ R_i we denote
with x^b_{i,r} and x̄^b_{i,r} respectively the present and the next boolean state of neuron i (in
the sequel, the index t may be omitted for the sake of simplicity). The
neuron switching rules are implemented by using the results contained in Theorem 1. Let
W̃_i be a vector of weights such that the rules R_i hold. The input to neuron i
fulfilling rule r, denoted I_{i,r}(W̃_i), is the weighted sum of the inputs of neuron i under the
present-state and input configuration prescribed by r.
Because of the coding assumption, I_i is constant during the state transition. As a result
we can directly apply Theorem 1, and then the following linear constraints on the weights
must hold:
σ_{i,r} I_{i,r}(W̃_i) > I*_i,   (12)
where σ_{i,r} = 1 if a boolean state switching is required, otherwise σ_{i,r} = −1. For example,
let us consider a rule r which requires a low to high switching for the boolean state of
neuron i. In this case x̄^b_{i,r} = 1 and
equation (12) becomes I_{i,r}(W̃_i) > I*_i,
according to Theorem 1's result.
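To illustrate how the set R_i can be obtained from a coded automaton, here is a
hypothetical Python sketch; the transition table, the state coding and the bipolar
low/high encoding (−1/+1) are assumptions made for the example only.
def switching_rules(transitions, coding):
    # For each neuron, collect the switching rules (present code, input symbol, sigma)
    # implied by an FSA whose present/next state codes have Hamming distance at most one.
    rules = {}
    for (state, symbol), next_state in transitions.items():
        cur, nxt = coding[state], coding[next_state]
        flipped = [i for i in range(len(cur)) if cur[i] != nxt[i]]
        assert len(flipped) <= 1, "codes must differ in at most one position"
        for i in flipped:
            sigma = +1 if nxt[i] > cur[i] else -1   # low-to-high or high-to-low switching
            rules.setdefault(i, []).append((cur, symbol, sigma))
    return rules

coding = {0: (-1, -1, -1), 1: (1, -1, -1), 2: (1, 1, -1), 3: (1, 1, 1)}   # chain-like coding
transitions = {(0, "n"): 1, (1, "u"): 2, (2, "m"): 3}
print(switching_rules(transitions, coding))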
Equations (12) are in the framework of linear programming. A feasible solution is
a point W̃_i of the weight space which lies in the convex region bounded by the set of
hyperplanes H_{i,r}(W_i) = 0, r ∈ R_i.
Any solution of (12) satisfies the requirements arising from the FSA. It is important to
remember that each network state transition in (1) may need more than one step, and
that the number of these steps depends strictly on the relationship between I_i and I*_i.
This fact is shown clearly in Table 1.
These considerations make it clear that the evaluation of a region in the weight space in
which the automaton rules are valid is very important. A parametric weight representation
of this space is quite difficult to achieve. Although more restrictive, a spherical subset of
this space can be easily determined by changing equations (12). The basic idea relies on
the computation of the distance d_{i,r} between the weight solution W̃_i and the hyperplanes
H_{i,r}(W_i) = 0. This distance can be written as:
d_{i,r} = (σ_{i,r} I_{i,r}(W̃_i) − I*_i) / ||c_r||,   (13)
where c_r is the vector of coefficients of the hyperplane H_{i,r}.
We can put together equations (12) and (13) to set up the following optimization problem,
which can still be solved in the framework of linear programming.
• By solving, for each neuron i, the following set of inequalities while maximizing ρ:
σ_{i,r} I_{i,r}(W̃_i) ≥ I*_i + ρ ||c_r||,  r ∈ R_i,   (14)
we obtain the optimal spherical regions in the weight space, having center coordinates
W̃_i and radius ρ.
The above described procedure is referred to as the K algorithm. The recurrent network
(1), with the weights belonging to those spheres, is actually a nondeterministic automaton
[19]. Once the weights are specified, this automaton becomes deterministic. In section
IV the determination of these weights will be proposed by using supervised learning.
Figure 2: a) Chain automaton of the example; b) neural implementation.
Example
Let us consider a very simple automaton with ordered states having a chain structure
(see fig. 2a). Basically, from each state only a transition to the next state is permitted.
The generic state S_i is coded so that the codes of two consecutive states have Hamming
distance one.
The neural realization can be based on a recurrent network composed of dynamic neurons
with w_ii > 2. It is worth mentioning that, because of the particular codification adopted,
the network of fig. 2b can be used, instead of a fully connected net. Equations (12) for
this case reduce to a small set of linear inequalities on the weights of each neuron.
An exclusive binary coding is chosen for the inputs and each neuron only receives one bit
of the input coding. We can determine the maximum sphere included in the weight space
by solving equations (14) in terms of W̃_i and ρ; the latching condition was imposed
by choosing w_ii larger than 2.
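The following Python sketch shows one way the maximum-sphere problem could be set up
with an off-the-shelf LP solver. The rule patterns, the value I*_i = 1, the weight bounds and
the bias column are all assumptions introduced for the example; only the max-margin
structure (maximize ρ subject to σ_r (c_r · W) ≥ I*_i + ρ ||c_r||) reflects the construction
sketched above.
import numpy as np
from scipy.optimize import linprog

def k_algorithm(patterns, sigmas, I_star=1.0, bound=10.0):
    # Variables are [w_1, ..., w_n, rho]; we minimise -rho, i.e. maximise the radius rho,
    # subject to sigma_r * (c_r . W) >= I_star + rho * ||c_r|| for every rule r.
    patterns = np.asarray(patterns, dtype=float)
    n_rules, n_weights = patterns.shape
    c = np.zeros(n_weights + 1)
    c[-1] = -1.0
    A_ub, b_ub = [], []
    for c_r, s in zip(patterns, sigmas):
        A_ub.append(np.append(-s * c_r, np.linalg.norm(c_r)))
        b_ub.append(-I_star)
    bounds = [(-bound, bound)] * n_weights + [(0.0, None)]
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return res.x[:n_weights], res.x[-1]

# One neuron fed by (previous neuron, its own input bit, a constant bias of 1):
# it must switch low-to-high only when both the previous neuron and the input bit are high.
patterns = [[1, 1, 1], [1, -1, 1], [-1, 1, 1], [-1, -1, 1]]
sigmas = [+1, -1, -1, -1]
W, rho = k_algorithm(patterns, sigmas)
print("weights:", W.round(2), "radius:", round(rho, 2))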
IV Integration of rules and learning
Figure 3: K-L network.
As shown in the previous section, the neural realization of the nondeterministic automaton
leads to a network whose weights belong to specific regions of the weight space. The choice
of a particular point in that space is associated with the modeling of symbol duration.
Moreover, we must remember that the explicit knowledge, defined by the automaton, is
based on the input quantization. If the information is conveyed by a continuous signal,
then the quantization just represents an approximated view of the original problem. We
can model the duration by using a supervised learning scheme, based on presentation
of examples. That learning scheme can also prove useful for dealing with the continuous
nature of the input information. In many problems, however, the priori-knowledge injected
into the network connections may limit the possibility of learning new rules not specified in
the explicit model. For this reason, we propose the K-L (priori-Knowledge and Learning)
architecture shown in fig. 3, which is based on two cooperating subnets, NK and NL ,
devoted to explicit and learned rule representation, respectively. A third subnet NO takes
as input a subset of NK and NL neurons and provides the external output. In the simplest
case NO consists of just a single output neuron (see for example section V).
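A toy sketch of how the two subnets could be composed is given below. The paper does not
spell out the exact wiring between NK, NL and NO, so the choice of letting NL see the
external inputs and NK's outputs, and of letting NO read all NK and NL neurons, is purely
an assumption for illustration.
import numpy as np

def sigmoid(a):
    return np.tanh(a / 2.0)

class KLNet:
    # Wk: (nk, nk+nu), Wl: (nl, nl+nk+nu), Wo: (1, nk+nl) -- shapes assumed for this sketch.
    def __init__(self, Wk, Wl, Wo):
        self.Wk, self.Wl, self.Wo = Wk, Wl, Wo
        self.xk = np.zeros(Wk.shape[0])
        self.xl = np.zeros(Wl.shape[0])

    def step(self, u):
        xk_new = sigmoid(self.Wk @ np.concatenate([self.xk, u]))           # rule-injected subnet NK
        xl_new = sigmoid(self.Wl @ np.concatenate([self.xl, self.xk, u]))  # freely trained subnet NL
        self.xk, self.xl = xk_new, xl_new
        return sigmoid(self.Wo @ np.concatenate([self.xk, self.xl]))       # output subnet NO

nk, nl, nu = 3, 2, 4
rng = np.random.default_rng(0)
net = KLNet(rng.normal(size=(nk, nk + nu)),
            rng.normal(size=(nl, nl + nk + nu)),
            rng.normal(size=(1, nk + nl)))
print(net.step(np.ones(nu)))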
The weights of the first subnet NK are quickly initialized thanks to the method shown
in section III, which permits to begin learning from a configuration that already represents
the problem explicit knowledge. Learning the uncertain information is mainly
accomplished by the second subnet of the K-L architecture. It is a full-connected recurrent
network, randomly initialized, which has the task of discovering hidden rules.
The weights optimization is carried out by means of a modified version of Pearlmutter's
learning algorithm [16], adapted for discrete time. A formal definition of the procedure
may be found, for example, in [23]. The algorithm has to discover a solution which
optimizes the cost function:
E = Σ_{t=1}^{T} Σ_{i ∈ NO} σ_i(t) (x_i(t) − d_i(t))²,
where d_i(t) is the supervision target, the flag σ_i(t) = 1 means that a supervision request takes place on neuron i at time
t, and T is the length of the input sequence. Ordinary gradient descent is accomplished in
order to optimize all the weights. A relevant difference is that NK weights are constrained
in the spherical region described in section III, which guarantees that the automaton rules
are not destroyed.
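One simple way to keep the NK weights inside the sphere during gradient descent is to
project them back after each update; the paper only states that those weights are
constrained to the region, so the projection step below is an assumed implementation, not
necessarily the one used by the authors.
import numpy as np

def project_onto_sphere(W, center, radius):
    # Return the closest point to W inside the sphere found by the K algorithm.
    delta = W - center
    norm = np.linalg.norm(delta)
    return W if norm <= radius else center + delta * (radius / norm)

def constrained_step(W_K, W_L, grad_K, grad_L, center, radius, lr=0.05):
    # NL weights move freely; NK weights are pulled back into the admissible sphere.
    return project_onto_sphere(W_K - lr * grad_K, center, radius), W_L - lr * grad_L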
In the proposed model, learning by example is essentially conceived as a refinement
process and it is relieved from the problem of discovering complex deterministic rules. In
these cases the only use of learning by example paradigm is likely to fail because of the
presence of local minima. These failures can be understood in the framework of complexity
theory, where, at least for feedforward nets, it has been proven that the loading problem
is NP-complete [12].
A particular class of automata, and consequently of recurrent nets, is of interest for
the application we are going to propose. In the following these nets are referred to as
chain-like nets. In the next section we discuss of chain-like nets bearing in mind their
application to speech recognition.
V Applications to automatic speech recognition
In order to validate our theoretical hypotheses and to better understand the proposed
model, we carried out several preliminary experiments of automatic speech recognition.
One of our primary goals, for applications in this area, is that of demonstrating the
capability of the proposed model to deal with isolated word recognition (IWR) in large
lexicons. So far, many attempts to build neural-based classifiers for (IWR) have assumed
"small" lexicons (see e.g.: [4, 13, 17]). Neural classifiers have succeeded in problems of
acoustic feature extraction, but have not exhibited significant results for applications to
large lexicons. Basically, this is due to the intrinsic limitations of all the methods which
only rely on learning by examples. Although some solutions have been proposed for
building modular architectures [21, 13], the scaling up to large lexicons appears to be
a very serious problem. In order to overcome these difficulties, we propose to model each
word of a given dictionary with a K-L net. Each one must detect the word for which it
is built, and reject all the other words of the dictionary. During the recognition phase,
any word to be recognized is presented at all the nets. Simple decision criteria, such as
choosing the highest output value, can be used for performing word prediction.
An Experiment of isolated word recognition
Henceforth, we propose an experiment for discriminating 10 Italian words only composed
of vowels and nasals. In order to accomplish this task, we selected a hierarchical network
architecture in which a first net N_P was devoted to perform phoneme hypotheses, while
other nets NW_i, fed by N_P's outputs, are used for modeling the words, as
indicated in section IV. A detailed description of the phonetic network N_P can be found
in [9].
Figure 4: Automaton devoted to detect the Italian word /Numa/. The table shows the
codification of the automaton states.
Each net NW i
was devoted to detect a word of the dictionary and the highest network
output criterion was used to perform word prediction. Fig. 5b shows the particular net
chosen for modeling the Italian word /Numa/. The subnet for a priori rule representation
had a chain-like structure. It was conceived for representing the automaton of Fig. 4.
For each state, subsequent occurrences of the same phonetic symbol do not produce state
transitions. The automaton is capable of dealing with phoneme skips. It behaves like
a string parser, whose final accepted states are reached only if the right phoneme string
is applied. Basically, automata of this kind solve directly the problem of insertions and
deletions of phonetic symbols, which is very important in IWR.
In practice, we want K-L nets to consider state transitions only after 2 or 3 speech
frames in order to avoid several noisy predictions of phoneme net N P (see fig. 5a and 6a).
This feature is particularly useful to decrease the cross-talk from the other words. Moreover
it should not be forgotten that nets NW i
have to deal with analog values representing
the evidence for a given phoneme.
A full-connected network composed of two neurons was adopted as net NL . We investigated
the effect of learning, particularly on NL 's neurons. As we expected from theoretical
considerations, rules which were not included in the NK automaton net were automatically
learned.
Figure 5: a) Phoneme outputs for the word /Numa/; b) the network which models the
word /Numa/ when fed with the word /Numa/. Both the input level and the activation
of the neurons are proportional to the gray level (black: high value).
Figure 6: a) Phoneme outputs for the word /inumano/; b) the network which models the
word /Numa/ when fed with the word /inumano/.
For example, this fact can be clearly understood by inspecting the behavior of
the network associated with the word /Numa/ when the words /Numa/ and /inumano/
are presented at its input, respectively. It is worth mentioning that no discrimination between
these words is possible if we consider only the first subnet, since /inumano/contains
/Numa/ as "sub-string". This fact comes out also by inspecting network's state of the
first subnet. However, learning by example makes it possible to develop an internal rep-
resentation, for neurons of the subnet NL , which permits the discrimination of these two
words. A quick glance to fig. 5b and fig. 6b suggests that the word discrimination is
gained thanks to the different information coding created in NL 's neurons. This is an
explicit example that shows how new rules can be discovered which were not included in
the net NK initialized with priori-knowledge. Obviously in this case the discrimination
between these words could be attained also directly by using more complex automata
injected in NK , since the difference is quite explicit. In practice we want the learning
process to develop rules which do not appear explicit or which are affected heavily by
uncertainty.
A preliminary speaker independent small test based on 284 words was performed. The
maximum output decision criterion was adopted. We found a recognition rate as high as
92.3 % [7]. The task is not simple, since the words considered are only composed of vowels
and nasals. We notice that although the dictionary is small (only 10 words), the model
proposed is likely to scale up much better than others suggested in the literature [4, 13, 17].
This is mainly due to subnet NK, which only accepts acoustic strings corresponding to the
word that it models. It is worth mentioning that if only a learning by example approach
is used for modeling each word, no guarantee at all can be provided to ensure that a given
word net does not react to other words.
VI Conclusions
In this paper we propose a novel method for integrating "priori-knowledge" with learning
by example in recurrent networks. We show that the behavior of nondeterministic automata
can be injected into the network's connections. This behavior can be very difficult
to learn by using only the learning by example approach, because of the presence of local
minima. In the proposed model these optimization procedures need not discover the solution
beginning from tabula rasa, but must rather produce a refinement, or find out
some additional regularities which were not captured by the explicit rules. The preliminary
applications to problems of automatic speech recognition are very promising. Most
importantly, unlike many proposals for IWR with neural nets, the proposed model scales
up very well when increasing the lexicon dimension. Finally, it is worth mentioning that,
although mainly conceived for speech recognition and understanding tasks, this model
can turn out to be useful for other applications as well.
--R
"Approximation of Boolean Functions by Sigmoidal Networks: Part I: XOR and other Two-Variable Functions"
"Speech Pattern Discrimination and Multi-Layered Perceptrons"
"Finite State Automata and Simple Recurrent Networks"
"On the Use of Neural Networks for Speaker Independent Isolated Word Recognition"
"Finding Structure in Time"
"Learning the hidden structure of the speech"
"An Unified Approach for Integrating Explicit Knowledge and Learning by Example in Recurrent Networks"
"Local Feedback Multi-Layered Networks"
"Recurrent Networks for Continuous Speech Recognition"
"BPS: A Learning Algorithm for Capturing the Dynamical Nature of Speech"
"On the Problem of Local Minima in BackPropagation"
Neural Network Design and the Complexity of Learning
"Design of Hierarchical Perceptron Structures and their Application to the Task of Isolated Word Recognition"
Stability of Motion
"A logical calculus of the ideas immanent in nervous activity"
"Learning State Space Trajectories in Recurrent Neural Net- works"
"The Multi-Layer Perceptron as a Tool for Speech Pattern Processing Research"
"Learning internal representation by error propagation"
Formal Languages and Their Relation to Automata
"Phoneme Recognition Using Time-Delay Neural Networks"
"Modularity in Neural Networks for Speech Recognition"
"A Learning Algorithm for Continually Running Fully Recurrent Networks"
"An Efficient Gradient-Based Algorithm for On-Line Training of Recurrent Networks Trajectories"
--TR
--CTR
Ben Choi, Applying Learning by Examples for Digital Design Automation, Applied Intelligence, v.16 n.3, p.205-221, May-June 2002
Christian W. Omlin , C. Lee Giles, Rule Revision With Recurrent Neural Networks, IEEE Transactions on Knowledge and Data Engineering, v.8 n.1, p.183-188, February 1996
Barbara Hammer , Peter Tio, Recurrent neural networks with small weights implement definite memory machines, Neural Computation, v.15 n.8, p.1897-1929, August
Pasquale Foggia , Roberto Genna , Mario Vento, Symbolic vs. Connectionist Learning: An Experimental Comparison in a Structured Domain, IEEE Transactions on Knowledge and Data Engineering, v.13 n.2, p.176-195, March 2001
Steve Lawrence , C. Lee Giles , Sandiway Fong, Natural Language Grammatical Inference with Recurrent Neural Networks, IEEE Transactions on Knowledge and Data Engineering, v.12 n.1, p.126-140, January 2000
Stefan C. Kremer, Spatiotemporal Connectionist Networks: A Taxonomy and Review, Neural Computation, v.13 n.2, p.249-306, February 2001
Michael Berthold , David J. Hand, References, Intelligent data analysis, Springer-Verlag New York, Inc., New York, NY, | automatic speech recognition;recurrent neural networks;learning automata |
627680 | Implementing Temporal Integrity Constraints Using an Active DBMS. | AbstractThe paper proposes a general architecture for implementing temporal integrity constraints by compiling them into a set of active DBMS rules. The modularity of the design allows easy adaptation to different environments. Both differences in the specification languages and in the target rule systems can be easily accommodated. The advantages of this architecture are demonstrated on a particular temporal constraint compiler. This compiler allows automatic translation of integrity constraints formulated in Past Temporal Logic into rules of an active DBMS (in the current version of the compiler two active DBMS are supported: Starburst and INGRES). During the compilation the set of constraints is checked for the safe evaluation property. The result is a set of SQL statements that includes all the necessary rules needed for enforcing the original constraints. The rules are optimized to reduce the space overhead introduced by the integrity checking mechanism. There is no need for an additional runtime constraint monitor. When the rules are activated, all updates to the database that violate any of the constraints are automatically rejected (i.e., the corresponding transaction is aborted). In addition to straightforward implementation, this approach offers a clean separation of application programs and the integrity checking code. | Introduction
INCE the introduction of databases, the notions of data
consistency and integrity constraints have been playing
an important role in the database application design
process. Integrity constraints can usually be divided into
two categories: static (referring to a static snapshot of the
database) and temporal (referring to a sequence of snap-
shots, ordered in time). Temporal constraints allow imposing
restrictions on the transactions over the database
like: "salary of an employee cannot decrease", or "once a
student drops out, she should not be readmitted".
We propose a general architecture for a temporal integrity
constraint compiler based on compilation of temporal
specification (i.e., a set of temporal constraints) into
a set of First Order Logic (FOL) definitions. FOL serves
here as a very convenient intermediate language. The FOL
definitions are then converted into a set of active rules that
enforce the specified constraints without a need for an additional
run-time constraint monitor. This arrangement
allows easy modification of the system to incorporate different
query/constraint languages in a uniform way. In
addition, we show how the architecture can accommodate
a number of different optimization techniques. Our im-
Manuscript received Oct. 3, 1994.
J. Chomicki and D. Toman are currently with the Department of
Computing and Information Science, Kansas State University, Man-
hattan, KS 66506. E-mail: fchomicki,[email protected].
IEEECS Log Number K95048
plementation instantiates the general architecture by fixing
the constraint language to be Past Temporal Logic
(PastTL). The current implementation produces code for
Starburst [6] and INGRES [20] active DBMS.
The implementation allows the user to specify the constraints
declaratively instead of embedding the integrity
checks in application programs. The advantages of declarative
specification are clear: the designer can concentrate
on what constraints should be enforced instead of how to
enforce them. It leads also to much more compact and understandable
application programs. Moreover, the application
programs and the integrity constraints specification
form independent modules. This allows building modular
applications, where one (or more) modules specifies the integrity
constraints.
We pursue further the approach taken in [1], [3] where
PastTL was proposed as a language for specifying temporal
integrity constraints. For PastTL formulas, the truth of a
formula in the state n depends only on the finite history
of the temporal database (i.e., the past at
time n). Our approach detects violations of the constraints,
namely the situations where all the constraints are true in
the state n \Gamma 1 but not true in the state n.
We also develop space-optimization techniques for the
proposed architecture. The optimizations are introduced
at two different levels: the first optimization deals with
the specification language (PastTL), the second with the
intermediate language (FOL). When performing the optimizations
we need to keep in mind that
ffl we are dealing with temporal constraints. This means
that the optimizations need to explicitly handle the
progression of time.
ffl the final goal of the compilation is to produce a set
of active rules. In particular this means that all the
formulas have to be converted to an appropriate DML
(e.g., SQL). The optimization techniques should preserve
convertibility to the chosen language (cf., sections
V and VI).
So far, the work on integrity constraints has been mainly
focused on efficient detection of constraint violations. General
integrity constraints are included in [5] but difficult to
enforce efficiently, thus general-purpose integrity enforcement
subsystems are currently present only in a few experimental
database systems. Commercial DBMS's can usually
enforce only the simplest constraints, e.g., constraints
on primary and foreign keys [5].
Static constraints have been studied in many papers, e.g.,
[9]. They can be usually formulated in FOL. For dealing
with temporal constraints the choice of Temporal Logic
seems to be a natural solution. In the implementation we
IEEE TRANSACTIONS ON KNOWLEDGE AND
Temporal
Constraints
RALG2SQL
Rule
Generator
Transformation
Algebraic
Ordering Information
FOL2SQL
Magic Set
Transformation
Rules
Active
Fig. 1. Structure of the system
restricted the language to the Past fragment of the Temporal
Logic (Temporal Logic with temporal operators referring
solely to the past); the constraint checking then can
be done by using a space-efficient encoding of the database
history [1].
Utilization of active rules has an important advantage
compared with other methods of integrity enforcement:
there is no need for a standalone (temporal) integrity mon-
itor. The use of a separate monitor would not improve efficiency
as it would have to evaluate essentially the same set
of queries against the database as the rules do. However,
it would have to duplicate the transition datastructures
maintained already by the active DBMS.
There were several other recent proposals of general constraint
management subsystems, especially in [7], [10], [13].
The first paper [7] develops an SQL-based constraint specification
language and then shows several techniques of converting
such specification into triggers in the Starburst sys-
tem. But the system does not allow fully automatic translation
of logic formulas to Starburst rules. Also, only static
constraints are covered in this approach. The second approach
[10] is closer to our work. Temporal Logic is chosen
as the constraint specification language, but comparing to
our language, the future fragment of Temporal Logic is used.
In that approach quantifiers in logic formulas can have only
a very restricted pattern. Also, checking the formulas for
the safe evaluation property is solely the user responsibility
whereas our method accepts arbitrary PastTL formulas as
long as they can be safely converted to Relational Alge-
bra; unsafe formulas are rejected by the system. Another
approach can be found in [13]. The temporal language
chosen in that approach uses nonstandard freeze quantifiers
instead of first-order ones. The expressive power of
the constraint language depends on the underlying query
language. For a detailed comparison of our approach with
related work see [3].
The paper is organized as follows. In section II we describe
the overall structure of the system. In section III
we introduce the syntax and semantics of the specification
language. In section IV the transformation of the constraints
to the rule language of an active DBMS is shown.
The general schema for generating active rules is instantiated
for use with Starburst and INGRES active DBMS.
The suitability of the respective rule languages is briefly
discussed. The compiler uses an FOL to SQL translator
(described in section V) as one of its steps. We use a
modification of the approach in [4]. Section VI develops
space-saving optimizations to minimize the overhead connected
with the constraint enforcement mechanism. The
paper is concluded with a discussion of the possibilities of
future extensions of our system.
II. System Architecture
The general architecture consists of the following basic
building blocks [19]:
ffl Temporal Constraint language to FOL compiler
ffl FOL to DML compiler
ffl Rule generator.
Our system extends this architecture by providing a number
of optimization modules. The overall structure of the
system is shown in Figure 1:
ffl In the first step an algebraic transformation of the original
formula is performed. Unless additional
information is supplied, only conservative transformations
are performed.
ffl The TL2FOL module converts the PastTL formulas to
a set of FOL formulas and also produces information
needed for translating the FOL formulas to active rules
(partial ordering information).
ffl A variant of the Magic Set transformation [18] is applied
to each of the FOL formulas produced by the
TL2FOL pass. This transformation may also modify
the ordering information (i.e., some incomparable elements
may become comparable).
ffl The FOL2SQL module is responsible for converting
FOL to relational algebra and SQL. It consists of two
submodules: FOL2RALG which converts a FOL formula
to a Relational Algebra Normal Form (RANF)
[4] expression. This expression is then combined
with the magic conditions produced by the Magic Set
Transformation that were also converted to Relational
Algebra Normal Form using a modified FOL2RALG
algorithm (cf. Algorithm 36). The output of these
two modules is then put back together. In the end
the RALG2SQL module generates the final SQL statements
ffl The Rule Generator module combines the information
provided by the TL2FOL module with the SQL statements
produced by the FOL2SQL module, and creates
the active rules.
This arrangement allows easy modification in the future:
we can easily adopt a different rule system (by changing the
rule generator-this feature has been demonstrated by the
implementation that supports two conceptually different
rule systems), different DML (by changing the RALG2SQL
subsystem and possibly the rule generator), or a different
specification language. Also introduction of additional optimization
techniques can be easily embedded in the current
system: FOL serves here as a convenient intermediate
language.
III. Specification Language
This section gives a brief overview of the syntax and
semantics of the PastTL language used in our system. The
standard notation for temporal formulas is used here; for
full description of temporal logic see [8], [15].
The PastTL language is defined
as the smallest set of formulas built using the following rules:
Atoms R(x_1, ..., x_k), where R is one of the relations in the database
and x_1, ..., x_k are constants or variables.
x ρ y, where ρ ∈ {=, <, ≤, ≠, ≥, >}, and x, y are variables
or constants.
A ∧ B, A ∨ B, ¬A, ∃x.A, A since B, 5A, where A, B
are PastTL formulas.
(Semantics) The truth value of a closed
formula is defined with respect to an underlying
history of the database D_0, D_1, ..., D_n, ... (D_i being
the state of the database at time i) as follows:
Atoms, comparisons, A ∧ B, A ∨ B, ¬A, and ∃x.A are interpreted
as standard FOL formulas in the appropriate
state D_i.
5A is true in D_i if i > 0 and A is true in D_{i-1}.
A since B is true in D_i if B is true in D_k for some
k ≤ i and, for all j such that k < j ≤ i, A is true in D_j.
The constraints specified as closed formulas of PastTL must
be satisfied in every state D i of the history of the database.
In the following text we use the following standard ab-
breviations: ∀x.A for ¬∃x.¬A, 3A for (true since A), and
1A for ¬3¬A.
Example 3: Using PastTL the constraint "salary of an
employee cannot decrease" can be expressed as a closed formula c_1,
and the constraint "once a student drops out, she should not
be readmitted" as c_2 = ¬∃x.(admitted(x) ∧ 3 dropout(x)).
The truth of PastTL constraints is determined with respect
to the history of the database: the constraint c 2 is true at
the current moment (now) if there is no element x, such
that x is in the relation admitted now (i.e., in the current
state of the database) and also x was in the relation dropout
in the past (i.e., in some of the past states of the database).
The constraint specification consists of a definition of the
database schema (including the types of all attributes) followed
by a list of constraints. Each constraint is specified
by an identifier (name) followed by a closed PastTL formula
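To make the semantics above concrete, here is a small hypothetical Python evaluator for
ground (variable-free) PastTL formulas over a finite history; the tuple-based formula
encoding and the example data are assumptions introduced only for illustration.
def holds(formula, history, i):
    # Truth value of a ground PastTL formula in state D_i of history = [D_0, ..., D_n].
    op = formula[0]
    if op == "true":
        return True
    if op == "fact":                      # ("fact", ("dropout", "ann"))
        return formula[1] in history[i]
    if op == "not":
        return not holds(formula[1], history, i)
    if op == "and":
        return holds(formula[1], history, i) and holds(formula[2], history, i)
    if op == "prev":                      # 5A: i > 0 and A true in D_{i-1}
        return i > 0 and holds(formula[1], history, i - 1)
    if op == "since":                     # A since B
        a, b = formula[1], formula[2]
        for k in range(i, -1, -1):
            if holds(b, history, k):
                return all(holds(a, history, j) for j in range(k + 1, i + 1))
        return False
    raise ValueError("unknown operator " + op)

history = [{("dropout", "ann")}, set(), {("admitted", "ann")}]
once_dropout = ("since", ("true",), ("fact", ("dropout", "ann")))   # 3 dropout(ann)
violation = ("and", ("fact", ("admitted", "ann")), once_dropout)
print(holds(violation, history, 2))   # True: constraint c_2 is violated in state D_2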
IV. Temporal Information Management
First we show how to convert a given PastTL formula to
a single FOL formula and a set of inductively defined auxiliary
atoms (these definitions are later converted to auxiliary
relations and active rules). In section VI we introduce
optimization techniques that help to cut down the (space)
overhead introduced in the TL2FOL pass of the compiler.
A. From PastTL to FOL
The naive approach to checking (past) temporal constraints
is to evaluate the formula with respect to the whole
history In practice this is not acceptable because
every past state of the database would have to be
stored. To avoid this problem we provide a space-efficient
encoding of the history [1], [3] using a number of auxiliary
atoms implemented as materialized views. To obtain the
atom definitions we convert every PastTL formula into a
set of FOL formulas.
Definition 4 (TL2FOL) Let F be a formula of PastTL.
1. Each temporal subformula ff is replaced by the auxiliary
atom r ff (x_1, ..., x_k), where x_1, ..., x_k are the free variables of ff (introduction of this
atom results in creating an auxiliary relation r ff in the target code);
the arity of r ff is equal to the number of free variables
in ff, and the type of each attribute is the same as the type
of the corresponding variable in ff.
2. The auxiliary atoms r ff for each temporal subformula ff
are defined by the following table:
ff            Auxiliary atom definition
5A            r^0_ff := false,   r^n_ff := A^{n-1}
A since B     r^0_ff := false,   r^n_ff := B^n ∨ (A^n ∧ r^{n-1}_ff)
The superscripts 0, n-1, and n denote the appropriate
(temporal) state of the database the formula will be evaluated
in.
3. The remaining formula (i.e., after the substitutions have
been made) defines the top-level FOL translation of the
original PastTL constraint.
The superscripts n and are symbolic references to
database states, not specific numbers. Note that always
only the last two consecutive states are referenced in this
framework-the definition is by induction on the the length
of the database history, but the evaluation is incremental.
Each transition to a new state simply computes the new
interpretation of the auxiliary atoms from the current in-
terpretation. In the rest of this paper we use ' r ff
to denote
the formula that defines the new interpretation of the auxiliary
atom r ff . The generated auxiliary atoms are ordered
by the following partial ordering.
Definition 5 (Ordering) r ff OE r fi if ff is subformula of fi.
The top-level constraint is always the top element in this
ordering.
Example 6: Using this approach the constraint "no employee
is hired after she left" is converted as follows: we get one top-level FOL formula
per PastTL constraint and one inductive definition of
an auxiliary atom per temporal subformula of the original
formula. In this example we have two temporal subformulas,
ff_1 and ff_2 (both with one free variable), ff_2 being a subformula of ff_1. The first order
translation then consists of the top-level constraint C together with the definitions of
r ff_1 and r ff_2, ordered as r ff_2 OE r ff_1 OE C.
Note that the auxiliary atom definitions can refer both
to the states n and (n \Gamma 1), while the top-level constraint
C refers only the state n. This allows us to check all the
top-level constraints in every state, in particular in the initial
(0-th) state of the database where all the auxiliary
atoms are false (i.e., the corresponding auxiliary relations
are empty) by definition.
At the end of this transformation we obtain one FOL formula
representing the top-level constraint, several formulas
defining inductively the truth value of the auxiliary atoms
r ff generated from the original PastTL formula, and an ordering
OE of the evaluation (i.e., rematerialization) of the
auxiliary relations r ff and the top-level constraint. This
information is used when the active rules are generated in
the next phase of the compilation process.
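The incremental evaluation can be pictured with a few lines of Python. The helper
functions below follow the two rows of the table in Definition 4 (plus the derived rule for
3A = true since A); the relation contents and the driving loop are assumptions made only to
illustrate how the auxiliary relation for constraint c_2 of Example 3 would evolve.
def step_prev(prev_A):
    # r^n for ff = 5A: the tuples that satisfied A in state n-1.
    return set(prev_A)

def step_since(cur_A, cur_B, r_prev):
    # r^n for ff = A since B: B^n together with the A^n tuples already in r^{n-1}.
    return set(cur_B) | (set(cur_A) & r_prev)

def step_once(cur_A, r_prev):
    # r^n for ff = 3A (= true since A): it only ever grows, so inserts suffice.
    return set(cur_A) | r_prev

history = [
    {"dropout": {"ann"}, "admitted": set()},
    {"dropout": set(),   "admitted": set()},
    {"dropout": set(),   "admitted": {"ann"}},
]
r_once_dropout = set()                    # r^0 := false (the auxiliary table starts empty)
for n, state in enumerate(history):
    r_once_dropout = step_once(state["dropout"], r_once_dropout)
    violated = bool(state["admitted"] & r_once_dropout)
    print(n, sorted(r_once_dropout), "violation" if violated else "ok")
Only the previous value of each auxiliary relation and the current database state are
needed at every step, which is exactly the bounded history encoding exploited by the rules
generated below.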
B. Maintenance of Auxiliary Relations
The conversion of a temporal constraint to FOL generates
a set of auxiliary atom definitions. These auxiliary
atoms are represented by auxiliary relations in the resulting
code. The contents of the auxiliary relations has to
be kept up to date in agreement with Definition 4. This is
achieved automatically using the active DBMS rule system.
For each specified constraint the SQL statement that represents
the top-level constraint, the list of SQL statements
defining the next state of each auxiliary relation (all these
are supplied one by one by the FOL2SQL module), and
the subformula ordering OE (defined above) are used as the
input to the rule generator. The Rule Generator composes
all the pieces into a set of active DBMS rules.
In the ideal case, the syntax of the used rules is as shown
in
Figure
2. In that case we can simply create the rules
from the SQL statements as follows. Let ⟨φ⟩ be the
translation of a FOL formula φ to SQL (including all optional
optimization steps); then for each constraint C we create
the rules as follows:
Top-level constraint C. There is one rule for each constraint
(it is the translation of the top-level FOL formula created
in the TL2FOL phase):
create rule C
when commit
if not ⟨C⟩
then rollback work
The rule is triggered when transaction attempts to commit
(i.e., before the actual committing). This is necessary
in the case of temporal constraints as even null transaction
can possibly violate the constraints (note that this cannot
happen, when only static constraints are used).
Example 7: Consider the constraint ¬∃x.(p(x) ∧ 5p(x)).
This constraint is violated when an arbitrary element x is
in the relation p in two consecutive states. Clearly, if p is
not empty, a null transaction will violate this constraint.
Auxiliary atoms. Let ' r ff
be the formula defining r n
ff for
each auxiliary atom r ff (the whole definition of r ff is r 0
ff :=
false, r n
We create an auxiliary relation (i.e., a
table) for each atom r ff :
create table r ff ( list-of-attributes );
This statement also defines the 0-th state-the table is
empty (corresponds to r 0 ff := false). The proper transition
to the next state of r ff is achieved automatically by the
following rule (here we use the second part of the inductive
definition of r ff , namely r n
create rule r ff
when commit
then update r ff by ! ' r ff
precedes C, fr fi jr ff OE r fi g
This rule is also triggered when a transaction tries to com-
mit. The execution of the rule updates the auxiliary relation
r ff . This guarantees the consistency of the auxiliary
tables. The proper order of triggering these rules is defined
by the subformula ordering OE. This is reflected in
the precedes clause of the rule's body. The rules corresponding
to all subformulas of ' have to be processed prior
to "s rule.
When evaluating an SQL statement need
to access all the tables referenced in '. Some of the atoms
may refer to the n\Gamma1-st state of the database. The previous
state of the database (i.e., the old contents of all the rela-
tions) is not stored in the database-this would require to
maintain two copies of almost the same data. Instead, transition
tables are used to restore the previous state of the
database. Note that the transition information (for relation
r) has to be maintained by the DBMS anyway in order to
allow aborting of a transaction. The access to this information
should be provided by the (system maintained) transition
tables inserted(r), deleted(r), old-updated(r), and
new-updated(r). The state n \Gamma 1 of a relation r is restored
create rule rule-name
[when triggering event]
[if SQL predicate]
then SQL action
[precedes rule-name, ...]
[follows rule-name, ...]
Fig. 2. Ideal Rule syntax.
During the commit the rule system checks the guard of the rule (when); if the guard is true, then evaluates the condition (if). If the condition
is satisfied it executes the actions specified in the (then) clause. The evaluation order of the rules is controlled with the precedes and follows
clauses. Only the then part of the rule is mandatory. Moreover, for all tables, specified in the when clause the system provides appropriate
transition tables inserted, deleted, old-updated, and new-updated [6].
as
r
Example 8: Now we can finish Example 6. The system
generates the following code: a rule for the top-level
constraint:
create rule C
when commit,
if not ⟨¬∃x.(r ff_1(x) ∧ ...)⟩
then rollback work
and two rules to maintain the auxiliary tables r ff_1
and r ff_2:
create rule ff_1
when commit,
then update r ff_1 by ⟨...⟩,
precedes C;
create rule ff_2
when commit,
then update r ff_2 by ⟨... emp ...⟩,
precedes ff_1;
C. Problems with SQL
The actual DML and rule languages usually do not allow
to express the rules needed to enforce the constraints di-
rectly. The implementors of the Rule Generator are faced
with two main obstacles:
ffl problems with the data manipulation language, and
ffl problems with the rule system used.
The overhead introduced by the restrictions of the DMLs
and the rule systems languages is summarized in Figure 3.
C.1 View Rematerialization
The new state of each auxiliary relation r ff is computed
by the SQL equivalent of a FOL formula ' r ff
. But we need
to replace the whole contents of the table representing r ff
by
(where r ff may be referenced in ' r ff
). This can
be done by an analysis of the views to be materialized. We
notice that the next state of each of the auxiliary relations
r ff is defined by one of the following formulas:
The first case can be solved by following SQL code:
delete from r ff ;
insert into r ff ! A
To apply this idea to the second case, we need to reformulate
the assignment statement (2) as the equivalent SQL
insert and delete operations as follows:
delete from r ff where X not in ! r
insert into r
where X is list of all attributes of r ff . It is easy to see, that
these two operations are equivalent to the original assign-
ment. The disadvantage of this solution is that we can not
compile the right side of the assignment as a single formula,
but we must produce two separate SQL statements.
Example 9: Continuing our example, the update operations
on the auxiliary tables r ff_1 and r ff_2
are replaced by
insert into r ff_1 ⟨...⟩;
and
delete from r ff_2;
insert into r ff_2 ⟨...⟩;
respectively. Note that the first update is simplified (by removing
the delete operation) due to the structure of the definition
generated from the subformula rooted by the connective
3.
C.2 Restrictions on Rule Languages
The syntax and semantics of active rule systems varies
greatly among active DBMS. Our implementation currently
supports two major approaches to rule systems:
Set-oriented rule systems. A representative of such a system
is the Starburst active DBMS. The rule language of
Starburst is very close to the ideal rule syntax (cf. Figure
2). However, the restrictions on the rule syntax makes
direct application of the rule templates from the previous
section not possible. Especially:
ffl References to the transition tables are allowed only
inside of the rule's body, and only the transition information
for the table associated with this particular
rule is available.
ffl Each rule is connected with exactly one table. This
means that the rule is triggered if and only if the associated
table is accessed and moreover, the triggering
is defined in the terms of net effect of a transaction
on the table-committing itself may not be able to
trigger any rule even if the associated table was accessed
(i.e., updated with empty net outcome). Here
we slightly compromise the claim that the constraints
Number of              Ideal      Starburst     INGRES
Rules                  C+T        C+4T+3B       C+4T+4B
Auxiliary Tables       T          3T+2B+1       3T+2B+1
Virtual Views          0          0             U
Database Procedures    0          0             C+3T+3B+1
C-number of constraints,
T-number of temporal connectives,
B-number of base tables, and
U-number of embedded disjunctions.
Fig. 3. Size of code generated for various systems.
enforcement mechanism is independent of the appli-
cation: to be on the safe side the application has to
update auxiliary commit table (introduced just for this
purpose) before the actual committing. This can be
avoided by allowing a rule being triggered just by attempt
to commit. Thus each rule must be associated
with the commit table. If no constraint can be violated
by a null transaction, then it is safe to associate the
rules with individual tables.
The workaround for these problems is as follows. For each
table T (including the auxiliary tables r ff ) we use two additional
auxiliary transition tables T add and T sub holding
the tuples inserted (deleted) from the table T respectively:
create table T add ( list-of-attributes );
create table T sub ( list-of-attributes );
and a view old T that allows access to the previous state
of the table T from all rules as follows
create view old T as (
select * from T
minus select * from T add
union select * from T sub );
Also we need to add rules that keep this information up
to date during the constraint checking phase. We use one
rule that extracts the transition information, and two rules
needed for cleanup of the auxiliary tables after the constraint
checking is finished. Evaluation of these rules is
synchronized with the remaining rules using the precedes
and follows clauses.
create rule old T on T
when (inserted,deleted,updated),
then (
insert into T sub (
select * from deleted()
union distinct
select * from old-updated() ),
insert into T add (
select * from inserted()
union distinct
select * from new-updated() ),
precedes (list of all rules using old T) );
create rule del add T on T add
when inserted,
then delete from T add, follows C;
create rule del sub T on T sub
when inserted,
then delete from T sub, follows C;
These rules and tables are used merely to extract the transition
information at the end of the transaction-during
the execution of the transaction the transition information
is managed by the DBMS itself (and this should be fixed by
simple changes to the syntax of the rule language). Note
also that the tables T add and T sub are cleared at the
end of the constraint checking phase, thus they are always
empty when a transaction ultimately commits. This allows
to store them in a temporary storage which may improve
the efficiency of the system. If any of the constraints is vi-
olated, the whole transaction is aborted and the transition
tables are emptied by the system automatically.
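The way the previous state of a relation is recovered from the transition information can
be summarized in one line of Python; the sample tuples below are of course only
illustrative.
def previous_state(current, t_add, t_sub):
    # old T = (T minus T_add) union T_sub, as in the old_T view above.
    return (current - t_add) | t_sub

r_new = {("ann", 100), ("bob", 250), ("carl", 300)}    # the relation after the transaction
t_add = {("bob", 250), ("carl", 300)}                  # inserted() union new-updated()
t_sub = {("bob", 200)}                                 # deleted() union old-updated()
print(sorted(previous_state(r_new, t_add, t_sub)))     # the relation as it was before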
Example 10: For the constraint from Example 6, similarly
to the ideal case, one rule is created to enforce the
top-level constraint:
create rule C on commit,
when inserted,
if not ⟨¬∃x.(r ff_1(x) ∧ ...)⟩
then rollback work
Two rules are created to maintain the auxiliary relations.
Note that all the rules are connected with an additional
auxiliary table commit. This is needed in the case of null
transactions (see Example 7).
create rule ff_1 on commit,
when inserted,
then insert into r ff_1 ⟨...⟩,
precedes C;
create rule ff_2 on commit,
when inserted,
then (
delete from r ff_2;
insert into r ff_2 ⟨...⟩ ),
precedes ff_1;
In addition the system generates rules that maintain the
contents of the auxiliary transition tables.
Tuple-oriented rule systems. In the case of tuple-oriented
systems like INGRES the access to the previous state of
the database relations is provided using the transition information
similarly to the Starburst case. However, the
transition information has to be maintained by additional
rules in auxiliary transition tables 1 . Also, Starburst's capability
to order individual rules has to be compiled into
an explicit sequence of INGRES statements. The code produced
by the rule generator looks as follows:
1. For every table T (both database and auxiliary rela-
tions) a pair of additional auxiliary transition tables for
This looks very similar to the T add and T sub tables introduced
in the case of Starburst rules. However, in the Starburst case these
tables are used just to overcome syntactic restrictions of the rule
system as the transition information is maintained by the DBMS
itself. In the case of INGRES, these tables are essential to mimick
Starburst's internal capabilities.
storing the transition information is needed:
create table T add ( attributes );
create table T sub ( attributes );
The contents of the tables are maintained using INGRES
rules triggered by insertion, deletion, and update operations
on the table T . These rules maintain the contents
of the transition relations for the duration of the whole
transaction. This effectively duplicates the transition information
managed by the DBMS itself for the case of aborted
transactions. However, assuming only small changes to the
base relations by every transaction, the size of the duplicated
information is not significant (when compared to the
size of the whole database).
In the INGRES rule language rules have to be connected
with a single database procedure, so the rules come in pairs
with the corresponding database procedures:
create procedure Tinserted( attributes ) =
insert into T add values ( attributes );
create procedure Tdeleted( attributes ) =
insert into T sub values ( attributes );
create rule Tins after insert on T
execute procedure Tinserted( new.attribs );
create rule Tdel after delete on T
execute procedure Tdeleted( old.attribs );
create rule Tupdateins after update on T
execute procedure Tinserted( new.attribs );
create rule Tupdatedel after update on T
execute procedure Tdeleted( old.attribs );
At the end of the constraint checking the transition tables
are cleaned up using the following rule:
create procedure Tcleanup =
delete from T add;
delete from T sub;
create rule Tdoclean after delete on commit
execute procedure Tcleanup;
2. The actual code produced by the translation of the
temporal constraint (i.e., all definitions of the auxiliary relations
together with the top-level constraint) is used in
the body of a single main rule (so we avoid the need for
synchronization of the rules):
create procedure C
declare n integer;
begin
update r ff n by
update r ff 1 by
select
from true where exists
create rule C go after insert on commit
execute procedure C main;
where r ff n ≺ · · · ≺ C. Thus the update operations
are executed in the body of the rule in the correct order.
3. All the rules generated from the constraints are synchronized
with the cleanup rules by the following master
procedure, which uses an additional auxiliary table commit:
create table commit (i integer);
create procedure docommit = begin
insert into commit values (1);
delete from commit;
commit;
else
rollback;
The application then has to use the execute docommit
statement in the place of the commit statement.
Example 11: The constraint from Example 6 produces the
following INGRES procedure/rule pair:
create procedure C
declare n integer;
begin
delete from r ff 2 ;
insert into r ff
insert into r ff 1
select
from true where exists
create rule C go after insert on commit
execute procedure C main;
Again, this code comes with all the rules that maintain the
contents of the auxiliary transition tables. Note that there
are no separate rules for the individual auxiliary relations
generated for the temporal subformulas of the original constraint.
V. Compilation of FOL formulas
The next issue that needs to be addressed is how to compile
the FOL formulas that specify the top-level constraint
and the auxiliary relations to Relational Algebra and eventually
to SQL. This is done in two steps: first the formula
is converted to Relational Algebra Normal Form and then
to SQL.
The conversion of FOL formulas to Relational Algebra
is based on the ideas presented in [4]. All the definitions
were carefully converted to functions that take advantage
of the structure of the formulas they are working on. This
reduces the number of passes needed to traverse the input
formula to two and leads to a more efficient (bottom-up)
execution.
The conversion from Relational Algebra to SQL usually
receives no attention at all. But what seems to be
an easy task turns out not to be straightforward because
of various restrictions in the SQL language. Many
workarounds had to be invented to get the system run-
ning. The presented version produces standard SQL (i.e.,
accepted by most commercial DBMSs); the Starburst variant
of the system can take advantage of the extensions
to SQL present in that system. The SQL/92 standard [5]
contains these extensions, but unfortunately most available
DBMSs (like INGRES) do not.
Fig. 4. Bottom-up computation of GEN and CON properties
A. First Order Logic to Relational Algebra Normal Form
The whole conversion consists of two phases: first the
FOL formula is checked for the safe evaluation property (Join
Anomaly Detection) and simultaneously simplified to a
normal form (ENF). All formulas that pass this phase are
guaranteed to have equivalent safe reformulation to Relational
Algebra. In the second step the simplified formula is
converted into Relational Algebra Normal Form (RANF).
This step removes all Join Anomalies.
A.1 Join Anomaly Detection
Let V be the set of variable
names, {g, c, f} the set of distinct tags (g
stands for GEN, c for CON, and f for not GEN and not
CON, using terminology from [4]), and L the set of (fi-
nite disjunctions of) atoms (base relations). We define sets
FV (A), FV 0 (A) ⊆ V × {g, c, f} × L of annotated free variables
for each FOL formula A by an inductive definition
whose rules, and the operations ⊔, ⊓ on the sets of (annotated) free
variables that they use, are given in Figure 4.
We use the following notation in the rest of the paper: an
underscore will be the abbreviation for a fresh existentially
quantified variable, e.g., ∃z.P (x, z) will
be abbreviated as P (x, _).
The decision if a given formula F is evaluable (i.e., it is
safe to convert the formula into an equivalent Relational
Algebra expression) is based on the FV set as follows:
Definition 13: A (FOL) formula F is called evaluable if
1. (x, g, G) ∈ FV (F ) for every variable x free in F .
2. (x, g, G) ∈ FV (A) or (x, c, G) ∈ FV (A) for all subformulas ∃x.A of F .
F is called allowed if the second condition
is replaced by (x, g, G) ∈ FV (A) for all subformulas ∃x.A
of F .
The evaluable property is the key for distinguishing formulas
that can be safely converted (see [4] for a more detailed
discussion). Conversion of formulas which do not meet this
criterion may lead to Join Anomalies and unsafe reformulations
of the original formula, and all such formulas must
be rejected.
Our specification language for temporal constraints is
PastTL; the evaluable property is extended to PastTL formulas
as follows:
Definition 14: A PastTL formula is evaluable if and only
if its FOL translation is evaluable (i.e., the top-level constraint
is evaluable and all the definitions of the auxiliary
views are evaluable as well).
A.2 Conversion to allowed and simplified formula
This step will be done simultaneously with the detection
of the evaluable property. When traversing the term representing
the given formula we will construct the sets FV and FV 0 ,
together with an equivalent simplified formula, defined
as follows.
Definition 15: A formula F is simplified if
1. Conjunctions are in a polyadic representation 2 .
2. Disjunctions are in a polyadic representation.
3. The : connective may occur only in the root of the
formula or inside a conjunction.
4. No disjunction is inside a negation.
5. No conjunction of only negative formulas is inside a
negation.
6. No subformulas of the form 9x:(A - B).
7. No subformulas of the form (9x:A) - B.
To meet the requirements of Definition 15 we will convert
all conjunctions and disjunctions in the given formula to
Finite nesting of a binary connective is replaced by a single
polyadic connective operating on a list of arguments.
the following polyadic representation 3 :
A 1 ∧ · · · ∧ A k ∧ ¬B 1 ∧ · · · ∧ ¬B m −→ ∧(⟨A 1 , . . . , A k ⟩; ⟨B 1 , . . . , B m ⟩)
A 1 ∨ · · · ∨ A k −→ ∨(⟨A 1 , . . . , A k ⟩)
Note that the separation of positive and negative conjuncts
is not required in Definition 15, but it is helpful when enforcing
the other requirements of this definition (especially
those dealing with negation).
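For illustration, the polyadic representation can be captured by a small set of term constructors. The following Python sketch uses names of our own choosing (Atom, And, Or, Not, Exists); it only shows the data structure, with the positive and negative conjuncts of a conjunction kept in separate lists as described above.
# Minimal term representation for simplified formulas (illustrative only).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Atom:
    name: str
    args: List[str]

@dataclass
class And:               # polyadic conjunction  /\(P; N)
    pos: List[object] = field(default_factory=list)   # positive conjuncts P
    neg: List[object] = field(default_factory=list)   # negated conjuncts N (stored un-negated)

@dataclass
class Or:                # polyadic disjunction  \/(L)
    items: List[object] = field(default_factory=list)

@dataclass
class Not:
    arg: object

@dataclass
class Exists:
    var: str
    body: object

# Example: A(x) and B(x) and not C(x)  ==>  And(pos=[A, B], neg=[C])
f = And(pos=[Atom("A", ["x"]), Atom("B", ["x"])], neg=[Atom("C", ["x"])])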
We use the following algorithm to traverse the input formula
and to determine if it satisfies the evaluable property
(and reject all formulas which do not). Simultaneously
we build an equivalent simplified formula starting from the
atoms and using rules for creating more complex formulas
(we will use a constructor for each FOL connective-
the constructors will be designed to preserve the simplified
property).
Algorithm 16: Let F be a FOL formula. traverse(F ) :=
cases(F ) of
atom A : ∧(⟨A⟩; ε)
¬A : N(traverse(A))
A ∧ B : A(traverse(A), traverse(B))
A ∨ B : O(traverse(A), traverse(B))
∃x.A : E(x, traverse(A))
The constructors N, A, and O are defined in Figure 5. The
definition of the constructor E is more complex:
Definition 17: Let A be a simplified formula. When
building the simplified formula corresponding to 9x:A we
first check if A is of the form ∨A i , and in this case we
perform the following transformation:
∃x.∨(A 1 , . . . , A n ) −→ O(E(x, A 1 ), . . . , E(x, A n ))
Otherwise we distinguish the following cases (depending on
the variable bound by the quantifier):
1. x is not free in A and we can drop
the quantifier.
2. A is not evaluable and we reject it.
3. (x; c; G) 2 FV (A): A is evaluable. We perform a
transformation to obtain an equivalent allowed for-
mula. The transformation was described in [4] and
is defined as follows:
9x:A \Gamma!
where G is the third component of the (x; c; G) element
in FV (A)-this is finite disjunction of atoms such that
denotes the sequence of
existential quantifiers binding all free variables of -G
but x, and R = A[G=false] (all atoms that occur in G
are replaced by false in A). Note that the right side of
the transformation
G and R have to be built using the N, A, O, and E
functions in order to preserve the simplified property.
3 In the following sections we will use following notation for lists:
ffl for empty list and A:B for concatenation of lists. We will identify
single elements with one element lists.
4. (x; is allowed and we simply add
the quantifier and remove x from FV (A) according
the rules for building the FV and FV 0 sets.
Note that besides transforming the formula to a simplified
formula we also compute the FV and FV 0 sets of annotated
variables as in Figure 4. This is done simultaneously
with the simplification process. The rules for computing
the FV and FV 0 sets match exactly the structure of the
formula. The FV set is used to detect the evaluable property
(see Definitions 17 and 19).
Lemma 18: Algorithm 16 converts the input formula F to
an equivalent simplified formula such that (x, g, G) or (x, c, G) ∈ FV (A) for all
subformulas ∃x.A of F (or rejects it if the formula is not
evaluable).
Proof: By induction on the structure of the input for-
mula. Since each of the constructors N, A, O, and E preserves
the simplified property of formulas, the constructed formula
is also simplified.
Algorithm 16 together with Definition 17 gives a recipe for
the conversion of a FOL formula to an equivalent simplified formula.
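As an illustration of this recipe, here is a rough Python sketch. The constructor functions below implement only the straightforward cases and are our simplification; they are not a faithful rendering of Figure 5 or of the full E transformation of Definition 17.
# Rough sketch of Algorithm 16: bottom-up construction of a simplified formula.
# Terms are tuples; only the easy constructor cases are shown (illustrative only).
def A_(s, t):            # conjunction constructor: merge polyadic conjunctions
    (_, p1, n1), (_, p2, n2) = _as_and(s), _as_and(t)
    return ("and", p1 + p2, n1 + n2)

def O_(s, t):            # disjunction constructor
    return ("or", _as_or(s) + _as_or(t))

def N_(s):               # negation kept inside a conjunction (Definition 15, item 3)
    return ("and", [], [s])

def E_(x, s):            # existential quantification, pushed into disjunctions
    if s[0] == "or":
        return ("or", [E_(x, d) for d in s[1]])
    return ("exists", x, s)

def _as_and(s):
    return s if s[0] == "and" else ("and", [s], [])

def _as_or(s):
    return s[1] if s[0] == "or" else [s]

def traverse(f):
    tag = f[0]
    if tag == "atom":
        return ("and", [f], [])
    if tag == "not":
        return N_(traverse(f[1]))
    if tag == "and":
        return A_(traverse(f[1]), traverse(f[2]))
    if tag == "or":
        return O_(traverse(f[1]), traverse(f[2]))
    if tag == "exists":
        return E_(f[1], traverse(f[2]))

# Example: exists x.(P(x) or Q(x)) becomes a disjunction of quantified conjunctions.
print(traverse(("exists", "x", ("or", ("atom", "P", ["x"]), ("atom", "Q", ["x"])))))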
In order to convert the simplified formula to Relational
Algebra we need to perform one more transformation.
Definition 19: A formula F has the allgen property (we
write allgen(F )) if (x, g, G) ∈ FV (F ) for every variable x free
in F .
This property is easy to check. Moreover
Theorem 20: Let F be a simplified formula built from
atoms using only N, A, O, and E constructors. If F has
the allgen property, then F is allowed.
Proof: This is easy to see: both conditions of the definition
are satisfied; moreover all variables bound by ∃ satisfy
the requirement of the second part of this definition (because
all ∃ quantifiers were constructed using E).
Using the above transformations, we have been able to separate
evaluable formulas (i.e., those formulas we will eventually be
able to convert to SQL) from non-evaluable formulas,
and to convert the evaluable ones to allowed formulas. The rules
that build the simplified formula guarantee that all free
variables in the obtained formula are range restricted.
A.3 Conversion to RANF
Before we can convert the formula to relational algebra
we need one more conversion: for the proper evaluation
of the formula we need to propagate the allgen property
(Definition 19) to the subformulas of the formula as follows
(remember that the evaluation in the Relational Algebra is
bottom-up).
Definition 21: A subformula A 0 of A is called generating
if it is not of the form :B. A simplified allowed formula
A is in RANF if all generating subformulas have the allgen
property.
Lemma 22: Let F be an allowed and simplified formula.
Then it can be converted to an equivalent formula F 0 in
RANF.
Proof: The proof gives inductive rules that are applied to
F recursively according to the top-level connective:
atom. In RANF by definition.
Fig. 5. Rules for bottom-up construction of a simplified formula.
The tables define rewrite rules that allow construction of more complex simplified formulas. The first line (column) defines the pattern(s)
(i.e., the left side of the rules) and the body of the table defines the results of the rewriting (see Algorithm 16).
9x:A. by definition of the E constructor
and all other variables free in FV (A) are also free in
FV (9x:A) and thus allgen(A) holds by our assumption.
-L. Let allgen(-L) hold. Let x be free in -L. Assume
there is L i 2 L such that (x;
definition of the FV set for
any formula T , contradiction. Thus allgen(-L) implies
We know allgen(-(P; N )). But this does not
guarantee that all members of P and N share this property.
We will use the following transformations to propagate the
allgen property to all elements of P and also all elements
of N as follows:
Positive conjuncts. By definition of the simplified formula
we know that the elements of P are either atoms or unions,
and by definition of FV the allgen property holds for each
atom. Let -(L) be an element of P such that allgen(-L)
does not hold for variables fx g. Because (by as-
sumption) the whole conjunction has the allgen property,
there must exist P 0 ae P such that
for all k. We can distribute P 0 into the -L as follows
(note that the distribution law guarantees equivalence
of the original and the resulting formulas):
where L is the offending union, P 00 :=
these formulas are constructed
using E, N, O, and A to preserve the simplified structure
of the formula). This transformation is repeated until all
elements of P have the allgen property 4 (note that we may
end with only one element in P ; if moreover N = ε, the
polyadic conjunction ∧(P ; ε) is converted to a single atom).
Negative conjuncts. Here each element N i of N represents
a subformula of the form ¬N i . To meet the RANF requirements
we need to enforce allgen(N i ). Using A ∧ ¬B ≡
4 Yes, this might be a problem. In the worst case we end with
exponential expansion of the formula. As in
A ∧ ¬(A ∧ B), we find for each N i a subset P 0 ⊆ P such that
may be empty). Then the
following transformation will propagate the allgen property
to all N i 's:
In each case we reduce the depth of the term by one. On
the subformulas we can apply the rules recursively.
Because the term representing the given simplified formula
has finite depth, the proof immediately gives us an algorithm
that correctly converts given simplified allowed formula
into a RANF formula.
B. Relational Algebra Normal Form to SQL
Now all the transformations of the original formula at
the level of logic are finished. In the following section the
conversion from RANF to SQL statements is described.
This transformation is done again in two steps. First the
logic formula is translated to Relational Algebra operations
and then the Relational Algebra term is rewritten to SQL.
B.1 Conversion to Relational Algebra
The conversion to RANF prepared everything for the
next step-the conversion of a logic formula to Relational
Algebra. This step will make no changes to the structure
of the formula. The only goal here is to convert:
1. nested existential quantifiers to projection (-),
2. disjunction into set union ([) 5 , and
3. conjunction into product (\Theta), selection (oe E ), and
(generalized) set difference (\Gamma). This will be the most
difficult part: we have also to convert the variable
bindings into set of equalities between the columns of
the cartesian product and constants.
Remember that we have been using the polyadic conjunction,
and that except for the root of the formula all negations are
hidden inside conjunctions. Thus we do not need any special
rule for converting negation.
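A sketch of this conversion step in Python follows. Relational algebra terms are just nested tuples here, and the selection condition built from variable bindings (the hard part mentioned in item 3) is left as a placeholder string, so this is only an outline of the shape of the translation.
# Illustrative skeleton of the simplified-formula -> Relational Algebra step.
# RA terms: ("proj", cols, t), ("union", ts), ("product", ts),
#           ("select", cond, t), ("minus", t1, t2), ("rel", name).
def to_ra(f):
    tag = f[0]
    if tag == "atom":                       # base relation
        return ("rel", f[1])
    if tag == "exists":                     # quantifier -> projection (drop the variable)
        return ("proj", ("all_but", f[1]), to_ra(f[2]))
    if tag == "or":                         # disjunction -> union (attribute order via proj)
        return ("union", [to_ra(d) for d in f[1]])
    if tag == "and":                        # conjunction -> product / selection / difference
        pos, neg = f[1], f[2]
        t = ("product", [to_ra(p) for p in pos])
        t = ("select", "equalities-from-variable-bindings", t)   # placeholder condition
        for n in neg:                       # negated conjuncts -> generalized difference
            t = ("minus", t, to_ra(n))
        return t
    raise ValueError(f"unexpected term {tag}")

print(to_ra(("exists", "x",
             ("and", [("atom", "P", ["x", "y"])], [("atom", "Q", ["x"])]))))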
5 We need to watch the attribute order in all subformulas of the
union. This is done by the projection operator (-) that is used for
permuting the attributes when necessary.
CHOMICKI AND TOMAN: IMPLEMENTING TEMPORAL INTEGRITY CONSTRAINTS USING AN ACTIVE DBMS 11
B.2 Relational Algebra to SQL
Now we are prepared to convert the relational algebra expressions
from the previous step into actual SQL queries.
Due to the limitations of the SQL language, this conversion is
not as straightforward as generally believed. We need to
carry out the following transformations:
1. Standard SQL (SQL/89) cannot handle nested occurrences
of negation and union inside SELECT clauses 6 . We
need to remove these subterms by assigning a (virtual) view
to each Relational Algebra subterm of the form
-(oe F (L[W=
where W :=
A i for each
where L; M; and N are finite lists of relations to be joined.
2. Closed FOL formulas also cannot be handled immediately
because SQL does not allow empty projection (or
0-ary relation(s)). The solution will use a single-attribute
auxiliary relation true(t) which will always contain one
tuple ( true ). This relation will be joined with the rest
of the FROM clause and its (only) attribute will be SELECTed,
e.g., select t from true where exists ( . . . ).
3. When using formulas of PastTL, we need to access the
current and the last states of the database. This information
is kept during all the conversions and all the atoms
are annotated by the state they need to be evaluated in.
Here we must convert this information to proper names of
tables and views (in our case T and old T ).
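The last two points amount to two small mappings, sketched below in Python; the helper names and the "current"/"previous" state labels are our own, and the SQL text is only schematic.
# Map a state-annotated atom to the table or view it should be read from,
# and wrap a closed formula with the auxiliary relation true(t) (sketch only).
def table_for(atom_name, state):
    # current state -> T, previous state -> old_T
    return atom_name if state == "current" else "old_" + atom_name

def closed_query(exists_subquery_sql):
    # SQL/89 has no 0-ary relations, so SELECT the single attribute of true(t).
    return f"select t from true where exists ( {exists_subquery_sql} )"

print(table_for("emp", "previous"))            # -> old_emp
print(closed_query("select * from old_emp"))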
VI. Optimization
The translation described so far produces a set of active
rules sufficient to enforce a given set of temporal con-
straints. However, the auxiliary relations introduced during
the translation of PastTL formula to FOL (and eventually
to active rules) increase the amount of data stored in
the database. In this section, general optimization techniques
that allow this overhead to be cut down are devel-
oped. Note that these optimizations can be used because
the auxiliary relations are not used for direct querying of
the database. Their contents are irrelevant for evaluating
standard first order queries (over the current state of the
database), but on the other hand the information stored
is not sufficient for answering (general) temporal queries.
Thus any restrictions on their contents or changes in their
definitions do not affect the database user (we can think
Also, we deal only with space optimization. The optimization
of the running time of the queries is left to the
query optimizer of the underlying DBMS.
6 Starburst and systems conforming to the SQL/92 standard can
handle nested queries directly.
We explore two methods of limiting the amount of data
stored in auxiliary relations:
Context-based optimization. We can limit the number of
tuples stored in an auxiliary relation by the analysis of
the context(s) the relation is used in. This technique
is similar to the Magic Set transformation [16], [17],
[18]. However, in the context of temporal formulas we
need to be more careful than in the timeless case.
The information passed sideways must agree with the
flow of time because we can't predict the future. The
advantage of this method is its universal applicability
(i.e., we don't need any statistical information about
the database).
Algebraic optimization. The other option is to use techniques
based on algebraic transformations. These can
be used before the PastTL formula is converted to a set
of FOL formulas (i.e., before the auxiliary relations are
introduced). In this case, laws that allow the
temporal connectives to be moved over the first order ones are used.
The transformations allow the places where
the auxiliary relations are introduced to be moved up and down in
the (parse tree of the) original PastTL formula. Careful
choice of the transformations can significantly reduce
the number of tuples stored in these relations.
Unfortunately, the choice often requires obtaining additional
information about the database (like average
sizes of the relations involved, average size of joins,
etc.).
When using the optimization techniques we need to keep
in mind that the resulting formulas need to be converted
to Relational Algebra (and later to SQL). Thus we need
to be careful not to introduce constructs not expressible in
SQL. In particular, all the formulas have to be recursion-free
and have safe reformulations [4].
A. Magic Set Transformation
The main idea of this transformation is based on the
following observation: the auxiliary relations r ff are used
only in finitely many known contexts (all of which can be
easily determined by traversing the original formula). The
analysis of the contexts allows the contents of the
auxiliary relations to be restricted to relevant tuples only. This technique
is very general: it is applicable to all auxiliary relations 7 in
an application (i.e., it is not limited to auxiliary relations
introduced by the TL2FOL conversion).
Note that the classical application of the Magic Set transformation
to a set of Horn rules [18] works in a completely
different setting: the magic sets are used to pass (partial)
information about the intended outcome of the bottom-up
evaluation of the rules. In our case the setting is different:
we do not have any information about the future states of
the temporal database (except for constant relations). But
we can show how to exploit the definitions of the auxiliary
relations to create restricting conditions in a similar
fashion.
Also we use a different mechanism for computing the
sideways information passing strategy (SIPS) than [17]
7 I.e., relations defined by a formula from other relations.
where the strategy is based on adornments of literals
(atoms). The adornments then provide conservative guidelines
for computing the SIPS so that the resulting rules are
evaluable (range-restricted). We take a more optimistic
approach for computing the SIPS: In the first step we compute
all the information that can be passed sideways in
the form of a formula (without being concerned about its
properties) and then we approximate the obtained formula
to regain the properties necessary for converting it to SQL.
This method also allows arbitrarily complex conditions
(e.g., conditions that constrain more than one vari-
able, like x < y) to be passed sideways.
Example 23: Let '(x) := x > 10 ∧ r ff (x) be a
formula with a reference to the auxiliary relation r ff . Then
clearly only those tuples in r ff that satisfy the condition
x > 10 affect the outcome of evaluation of '(x). Thus
(assuming r ff was used only in this single place) when r ff
is being rematerialized we can restrict the tuples stored to
those for which the condition x > 10 holds. Note that this
condition is time-invariant.
The restricting condition induced by a context in which r ff
is used is called a magic condition for r ff . In the previous
example the context was "x > 10 ∧ r ff (x)" and
the induced magic condition was x > 10. This condition
has to be computed for each context r ff is used in. The
overall magic condition for the auxiliary relation is then
determined as the disjunction (i.e., union) of all such conditions
Definition 24 (Magic Condition) Let ' be a formula and
/ be a subformula of '. Let / 0 be another formula such
that FV (/ 0 ) ⊆ FV (/) and '[/] ≡ '[/ ∧ / 0 ]. Then / 0 is
called a magic condition for / (in ').
To compute the magic conditions for each occurrence of
r ff the following algorithm is used. Note that we need to
determine the magic conditions for the leaves of the (tree
corresponding to the) formula; whereas the magic information
for the root of the formula can be passed to the
algorithm from the upper level (the magic condition for
the root of the top-level constraint is true).
Notation. Let r ff be an auxiliary relation and ' r ff
the formula
defining r ff . We denote m r ff the magic condition
for an (single) occurrence of r ff in some formula /. We
the magic condition restricting the contents
of r ff (i.e., the overall magic condition with respect to all
occurrences of r ff ).
The following algorithm collects all the information that
can be passed sideways to a particular leaf of a given for-
mula. Note that together with the formula the algorithm
takes another argument-the magic condition for the root
of this formula (i.e., if a restricting condition has already
been computed for the root of a given formula it can be
passed down towards the leaves of the formula).
Algorithm 25 (Magic) Let OE be a FOL formula.
case OE of
semantic equivalence of the formulas.
atom r ff m r ff := 9x
atom A skip
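Since Algorithm 25 is only partially legible above, the following Python sketch records one reading of it: the conjunctive context of each occurrence of an auxiliary atom is pushed down to that leaf and recorded as a (to-be-quantified) magic condition. The function name, the tuple encoding, and the omission of negation handling are our simplifications.
# Sketch of collecting a magic condition for every auxiliary leaf of a formula.
# A formula is a nested tuple; "aux" atoms name auxiliary relations r_alpha.
def magic(f, m, out):
    tag = f[0]
    if tag == "aux":                           # auxiliary atom: record its context
        out.setdefault(f[1], []).append(("exists_unshared_vars", m))
    elif tag == "atom":                        # ordinary atom: nothing to record
        pass
    elif tag == "and":                         # siblings of a conjunct join its context
        conjuncts = f[1]
        for i, c in enumerate(conjuncts):
            siblings = [d for j, d in enumerate(conjuncts) if j != i]
            magic(c, ("and", [m] + siblings), out)
    elif tag == "or":                          # a disjunct inherits only the outer context
        for d in f[1]:
            magic(d, m, out)
    elif tag == "exists":
        magic(f[2], m, out)

conds = {}
magic(("and", [("atom", "A", ["x"]), ("aux", "r_alpha", ["x"])]), ("true",), conds)
print(conds["r_alpha"])     # the context A(x) becomes the magic condition for r_alpha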
The correctness of this algorithm is proved using the following
lemma and theorem:
Lemma 26: Let B[p] be a formula having atom p as a
leaf. Let A be another formula. Then A ∧ B[p] ≡ A ∧ B[p ∧ A].
Proof: By induction on the structure of B.
Theorem 27: Let ' be a formula and m ' a magic condition
for '. Then magic('; m ' ) computes a magic condition
for all auxiliary relations in the leaves of the input formula.
Proof: It is sufficient to prove '[r ff ] ≡ '[r ff ∧ m r ff ] for each
occurrence of r ff in '. The proof is by induction on the structure
of ' and the corresponding structure of m r ff .
While computing the magic conditions for all contexts of
r ff we need to be careful. Algorithm 25 computes magic
condition for every leaf of the input formula. However:
• The information passed sideways has to agree with the
flow of time, i.e., no information can be passed from
the future.
• The information has to be relevant: the magic condition
has to actually restrict the contents of the auxiliary
relation.
• The transformation has to preserve the evaluable prop-
erty. This is necessary because the transformed formula
has to be converted to Relational Algebra and
eventually to SQL.
Thus we often need to replace the computed magic condition
by a weaker formula that satisfies the above restric-
tions. The following theorem shows the soundness of the
replacement:
Theorem 28: Let '[/] be a formula with subformula /.
Let /, / 0 , and / 00 be formulas with the same set of free
variables. Then
By induction on the structure of ' we prove
Now we use the assumption '[/] j '[/ 0 ] to
obtain the desired result.
Theorem 28 is applied to magic conditions as follows:
Corollary 29 (Approximation) Let ' be a formula and
/ be a subformula of '. Then
To find an appropriate weaker magic condition the following
definition is used:
Definition 30: Let ' be a FOL formula and let / be a
subformula of '. / is called positive (negative) in ' if it is
in the scope of an even (odd) number of negations in '. We
define the approximation of / in ' as
Approx(/) = true if / is positive in ',
Approx(/) = false if / is negative in '.
Note that any safe (i.e., time-independent) approximation
of / can be used here, as long as / implies Approx(/)
(respectively Approx(/) implies /) for / positive (negative) in '.
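A direct rendering of this polarity-based approximation in Python is sketched below; the tuple encoding of formulas and the helper name approx are our own.
# Replace a designated subformula by true/false according to its polarity (sketch).
def approx(f, target, positive=True):
    if f is target:
        return ("const", positive)          # true if positive, false if negative
    tag = f[0]
    if tag == "not":
        return ("not", approx(f[1], target, not positive))
    if tag in ("and", "or"):
        return (tag, [approx(c, target, positive) for c in f[1]])
    if tag == "exists":
        return (tag, f[1], approx(f[2], target, positive))
    return f                                # atoms and constants are left untouched

future_leaf = ("atom", "A", ["x"], "n")     # a leaf that refers to the future
phi = ("not", ("and", [future_leaf, ("atom", "B", ["x"], "n-1")]))
print(approx(phi, future_leaf))             # the leaf is negative in phi -> replaced by false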
Lemma 31: Let ' be a FOL formula and / a subformula
of '. Let / 1 and / 2 be two formulas (with the same set of
free variables as /) such that / 1 implies / 2 . Then
'[/=/ 1 ] implies '[/=/ 2 ] if / is positive in ', and
'[/=/ 2 ] implies '[/=/ 1 ] if / is negative in '.
Proof: By induction on the structure of ' (again, there
is only one occurrence of / in ').
Obviously, in all cases ' implies '[/= Approx(/)]. This fact,
together with Corollary 29, is used to modify the original
magic formula in such a way that it is evaluable, relevant,
and agrees with the flow of time:
Let r ff n be an occurrence of r ff in a formula
'. A magic condition m r ff for r ff agrees with the flow
of time if, for all leaves p m of m r ff , m ≤ n, where n and
m are the states of the database the respective atoms are
evaluated in.
This means that all leaves must be at least as old as the
occurrence of r ff . Unfortunately the agreement of sideways
information passing with the flow of time is not guaranteed.
Example 33: Let '(x) := A(x) n ∧ r ff n−1 (x). The magic
condition computed by Algorithm 25 for r ff n−1 (x) is A(x) n .
When r ff (x) is rematerialized (at time n − 1) the contents of
A(x) at time n is not known (a reference to the future).
Thus the magic condition needs to be safely approximated
in order to account for all possible values of A(x) at time
n. In this case the only safe approximation is true.
In the first step the agreement with the flow of time is
achieved:
Definition 34 (Flow of Time) Let m r ff be the magic
condition computed by Algorithm 25 for the definition of
r ff . Let Approx(m r ff ) be the condition obtained by replacing
all leaves in m r ff using the table in Figure 6 (rows: the
occurrences of r ff , columns: the leaves of m r ff ).
Note the difference between the first and second rows of
the table: not only references to the future are replaced by
their approximations but also the superscripts denoting the
time of evaluation of a particular leaf are modified to match
with the occurrence of r ff the magic condition is associated
with. Thus if the occurrence of r ff is in the state n − 1 (the
first row of the table) then the leaves labeled with n − 1
are current with respect to this occurrence of r ff (so the
label is changed to n) and the leaves labeled n are in the
future. Similarly, because of the sequential rematerialization of
the auxiliary relations we need to modify the leaves in the
case of ff ≺ fi (i.e., r fi is rematerialized after r ff and thus
the actual state of r fi has index n − 1).
Approx(m r ff ) is a magic condition that agrees with the flow of time with
respect to r ff (the proof is by induction on the structure of
m r ff ; it follows immediately from Lemma 31 and Corollary 29).
The order of rematerialization of the auxiliary relations
is important: no information can be passed from a new
state of an auxiliary relation that has not been updated yet.
Also, when two auxiliary relations are incomparable (in ≺)
we need to pick which of them is going to be rematerialized
first (i.e., which is smaller in ≺). The rule system of the
active database evaluates the rules sequentially. Thus we
can pick any linear ordering of the rematerialization of the
auxiliary relations as long as it contains ≺.
This solves the problem with the flow of time, but the obtained
magic condition may still not be relevant. Note that
the magic condition for r ff in ' produced by Algorithm 25
always has the form of an existentially quantified conjunction
of formulas / 1 , . . . , / k that are subformulas of
the original formula '. This leads to the following definition:
Definition 35: Let m r ff be the magic condition for r ff
computed by Algorithm 25. Let m r ff be of the form
are subformulas of '. A conjunct / i is
called relevant to r ff if
1. FV (/
2. FV (/ relevant to r ff .
Let R be the set of relevant ' i 's. We define
Again Relevant(m r ff ) is a magic condition for r ff ; we used
Lemma 31 and Corollary 29 to approximate subformulas
of the original magic condition.
Finally we have to make sure that the magic condition
can be evaluated (i.e., that it has the evaluable property).
The situation is very similar to detecting the evaluable
property for the constraints themselves. The only difference
arises when a non-evaluable (sub-)formula is detected:
in the former case the formula is rejected, but in the case
of magic conditions we can always pick a weaker condition
that is evaluable (note that the weakest condition, true,
is always evaluable).
Algorithm 36: Let Eval be the function defined by Algorithm
16, where the E constructor is redefined as follows:
Let A be a simplified formula (see Definition 15). If A is of
the form ∨A i , then:
∃x.∨(A 1 , . . . , A n ) −→ O(E(x, A 1 ), . . . , E(x, A n ))
Otherwise we distinguish the following cases (depending on
the variable bound by the quantifier):
1. x is not free in A and we can drop
the quantifier.
2. formula is not evaluable and we
replace it by Approx(A). Now
definition of Approx, and we can proceed as in case 1.
Note that if A is of the form -A i we can approximate
only those conjuncts that contain x among their free
variables.
3. (x; c; G) 2 FV (A): formula is evaluable, we perform
a transformation to obtain an allowed formula (this is
the same as in the original definition of E).
4. (x; allowed and we simply
add the quantifier and remove x from FV (A) according
the rules for building the FV and FV 0 sets.
Similarly to the cases of Approx and Relevant, we can show
that Eval(m r ff ) is a magic condition for r ff that is evaluable.
Fig. 6. Dependencies on the Flow of the Time.
Note that if the input formula agrees with the flow of
time then the resulting formula agrees with the flow of time
as well. Both the Relevant and the Eval transformations
obviously preserve this property.
After application of these three steps an evaluable magic
condition that agrees with the flow of time is obtained. The
process is repeated for each occurrence of the same auxiliary
relation. The individual magic conditions are then
glued together.
Considering all the occurrences of r ff , we need to distinguish
two cases in order to apply Algorithm 25:
1. r ff occurs as a leaf of a parent formula (i.e., in the
top-level constraint C or in ' r fi for ff ≺ fi).
2. r ff occurs as a leaf of its own definition.
Note that r ff cannot occur as a leaf of any ' r fi for fi ≺ ff
by Definition 5.
In the first case the solution is easy: for r ff being a leaf of
' r fi we just compute the (partial) magic condition using
Algorithm 25 applied to ' r fi (to C, with initial condition true,
in the case of the top-level constraint). The second case is
more intricate: when trying to compute the magic condition
for r ff it is not clear what condition M should be
used in magic(' r ff , M ), because the magic condition for r ff
has not been computed yet. Moreover, we cannot simply
use the magic conditions derived from the parent formulas
only, as illustrated by the following example:
Example 37: Assume the constraint ∃x.(A(x) ∧ ◇B(x)). The
first-order translation consists of the top-level constraint
and of the (only) auxiliary relation r, which is defined
by the translation of the temporal subformula ◇B(x)
(we have dropped the subscript ◇B(x) of r). Assume that
we try to restrict the contents of this auxiliary relation by
A(x). Let p be the restriction of r.
At the time of the rematerialization of p n (x), the time t of the copy of A used
can be at most n (so we don't look into the future). The
following table shows an example why this transformation
is not acceptable:
A fg
r
The result of constraint evaluation in state n is different for
r (true, the correct result) and for p (false, an incorrect result).
On the other hand, if A were a time-independent relation,
the transformation would be valid.
We can simply use the constant true for M , but the previous
example shows that a stronger condition can be used
in general (also, in our setting the choice of true would
produce true as the magic condition for this occurrence of
r ff ).
Definition 38 (Time-invariant Condition) Let Inv(') be
the formula ' where all time-dependent leaves 9 are replaced
by their approximations.
Lemma 39: Let r ff be an auxiliary relation. Let m 0 r ff
be
I is the set of all occurrences of r ff
in formulas ' r fi
for ff ≺ fi. Then magic(' r ff
computes a magic condition for r ff in ' r ff
time-invariant condition for r ff
with respect to all occurrences in parent formulas. Using
Lemma 26 we can prove the claim by induction on time
(the base case is trivial: r 0
ff := false by Definition 4. The
induction step follows directly from Lemma 26).
Example 40: Consider the constraint
that expresses that no employee has had a salary less than or equal
to $0. In this case the top-level constraint would be
and the auxiliary relation r would
be defined by
Clearly the condition y ≤ 0 can be pushed into the body
of r's definition.
When the magic conditions for all occurrences of an auxiliary
relation r ff have been defined, then the overall magic
condition is defined as follows.
Definition 41 (Overall Magic Condition) Let r ff be an
auxiliary relation defined by ' r ff
. Let I be the set of all occurrences
of r ff
. Then the overall magic condition for r ff is
defined as the disjunction of the magic conditions computed for
the individual occurrences in I. The overall condition
is a magic condition for every occurrence of
r ff . This follows from the observation that each individual condition implies their disjunction,
and from Corollary 29. Note that the evaluability of the overall
magic condition is guaranteed because we can distribute
9 A leaf of a formula (i.e., an atom) is time-dependent if its extension
can change with time.
In the current version there are at most two references to the same
auxiliary relation. But in general we can handle arbitrary number of
references.
Fig. 7. Algebraic Transformations (rewrite rules and the conditions under which they apply).
Ext(') denotes the set of all tuples that satisfy '.
the relation r ff into the disjunction m r ff . Thus for all variables
and no other free variables appear in this formula. A
successful conversion to SQL also requires that the magic
transformation produces a set of non-recursive view definitions
(this can be proved using the techniques presented in
[17]).
The magic conditions m r ff (for each ' r ff
are used just
before the auxiliary relations are rematerialized to restrict
their contents.
B. Algebraic Transformations
The Magic Set transformation reduces the amount of
data stored in auxiliary relations. Because the transformation
is applied after the original PastTL formula has been
converted to a set of first order formulas, it cannot influence
the choice of auxiliary relations introduced during
this conversion. In the simplest case (that was considered
in Section IV) the auxiliary relations are introduced in the
places of subformulas of the original formula rooted by a
temporal connective. This is not necessary: the original
formula can be transformed to an equivalent formula in
which the temporal connectives are in more suitable places.
Example 42: Let '(y) := ∃x.◇P (x, y). Then the auxiliary
relation r ◇P (x,y) has arity 2. On the other hand the
auxiliary relation r 0
induced by the temporal sub-formula
of the formula ' 0 (y) := ◇∃x.P (x, y) has arity only
1. Moreover it is clear that r 0 is a projection of r (thus the amount of
stored data is reduced). Clearly ' and ' 0 are equivalent.
The transformations are summarized in Figure 7.
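For instance, transformations of the following kind can be used to push temporal connectives above first-order ones and thereby shrink the induced auxiliary relations. These are standard equivalences of first-order past temporal logic given only as examples of the kind of rules Figure 7 contains; the first one is exactly the rewriting used in Example 42:
∃x.◇' ≡ ◇∃x.'
◇(' 1 ∨ ' 2 ) ≡ ◇' 1 ∨ ◇' 2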
VII. Future work
The generality and modular structure of the proposed
compiler architecture allows easy adaptation to different
environments. The modifications are usually confined to a
single module of the compiler.
Other Query Languages. We can introduce other constructs
in the constraint specification language to capture a
bigger class of temporal constraints (for example real-time
constraints [2]), repeated activities (periodic sets), etc.
New Optimization Techniques. So far we have considered
only space-saving optimization techniques. We can
introduce optimizations that speed up the evaluation of
the given constraints (note that space saving techniques
also help towards efficient execution because we deal with
smaller amounts of data). In this area we have several
options:
1. We can write specialized routines for query optimiza-
tion. The transformation to RANF is not unique and
optimization of this transformation may help to reduce
the size of the final formula (but note that we can't
avoid the exponential explosion in general).
2. We can optimize the process of rematerialization of
the auxiliary relations as shown in [11], [12]. This
pass can be easily added to the existing system.
3. The partial ordering of the rematerialization of the
auxiliary relations is linearized during the compilation.
Some linearizations may yield better magic optimization
than others. Similarly to the RANF case, we
can study the impact of different linearizations on the
resulting code.
Different Rule Systems. Despite the problems described in
Section IV-C, the Starburst rule system is extremely well
suited for our purposes: it provides set-oriented rules and
allows rule priorities to be specified. In addition, the rules are
triggered after each transaction. This defines the flow of
time in our model. All these features are utilized by our
implementation. We have also shown that the architecture
can utilize tuple-based rule systems [20]: essentially
we have to simulate the set-oriented system explicitly by
additional rules for each table (mentioned in the constraint
specification) that maintain the (also explicit) transition
tables.
Performance Analysis. Measuring the performance of a constraint
enforcement system is quite a complex problem.
Clearly the cost of evaluating the constraints is that of
running a set of queries at the end of every transaction
(time overhead) and of maintaining the auxiliary relations
(space overhead). However, the time/space overhead
depends directly on the transactions run against the
database. Both the length of the transactions and the
amount of changes to the database play a significant role
in the analysis. Moreover, a benchmarking method has to
choose a reference system the results are compared with.
It is not clear what the reference system should be; there
are several candidates: a database without any constraint
enforcement, constraint enforcement with respect to the
whole history of the database, etc. We plan to develop a
benchmarking method suitable for comparing the overhead
and performance of temporal constraint enforcement systems.
VIII. Conclusion
We have seen that the language of Past Temporal Logic is
suitable for specifying constraints over temporal databases
in a straightforward and declarative way. We have also
shown that this specification can be translated to active
DBMS rules which guarantee enforcement of the constraints
on the underlying database. There is no need for
an additional run-time constraint monitor. Moreover, during
the translation process all the formulas are checked for
the safe evaluation property, and formulas that do not meet
this requirement are rejected by the system. Also, several
optimization steps are performed to cut down the overhead
connected with the constraint checking.
Our approach expands the range of active database system
applications without requiring changes to the active
DBMS itself.
IX. Acknowledgments
We are very grateful to Jennifer Widom for encouraging
us to use Starburst and for supplying the system. Thanks
also go to Inderpal Singh Mumick for discussions and for
sending us his Ph.D. thesis. This research was partially
supported by NSF grant IRI-9110581.
An early version of some of the results in this paper
appeared in [19]. INGRES is a trademark of the Ingres
Corporation.
--R
Efficient Checking of Temporal Integrity Constraints Using Bounded History Encoding.
Safety and Translation of Relational Calculus Queries.
International Organization for Standardization (ISO).
Implementing Set-oriented Production Rules as an Extension to Starburst
Deriving Production Rules for Constraint Maintenance.
Temporal Logic.
Logic and Databases: a Deductive Approach.
Deriving Integrity Maintaining Triggers from Transition Graphs.
Efficiently Updating Materialized Views.
Maintaining views incrementally.
Temporal Triggers in Active Databases.
On Rules
The temporal logic of reactive and concurrent systems.
Query Optimization in Deductive and Relational Databases.
Magic sets and other strange ways to implement logic Programs.
Implementing Temporal Integrity Constraints Using an Active DBMS.
INGRES/SQL Reference Manual for Unix and VMS operating systems.
--TR
--CTR
Wes Cowley , Dimitris Plexousakis, Temporal Integrity Constraints with Indeterminacy, Proceedings of the 26th International Conference on Very Large Data Bases, p.441-450, September 10-14, 2000
Avigdor Gal , Opher Etzion, A Multiagent Update Process in a Database with Temporal Data Dependencies and Schema Versioning, IEEE Transactions on Knowledge and Data Engineering, v.10 n.1, p.21-37, January 1998
Leopoldo Bertossi , Marcelo Arenas , Cristian Ferretti, SCDBR: An Automated Reasoner for Specifications of Database Updates, Journal of Intelligent Information Systems, v.10 n.3, p.253-280, June 1, 1998
Jan Chomicki , David Toman , Michael H. Böhlen, Querying ATSQL databases with temporal logic, ACM Transactions on Database Systems (TODS), v.26 n.2, p.145-178, June 2001
Vittorio Brusoni , Luca Console , Paolo Terenziani , Barbara Pernici, Qualitative and Quantitative Temporal Constraints and Relational Databases: Theory, Architecture, and Applications, IEEE Transactions on Knowledge and Data Engineering, v.11 n.6, p.948-894, November 1999
Jan Chomicki, Efficient checking of temporal integrity constraints using bounded history encoding, ACM Transactions on Database Systems (TODS), v.20 n.2, p.149-186, June 1995
Can Türker , Michael Gertz, Semantic integrity support in SQL:1999 and commercial (object-)relational database management systems, The VLDB Journal The International Journal on Very Large Data Bases, v.10 n.4, p.241-269, December 2001 | constraint checking;integrity constraints;temporal databases;dynamic constraints
627693 | Critics for Knowledge-Based Design Systems. | Expert critics have been built to critique human performance in various areas such as engineering design, decision making, etc. We suggest that critics can also be useful in the building and use of knowledge-based design systems (KBDSs). Knowledge engineers elicit knowledge from domain experts and build a knowledge-based design system. The system generates designs. The amount of knowledge the system possesses and the way it applies the knowledge directly influence the performance of its designs. Therefore, critics are proposed to assist 1) acquiring sufficient knowledge for constructing a desirable system, and 2) applying proper knowledge to generating designs. Methodologies of equipping a KBDS with critics are developed. Our practice in building and using a KBDS shows the applicability and capability of these critics. | Introduction
Engineering design is a domain where a knowledge-based approach has often been adopted
due to the very large number of factors that must be considered and the difficulty in accurately
characterizing them [17]. A brute-force approach is out of the question for configuration
design tasks due to its computational complexity. Experts design by following
general rules, applying their knowledge, and using contextual sense and empirical insights.
A knowledge-based design approach attempts to mechanically generate designs with reference
to the behavior of experts. This work is about how to build a better knowledge-based
design system to perform configuration design where the choices in a design can be enumerated
exhaustively. Given a domain and problems to be solved, knowledge engineers first
This author is currently with Department of Information Systems & Computer Science, National University
of Singapore, 0511. [email protected]
acquire expert knowledge, codify the knowledge in a machine-usable form (rules, predicates,
etc.), and build a knowledge-based design system. Ideally, when the system is in operation,
it can apply proper knowledge to problems to produce "good" designs that
meet the requirements and criteria. Otherwise, more knowledge needs to be acquired in
order for the system to generate good designs. A cycle is formed by these four processes
(acquiring, codifying, building, and applying) that heavily depend upon domain experts
(how they choose and summarize the knowledge) and knowledge engineers (how they elicit,
codify and use the knowledge). In general, like all models, knowledge bases are selective,
based on assumptions, and prone to failure [2]. Biases, irrelevant or incomplete knowledge,
misconceptions, and wrong assumptions are likely to be introduced into the system through
these four processes and therefore affect the quality of the system's designs. Some effective
means must be employed to reduce the likelihood of these factors being incorporated in a
system.
A critic is a system in which a knowledge-based design system (KBDS) is a component.
A critic takes, as its inputs, a problem description and a proposed solution to the problem
and gives, as its output, a critique, i.e., what is wrong and where it is wrong. It detects
errors or possible improvements, and provides the user with directions for correction and
improvement. A set of critics is proposed for knowledge-based engineering design. It accounts
for aspects to be covered by critics, their functions, and their usage in building and
applying knowledge-based systems. The critics are aimed at assisting knowledge acquisition
and improving a system's performance.
In the following, the need for critics is elucidated for a knowledge-based approach; a
set of critics is then suggested; and each critic is described in turn by giving algorithms
that implement the critics. In Section 3, an example application is defined, and a KBDS,
called DS, is described. Section 4 illustrates how the set of critics is applied to improve
DS's performance. In Section 5, some related work is discussed to place this work in
the literature. We conclude with a summary of the proposed critics, and with a discussion of
future directions for research and development on this work.
2 Critics of KBDS
In this section, it is first shown, by identifying four problems, that critics are necessary in
building and using a KBDS. Second, a set of critics is proposed to attack these problems
by critiquing designs of a KBDS and by using domain knowledge. Then the critics are
presented with the algorithms that implement them.
2.1 The Necessity of Critics
The necessity of critics can best be seen in the four problems identified in KBDSs. The
first two problems pertain to eliciting and applying knowledge, and the last two concern
the designs a system produces.
(1) Acquiring adequate knowledge
Acquiring knowledge for a KBDS is a subjective process: knowledge engineers and
domain experts could well be biased by their viewpoints or their experiences. Too
often, useful knowledge is omitted - the adequacy problem. A critical question is how they
can know if the knowledge obtained is sufficient for a given domain. Perfect knowledge
is impossible to obtain, but sufficient knowledge is essential for a KBDS to succeed [6, 8].
In a moderately complex domain, it is difficult to find out what's missing, based on a
handful of test cases. Increasing the number of cases studied can be time-consuming and
is often impractical. Hence, a method is sought to alleviate the adequacy problem without
increasing the number of cases to study. Such a method is expected to provide clues and
directions to search for more knowledge and to reorganize knowledge.
(2) Choosing proper knowledge
As the knowledge accumulates in a particular design domain, we ultimately encounter
the problem of choosing the most suitable knowledge for a particular case - the application
problem [7]. Not knowing all cases beforehand makes this problem extremely difficult.
Heuristics employed by a KBDS may bring about global optimization. Carefully designed
heuristic search may generally produce the expected results, but it may also generate some
unexpected (or poor) results in some cases. Therefore, it is sometimes necessary to try some
other knowledge or a new combination of heuristics in order to ensure proper knowledge is
used in design.
(3) Correctness of designs
Correctness is a minimum requirement for engineering design in that a solution should
satisfy what a problem description specifies - the correctness problem. Any unsatisfied
requirement indicates mistakes in the structure of the system, and/or misconceptions in a
knowledge base. Whether a design is correct can be tested directly by checking if all the
requirements stated in the input (a problem description) are met by the output (a design).
(4) Consistency of designs
A problem description specifies particular requirements to be satisfied, whereas a domain
theory contains general knowledge about the domain. As a problem description can be
employed to check correctness of a design, a domain theory can be used to check consistency
of a design. That is, designs should be consistent according to a domain theory - the
consistency problem. Design consistency means two things: (1) consistency among the
components of a design; and (2) consistency between the components of a design and
the domain theory. Lack of consistency indicates logical errors in the system structure,
misconceptions, or possibly, wrong assumptions in a knowledge base.
2.2 Critics
The four problems described above are obstacles in building a successful KBDS, and must
be treated systematically. One way of doing so is to critique a design, because (1) designs
are in general a good indicator of how well a design system is built and knowledge is used;
and (2) errors found in designs usually reflect problems in the process of either acquiring,
organizing, or applying knowledge. In addition, domain theories and problem descriptions
can be used in critiquing. Therefore, the use of critics is suggested as follows:
i. these four problems are the minimum set that should be considered;
ii. there is information available for critiquing, such as problem descriptions,
domain theories and produced designs;
iii. the use of critics should be either independent (of a knowledge base) or
deductive (from a part of the knowledge base);
iv. formal and systematic use of critics should be pursued; and
v. a KBDS is a part of critics.
Four critics are proposed here to critique solutions generated by a KBDS. The aim
of these critics is to assist (1) acquiring sufficient knowledge for constructing a desirable
system, and (2) applying proper knowledge to generating good designs.
2.2.1 Expertise completion critic
An expertise completion critic is required to deal with the adequacy problem, and to help
a knowledge engineer reorganize and generalize the knowledge base of a KBDS. One way of
implementing the critic is to build a strong model of a domain-trained knowledge engineer
and to apply it to knowledge acquisition [15]. In general, we should try as strong a model
as possible, to pool all available experts together, and to combine their expertise. When
a domain is complex, building a strong model is a problem in itself. Even if a strong
model is obtainable, the adequacy problem still cannot be solved. When a KBDS produces
correct designs consistent with the domain theory, how could one know whether a better
design has been missed or not? A system cannot determine, by itself, its own knowledge
adequacy. A critic can help, after a strong model has been built, to make the model stronger.
When only a few known cases are available, the critic uses an independent system to check
the knowledge adequacy by ensuring that the KBDS produces designs at least as good as any
produced by the independent means. By independent we mean that the two systems are different
in their implementation mechanisms, in their search algorithms, and in their knowledge
used. For example, a neural network system can be a choice of independent system, but
another KBDS is not 1 . The critic discovers missing knowledge by applying both systems
to producing designs, and then studying their differences to find out why the designs are
different, and to locate what's missing. Of course obtaining an independent system can
be difficult. It must (1) adopt a different mechanism and a different search algorithm; (2)
apply different knowledge; (3) use the same representation for the input and output as the
KBDS; and (4) be feasible to implement and run. Often these requirements can be met by
using an algorithmic design system, which is capable of producing good designs, but too
computationally intensive for routine use in design.
Algorithm 1 implements our first critic. This critic is used during the system construction
phase, when computing time is relatively affordable and experts are available for detailed
cross-examination of a few known cases. Suppose we are building a knowledge base KB for
a design system, DS; IS is an independent design system that does not use the knowledge
in KB; d d and d i are designs generated by DS and IS respectively; and diff stores the
differences between d d
and d i
. This critiquing process continues until every individual case
in P (a set of known cases) is tested and diff is empty, or it can be terminated by a
knowledge engineer if a certain degree of satisfaction is reached.
For (each p ∈ P) do
  d d := DS(p); d i := IS(p);
  diff := Compare(d d , d i );
  If (diff <> NULL)
    ModifyKB(KB, diff);
Above, Compare(d d , d i ) gives the differences between d d and d i in terms of critical
decision points defined by the domain. In ModifyKB(KB, diff ), a knowledge engineer first
analyzes diff , figures out whether KB is capable of diminishing diff by rearranging the order
of rules or adding new knowledge, and modifies KB accordingly.
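Assuming, hypothetically, that DS, IS, Compare, and an interactive ModifyKB are available as callable components, the critic loop of Algorithm 1 could be driven by a small harness like the following Python sketch (all parameter names are ours):
# Sketch of the expertise-completion critic loop (Algorithm 1); the callables
# passed in (design systems, comparison, knowledge-base editor) are assumptions.
def expertise_completion_critic(cases, kb, ds, independent_system, compare, modify_kb):
    for p in cases:
        while True:
            d_d = ds(kb, p)                    # design from the knowledge-based system
            d_i = independent_system(p)        # design from the independent system
            diff = compare(d_d, d_i)           # differences at critical decision points
            if not diff:
                break                          # DS is at least as good on this case
            if not modify_kb(kb, diff):        # knowledge engineer updates KB, or gives up
                break
    return kb

# Example with trivial stubs, just to show the calling convention:
kb = expertise_completion_critic(
    cases=[1, 2], kb={}, ds=lambda kb, p: p, independent_system=lambda p: p,
    compare=lambda a, b: [] if a == b else ["difference"], modify_kb=lambda kb, d: False)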
2.2.2 Alternative solution critic
There may exist redundant and ad hoc knowledge for handling special situations. For a
given design problem, it is not always obvious which piece of knowledge is most suitable
Otherwise, why can't we combine the two KBDSs to build a more powerful KBDS in the first place?
to achieve an optimal solution. However, it would become more obvious when a design is
generated. Which parts of the design are not good or reasonable can be used as feedback
to try out a suitable body of knowledge. The feedback can be obtained automatically or
from a design expert, and guide a KBDS to generate alternative designs. By finding the
best design in terms of cost or expert judgment, the critic helps identify the most proper
knowledge. The critiquing process continues until no better solution is found or until design
experts are satisfied. Below two algorithms are given for an alternative solution critic: one
is automatic and the other allows experts to be involved in critiquing.
Suppose T is a threshold for the maximum number of designs; c_t is the cost of d_t; C is a set of various costs; d_t is the design at time t; and p and P are defined as above. Every design has a WorstParts stack, which is initially empty. DS generates a design d_t that is evaluated to have a cost c_t; what is placed in WorstParts depends on the application.
Algorithm 2:
For (each p ∈ P) do
    Repeat
        d_t := Apply(DS, p); c_t := Evaluate(d_t);
        Append(c_t, C); WorstParts := Report(DS, d_t);
        If (numOfDesign < T and c_t differs from the previous cost in C)
            SetInitial(WorstParts); numOfDesign := numOfDesign + 1;
        else
            exit with the best design found so far;
Here SetInitial(WorstParts) finds the decision points in d_t that are most likely responsible for the worst parts and resets the strategy (heuristic) there. At each decision point, a few competing heuristics are available; they are selected in a round-robin fashion. Evaluate(d_t) calculates the cost of d_t based on a cost function. Report(DS, d_t) gives a list of the worst parts in d_t; an algorithm showing how this can be implemented in telephone cable network design is given in Appendix A. Append(c_t, C) appends c_t into C. Unless numOfDesign reaches the threshold T or
DS does not produce designs with the same costs, the algorithm tries to reset some strategies
at the critical decision points (determined by applications) responsible for WorstParts. A
new design will be generated with different heuristics at the identified decision points. By
varying the heuristics various designs can be generated. Thus, the above algorithm carries
out automatic redesign.
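The following Python sketch illustrates the automatic redesign loop under assumptions of our own: generate, evaluate and report are hypothetical callables for Apply(DS, p), Evaluate and Report, the design generator is parameterised by a strategy per decision point, and the competing heuristics are cycled through in a round-robin fashion as described above.

    # A sketch of the automatic redesign loop of Algorithm 2 (illustrative only).
    from itertools import cycle

    STRATEGIES = ("closest", "furthest", "random")     # competing heuristics per decision point

    def auto_redesign(p, generate, evaluate, report, T=10):
        strategy_at = {}                               # decision point -> currently assigned heuristic
        cyclers = {}                                   # decision point -> round-robin iterator
        costs, best_design, best_cost = [], None, float("inf")
        for _ in range(T):                             # numOfDesign is bounded by the threshold T
            design = generate(p, strategy_at)          # Apply(DS, p) with the current strategies
            cost = evaluate(design)                    # Evaluate(d_t)
            costs.append(cost)                         # Append(c_t, C)
            if cost < best_cost:
                best_design, best_cost = design, cost
            if len(costs) > 1 and costs[-1] == costs[-2]:
                break                                  # DS keeps producing designs with the same cost
            for point in report(design):               # WorstParts: reset the strategy there
                cyclers.setdefault(point, cycle(STRATEGIES))
                strategy_at[point] = next(cyclers[point])
        return best_design, best_cost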
Obviously, the performance of this critic depends upon the utility of the cost function as a metric for design quality. If an accurate metric is difficult to obtain, the degree to which designs can be improved is limited. As an alternative, the following algorithm involves experts in determining which parts of a design can be improved. Suppose D is a set of designs, and suggest is the input given by human experts after examining a design d_t. Suggestions are based solely on the expert's judgment of a graphically displayed design. SetInitial(suggest) resets the heuristics at the suggested decision points. ExpertExamine(d_t) presents d_t, together with all d in D if necessary, graphically to design experts and returns their suggestions for possible improvement to d_t.
Algorithm 3:
For (each p ∈ P) do
    Repeat
        d_t := Apply(DS, p); Append(d_t, D);
        suggest := ExpertExamine(d_t);
        If (suggest ≠ NULL)
            SetInitial(suggest);
        else
            exit;
Algorithms 2 and 3 implement the alternative solution critic, which can be used either automatically or manually. Their use in DS is shown in Figure ??. A user can choose either one, automatic or manual. When the latter is chosen, the user is asked to input the critical decision points s/he thinks are responsible for an unsatisfactory design, based on a graphical display.
2.2.3 Correctness and consistency checking critics
These two critics are relatively close to each other since they rely on the contextual information
of design such as domain theories and problem descriptions. Correctness of a design
is defined as satisfaction of the design requirements specified in a problem description. Consistency
is defined as correct interrelationships (no conflicts) between the design and the
domain theory, and between different parts of a design. Two critics are used during both
system construction and design generation. Only when no incorrectness or inconsistency is
found can the two critics stop monitoring. While correctness and consistency of a design
must be checked to ensure that it is a valid solution to the design problem, the role of these
critics in the phase of system construction is mainly to identify the sources of problems in
the knowledge base and help a knowledge engineer correct them.
(Insert Figure 1 here)
The algorithm for correctness and consistency checking is as follows. Suppose we are constructing a system DS, P is a set of design problems, T a domain theory, d a design, Sys a version of DS, and S_incorrect and S_inconsistent sets of reported errors. The algorithm terminates when S_incorrect and S_inconsistent are empty, i.e., Sys produces designs for P without incorrectness or inconsistency. The domain theory and the input information are used. A nonempty S_incorrect or S_inconsistent starts a refinement of DS. In Create(T) of Algorithm 4, knowledge engineers generate an initial version of DS using all knowledge available, such as domain experts, specifications, manuals, etc., in the form of production rules and procedures. In Apply(Sys, p), Sys is employed to construct a design for p. In Refine(Sys, d, T, S_incorrect, S_inconsistent), knowledge engineers modify Sys based on those variables and produce a new version of DS. CorrectnessCheck(d, p) examines d against p and reports any errors to S_incorrect. ConsistencyCheck(d, T) examines d against T and reports any errors to S_inconsistent. SystemDesign(Sys, p) is defined recursively to refine DS until a version of DS produces a design for p that has no incorrect or inconsistent parts.
Algorithm 4:
Sys := Create(T);
For (each p ∈ P) do
    SystemDesign(Sys, p);

SystemDesign(Sys, p)
    d := Apply(Sys, p);
    S_incorrect := CorrectnessCheck(d, p); S_inconsistent := ConsistencyCheck(d, T);
    If (S_incorrect ≠ NULL or S_inconsistent ≠ NULL)
        Refine(Sys, d, T, S_incorrect, S_inconsistent);
        SystemDesign(Sys, p);
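For illustration, the refinement cycle of Algorithm 4 can be sketched in Python as below; the callables are hypothetical stand-ins for Create, Apply, CorrectnessCheck, ConsistencyCheck and Refine, and the recursion of SystemDesign is written here as a loop.

    # A sketch of the correctness and consistency checking critics (Algorithm 4).
    def system_design(sys_version, p, apply_sys, check_correct, check_consistent, refine):
        """Refine the current version of DS until its design for p raises no complaints."""
        while True:
            d = apply_sys(sys_version, p)                      # build a design for problem p
            s_incorrect = check_correct(d, p)                  # errors against the problem description
            s_inconsistent = check_consistent(d)               # conflicts against the domain theory
            if not s_incorrect and not s_inconsistent:
                return sys_version, d                          # this version of DS is accepted for p
            sys_version = refine(sys_version, d, s_incorrect, s_inconsistent)

    def construct_system(problems, create, **checkers):
        sys_version = create()                                 # Create(T): the initial version of DS
        for p in problems:
            sys_version, _ = system_design(sys_version, p, **checkers)
        return sys_version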
Re-examining the above four algorithms, we notice that a KBDS is always a component
of a critic (e.g., either Apply(DS, p) or Apply(Sys, p) is called), but not vice versa. All the
four try to improve a KBDS by examining designs the KBDS produces. Moreover, Modify,
Refine and ExpertExamine require the involvement of experts and/or knowledge engineers.
Machine learning could be applied to knowledge base modification [3], but it is outside the
scope of this paper.
In the following, we will show the use of the critics in the construction and application
of a practical KBDS called DS. Critics are used to detect errors in a design, provide directions
to enrich a knowledge base, improve designs, and reduce subjectivity in knowledge
acquisition. Improved DS can result in better designs.
3 An Example Application of Critics
We will now describe an engineering design application to show how critics are built and
used. The application is that of telecommunications network design. As a commonplace
design task, this has attracted the development of KBDSs for automatic network design.
3.1 An application: telecommunications network design
The specific area tackled by DS is that of telephone distribution network design. This is
that part of the telephone network between telephone exchanges (switches) and customers'
premises. For simplicity, the design of this type of network is divided into distribution areas,
each served by a large cable from the exchange. The distribution network consists of cables
and joints, with cables running in underground pipes, and joints in underground pits. The
problem is partly one of connectivity to all customers, but also one of efficiency in that
larger cables are used to minimize costs, so the topological distribution problem becomes
one of non-linear, discrete optimization where longer cable routes are traded-off against the
costs of various cable sizes. Network design must also consider maintenance costs, which
leads to minimal use of joints.
Input (problem description): (1) The topological and cadastral information of a district
that consists of streets, intersections, and houses; and (2) service requests for each
house (i.e., location for provision of telephone network access for a residence and the
amount of the provision.)
A sample problem: there are 118 houses in a district as shown in Figure ??(a) that
consists of eight intersections and eleven street segments (polygons), etc. (provided
in a cadastre file with the details such as how they are located and related, their
coordinates and so on), and each house requires one telephone connection.
Output: A tree structure T(j; c) where j are joints that connect different sized cables,
and c cables that are telephone lines in groups. Joints are nodes, and cables are links
in T(j; c). The root of a tree is a special joint, the pillar, as a major distribution point
that is linked to a larger network. Cables connect the joints to the pillar.
Criteria: to achieve the minimum number of joints, minimum number of street crossings,
and minimum use of cables in terms of length and size, and minimum cost of T(j; c)
when the first two minima are obtained.
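To fix ideas, the output structure T(j, c) described above can be pictured as in the following Python sketch, in which joints are tree nodes rooted at the pillar and each non-root joint carries the cable that links it to its parent; the class and field names are illustrative and are not taken from DS.

    # An illustrative representation of the output tree T(j, c).
    from dataclasses import dataclass, field
    from typing import List, Optional

    CABLE_SIZES = (10, 20, 30, 50, 70, 100)            # the available discrete cable sizes (Fact 3)

    @dataclass
    class Cable:
        size: int                                      # one of CABLE_SIZES
        length: float                                  # length along the street polygons

    @dataclass
    class Joint:
        joint_id: str
        served_requests: int = 0                       # service requests handled at this joint
        children: List["Joint"] = field(default_factory=list)
        incoming: Optional[Cable] = None               # cable to the parent joint (None at the pillar)

    def total_requests(root: "Joint") -> int:
        """Total service requests served by the tree rooted at the pillar."""
        return root.served_requests + sum(total_requests(c) for c in root.children)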
3.2 Domain theory
A domain theory consists of assumptions, facts and inference rules in the given domain.
An assumption is something that is hard to prove but supposedly true. Facts are what are
actually done in practice. Inference rules are more abstract facts and can be used to derive
new facts. Given below is a sample theory for the chosen domain:
Assumptions:
1. In general, service requests are evenly distributed in a district.
2. All service requests are located along the streets.
3. Every service request should be served by just one cable.
4. The center of a district is where a pillar is located.
Facts:
1. Telephone cables should run only along the streets; they should not go through the central areas of street intersections or through any other properties such as houses and parks.
2. There should be no loops in the network (the definition of T(j; c)).
3. Telephone cables are available in discrete sizes: 10, 20, 30, 50, 70, and 100.
4. A joint can split a cable into cables with smaller sizes.
5. A joint's minimum capability is 10.
6. Every service request should be included in the network, T(j; c).
7. Cables are inside pipes.
Inference Rules:
1. If a pipe is allocated for a cable, the pipe shares some identical information with
the cable, such as length, position and joints they connect with.
2. The number of joints is proportional to and less than the number of service
requests because of Facts 3 and 5.
3. The cable sizes taper from the root to the leaves of T(j; c).
Heuristics used by experts are summarized in Table 1. Implementation cost is the major
measure of optimality.
3.3 A KBDS for network design
DS is a KBDS for telecommunications network design. It is rule-based, and it models the design problem as a search tree with solutions as the leaf nodes. The design process, then, is a matter of exploring the search tree for the optimum solution to each design problem.
As the search space is large, both breadth- and depth-wise, we use knowledge-based design techniques to narrow the search space. While this results in a system that can quickly design networks, we are reliant on the quality of the knowledge used to build the system, and exposed to the possibility that the designs are only locally optimal.
Aspect          Content
costs           minimize all material costs
reliability     minimize number of joints
cable layout    run from boundaries to the center
cable routes    shortest possible
parts usage     maximize to their capacities
future          develop 25% extra in capacity
others          how to choose an intersection/polygon;
                how to position a joint;
                how to run cable crossings; and
                how to connect pits, etc.

Table 1: The aspects and contents of heuristics in telephone cable network design.
Algorithm 5:
GenerateDesign
    For (all inter) CalculateOD(inter);
    DesignJoint;
    AssignCable(root);

DesignJoint
    S := LeastOD(all inter);
    If (S ≠ NULL)
        inter := FurthestInter(S); polygon := ChoosePoly(inter);
        AssignJoint(polygon); UpdateOD(polygon);
        DesignJoint;
DS's algorithm is described in Algorithm 5. OD stands for the outgoing degree of an intersection (a one-way intersection has OD=1, a two-way intersection OD=2, and an n-way intersection OD=n); inter denotes an intersection and S a set of intersections. Without loss of generality, the Euclidean distance is used; one of the alternatives is the polygon distance, i.e., the number of polygons between two intersections, where a polygon is the part of a street between two closest intersections on that street. The following describes the automated design process. It calculates ODs for every intersection.
Next it calls DesignJoint, which implements a general heuristic employed by human experts: assigning joints from the boundary of a district towards the pillar. DesignJoint finds the intersections, S, which have the least ODs at the time. If S is not empty, it chooses an intersection, inter, which is the furthest away from the pillar in S. Then it selects a polygon, according to some criterion (see below), among all unserved polygons connected to inter. After AssignJoint(polygon) and UpdateOD(polygon), DesignJoint continues until all
intersections' OD is zero. An illustrative example is shown in Figure ??.
(Insert Figure 2 here)
CalculateOD(inter) computes OD for all inter in the district. LeastOD(all inter) finds
intersections that have the least OD's among all inter. FurthestInter(S) chooses an intersection
among S, which is the furthest away from the pillar. ChoosePoly(inter) selects
one polygon, which starts at inter and has not been assigned joints, based on some criterion
(a polygon is either furthest away, or closest to the pillar, or just picked at random).
AssignJoint(polygon) allocates joints according to the services required on polygon. If the previous polygon has joints with spare capacity, these joints should be considered for use first. If the previous polygon has service requests that have not been served, these requests are added to those of polygon. Some balancing is usually needed if polygon has multiple previous polygons. UpdateOD(polygon) decreases by one the OD of each of the two intersections that enclose polygon. AssignCable(root) allocates cables
for joints and determines the cable sizes as well.
This algorithm implements a design strategy taken commonly by experts, i.e., laying
cables and assigning joints from the boundary of a district toward the pillar. Since a cable
tapers from the pillar and a joint at an intersection on a polygon may serve requests on
other polygons around the intersection, the purpose of this strategy is twofold: achieving
the minimum number of joints and lowest cost of cables.
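A Python sketch of this boundary-to-pillar strategy is given below. The representation of the district (a dict mapping each intersection to its OD and its distance from the pillar), the helper callables choose_polygon and assign_joint, and the assumption that a polygon object knows its two enclosing intersections are illustrative choices rather than the actual data structures of DS.

    # A sketch of the greedy boundary-to-pillar loop of Algorithm 5.
    def design_joints(district, choose_polygon, assign_joint):
        """district: {intersection: {"od": int, "dist_to_pillar": float}}"""
        od = {name: info["od"] for name, info in district.items()}      # CalculateOD
        while any(d > 0 for d in od.values()):
            pending = {n: d for n, d in od.items() if d > 0}
            least = min(pending.values())                               # LeastOD
            candidates = [n for n, d in pending.items() if d == least]
            inter = max(candidates,                                     # FurthestInter
                        key=lambda n: district[n]["dist_to_pillar"])
            polygon = choose_polygon(inter)                             # ChoosePoly: an unserved polygon at inter
            assign_joint(polygon)                                       # AssignJoint
            for endpoint in polygon.intersections:                      # UpdateOD: the two enclosing intersections
                od[endpoint] -= 1
        # AssignCable(root) would then size and allocate the cables from the pillar outwards.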
4 The Use of Critics for DS
A basic requirement for a KBDS, and thus for DS, is that it produce designs that not only
satisfy the specifications, in accordance with the domain theory, but also are comparable to
the designs produced by human experts. Other requirements are: efficiency (in cost and time); convenience (allowing users to easily try different designs); and flexibility (allowing users the choice of involving themselves by permitting manual instruction for changes to the designs produced by a KBDS). DS aims to satisfy these requirements. In building DS, the
correctness and consistency critics are designed to check each version of DS in order to
produce correct and consistent designs. The expertise completion critic is used to obtain
sufficient knowledge - a critical part for DS's success. The alternative solution critic offers
the opportunities to have various designs with or without expert's intervention.
4.1 Using correctness and consistency checking critics
These critics can be implemented by giving the functions in Algorithm 4 concrete meanings. In system design, two critics check the designs produced by DS: (1) CorrectnessCheck(d, p) uses the information contained in the problem description (input) to check the counterparts included in the design (output). For the sample problem described in example 1, there are 118 houses that are equivalent to 118 requests. One task CorrectnessCheck performs is to sum up all the requests that the design d (i.e., T(j, c)) serves. If the sum is either greater or less than 118, an error message is added into S_incorrect. Another check is that the cable network should have no loops, i.e., starting from a joint and going down its child links, the joint will not be visited again. (2) ConsistencyCheck(d, T) works less straightforwardly. The domain theory T is used to check whether there is any conflict among the design components, based on the inference rules in T, and/or any inconsistency between d and the facts in T. For example, the number of joints should be close or equal to ⌈118/10⌉, using inference rule 2 based on fact 3. Other consistency checking includes: relating the number of cables to the number of joints and the number of pits to the number of households; checking that all pits are connected by pipes; and checking that the pipe routes are overlaid on the cable network, etc. A complete check should examine all components in d against the facts and rules in T. S_inconsistent contains the complaints found in the examination. (For
the meanings of symbols and functions, refer to Algorithm 4.) Please note that in design
generation, function Refine() in Algorithm 4 is replaced by ReportError() since a user of a
system normally has no right to modify a system or its knowledge base. Algorithm 4 can
be viewed as a simple version of constraint-based reasoning in which the goal is to discover
some problem state that satisfies a given set of constraints.
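The two checks can be sketched in Python roughly as follows, with the design given as a mapping from each joint to its child joints and a mapping from each joint to the requests it serves; the representation and the numeric tolerance used in the consistency check are illustrative choices, not the ones used in DS.

    # A sketch of CorrectnessCheck and ConsistencyCheck for the sample problem.
    import math

    def correctness_check(children, served, total_requests=118):
        """children: {joint: [child joints]}, served: {joint: requests served there}."""
        errors = []
        if sum(served.values()) != total_requests:          # every request must be served
            errors.append("sum of served requests differs from the problem description")
        nodes = set(served)                                  # assumes every joint appears in served
        links = sum(len(cs) for cs in children.values())
        if links != len(nodes) - 1:                          # a connected, loop-free network is a tree
            errors.append("the cable network contains a loop or a disconnected joint")
        return errors

    def consistency_check(served, min_joint_capacity=10, total_requests=118):
        complaints = []
        expected = math.ceil(total_requests / min_joint_capacity)   # inference rule 2 and fact 5
        if abs(len(served) - expected) > expected:                  # illustrative tolerance only
            complaints.append("number of joints is far from ceil(118/10)")
        return complaints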
4.2 Using expertise completion critic
Only a limited number of test cases can be studied during knowledge acquisition and system
construction, due to the time constraint and number of cases available. Algorithm 1 tries to
make full use of these test cases in knowledge acquisition, and to improve DS's generality, by which we mean that the system can perform at a comparable level for unseen cases as it does for the seen ones. In principle, the more problems studied, the more general the system could become.
As is suggested in Algorithm 1, IS should be independent of DS. Our choice of IS is Simulated Annealing [5], which relies on an energy (cost) function instead of the heuristics used in Algorithm 5 that are shown in Table 1 of Section 3.2. This independent design system is called SA. Algorithm 1 is implemented by calling Apply(DS, p) and Apply(SA, p), with IS replaced by SA. Both DS and SA have the same basic representation for
streets, polygons, intersections, joints, cables, etc. It is shown in Figure ?? how these two systems are used in critiquing. Two sets of designs, {design1} and {design2}, are generated. Any difference between the two sets can be a good clue for knowledge base modification, while which design is better is not the issue of concern.
(Insert Figure 3 here)
Human expertise is used in SA to determine an appropriate cost function in equation
(1). SA searches for an optimal design in terms of the cost function that is about the
total cost of a design. There may be many local minima of the cost function. SA tries to
avoid getting stuck at a local minimum by increasing the "temperature" so that a globally
optimum solution may be found. The main features of SA are:
1. The uphill moves are controlled by a parameter T, the "temperature".
2. A higher T means a higher probability of an uphill move.
3. Initially, a sufficiently high T is given to allow many uphill moves.
4. As time passes, T decreases and eventually arrives at a frozen point. After that, only downhill moves are allowed until the global minimum is reached.
5. The state space consists of all possible legitimate designs.
6. A scheme s_j is a neighbor of a scheme s_i if s_j can be obtained from s_i by a single change: a change of either an intersection or a polygon.
7. Cost (or criterion) function
        C = αJ + βL + γX                                                (1)
where J is the number of joints, L the total length of cables (itself a weighted sum of the lengths of the different cable sizes), X the total length of crossing cables from one side of a street to the other side, and N the number of services. The constants α, β, and γ are selected by the designer to reflect the importance of each cost parameter. The optimal design should, of course, have a minimal cost factor C. The choice of such a C is due to its simplicity and to the fact that it reflects the nature of the underlying application [9]. It is noted that a linear cost factor is acceptable due to the presence of rules that have already selected between various possible designs, such as the minimum number of joints, etc.
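For illustration, a linear cost of this kind and a standard Metropolis-style search step could be written in Python as below; the weight values, the attribute names of a design object and the cooling schedule are placeholders and are not the ones used in SA.

    # A sketch of the cost function and of the annealing search over designs.
    import math, random

    ALPHA, BETA, GAMMA = 1.0, 0.1, 0.5        # illustrative weights for joints, cable length, crossings

    def cost(design):
        """design is assumed to expose num_joints, cable_length and crossing_length."""
        return ALPHA * design.num_joints + BETA * design.cable_length + GAMMA * design.crossing_length

    def anneal(initial, neighbour, t0=100.0, cooling=0.95, steps=1000):
        current = best = initial
        t = t0
        for _ in range(steps):
            candidate = neighbour(current)                 # change one intersection or one polygon
            delta = cost(candidate) - cost(current)
            if delta < 0 or random.random() < math.exp(-delta / t):
                current = candidate                        # accept downhill moves, uphill with prob. exp(-delta/T)
            if cost(current) < cost(best):
                best = current
            t *= cooling                                   # the temperature decreases over time
        return best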
DS is rule-based, and SA is neural-network-based. The knowledge used in DS is summarized in Algorithm 5: DS is basically a greedy algorithm that looks one step ahead by choosing a proper intersection and polygon locally. The knowledge used in SA is expressed in minimizing equation (1): SA is, in essence, searching for a global optimum of C. DS and SA do share the representation for their inputs and outputs, so that their designs can be compared.
The two designs are generated by using DS (Algorithm 5) and SA, depicted in Figure
??(a) and (b) respectively. Recall that in Algorithm 5, when function ChoosePoly(inter)
is called, a criterion is needed in order to make a selection. A heuristic derived from the
general rule of laying cables from a boundary to the pillar is, naturally, choosing a polygon
among those available that is furthest away from the pillar. The effect of this heuristic
is that a cable at the top in Figure ??(a) goes around and along the arc towards the pillar
instead of going directly to the pillar at the first possible turning intersection. If a
more complex district is presented, more such (similar and subtle) problems could occur.
Manually finding these problems is both difficult and impractical.
The alternative approach, SA, is used to look for different configurations of the network.
SA generates the one shown in Figure ??(b). The discrepancy between the two networks provides clues for modifying the heuristic. The Comparison module in Figure ?? checks each route in design1 against all routes in design2 until all routes in design1 are checked. By comparing designs (a) and (b) in Figure ??, we notice that the heuristic does not give an intuitively good choice at the intersections from which the pillar is one polygon away, although it works at
the other intersections. As a result, when such intersections are reached, another heuristic
must be used to choose the polygon that connects the intersection to the pillar. This example
shows the limitation of the heuristics used, as well as the value of an independent
approach in improving a knowledge base.
(Insert Figure 4 here)
4.3 Using alternative solution critics
DS takes a very small amount of time to produce a design (order of seconds), compared to
the time used by experts (order of days). Although quick design is a desirable feature, so
is the quality of designs. Designs generated by DS should be comparable to the designs by
human experts. One way to measure the quality is to compare the design costs; another
way is for designs made by DS to pass examination by experts. In either case, a correct
and consistent design is a minimum requirement. We shall see if a design can be improved
based on either a cost evaluation or an expert opinion.
The generation of alternative designs is a process of guided redesign based on previous
designs. Two types of redesign implemented in DS are automatic and interactive redesign.
They are two forms of the alternative solution critic. The former relies on a cost function,
and the latter depends on expert judgment.
As was mentioned in Algorithm 5, a cable route can be altered at an intersection.
Hence the responsible decision points are street intersections here. Choosing which polygon
to lay cables at an intersection is based on the strategy assigned. SetInitial(WorstParts)
in Algorithm 2 finds the responsible intersections in d t , and resets the strategy there. There
are three strategies for choosing a polygon to continue to lay cables at an intersection:
furthest, closest in terms of a polygon position away from the pillar, or random. They are
chosen in a round robin fashion. The default strategy is selecting the closest polygon. The
WorstParts stack is filled with some joints which connect cables (an algorithm for finding the worst parts of a design based on a cost function is given in Appendix A). These cable routes are relatively costly, compared to the other parts of the cable network. Unless numOfDesign
is greater than a threshold T or DS does not produce designs with the same costs, the
algorithm tries to reset some strategies at the intersections related to joints in WorstParts.
A new design will be generated with different strategies at the specified intersections.
Two cases are used to illustrate how Algorithms 2 and 3 work. Along with the graphic
display for each case, a state (design) transition graph is also used to show the repetitive
process of alternative design generation. In Algorithm 2 a critical point is set automatically
(in short, automatic setting), but in Algorithm 3 it is set manually by experts (in short,
manual setting).
(Insert Figures 5 and 6 here)
Case I: The previous example revisited
Take example 2 again. DS generates the design in Figure ??(a). Automatic redesign
does not create any new design in this case. However, if we manually reset at the top middle
intersection (grey one in the figure), DS creates another design (b). Although design (b)
has a higher cost, in this case DS cannot automatically reduce the cost to that of design
(a). That is, as shown in the transition graph, automatic redesign does not bring about
any new design. In most cases tested, automatic redesign did produce best designs; in some
cases, manual resetting was necessary for DS to change designs from one to another. We
have shown by this example that the sole reliance on the automatic setting for redesign may
not necessarily produce a better design; a better design may be missed.
Case II: A non-standard pillar position
In this case, DS generates the design in Figure ??(a). When it is in the automatic redesign mode, it produces design (b), next design (c), and then goes back to (b), forming a loop. Now,
if we manually set the bottom middle intersection (grey one in the figure (d)) after design
(a), a new design (d) is constructed. From there, a series of automatic redesigns bring (d)
back to (b). Judged by their costs, designs (b) and (c) are the best. DS oscillates between
generation of (b) and (c). This oscillation could be settled at (b), since design (b) costs
the least. This case is more complex than case I, for the pillar is located at the boundary
instead of the center. There are more variations, as seen in the transition map.
4.4 Remarks
For the telecommunications network design project described here, roughly two thirds of
the time was spent on the traditional development of DS, one third on the development
of critics, critiquing and revision. Implementing SA did not take much time (half a day)
since the algorithm is simple and easy to implement. Obtaining a proper cost function C, however, took a longer time than the SA implementation itself, since obtaining C involved consultation with domain experts and experimentation. By implementing these critics, the completed project has proven to go beyond what was originally expected, and to be very useful in assisting expert designers with their designs.
Nevertheless, a few more points are worth mentioning regarding the use of these critics.
First, it is not always necessary to implement two independent systems. Since building an
independent system may be costly, we suggest the use of this approach when the following
occur: (a) There are only a few known test cases; (b) Some heuristics cannot be proven
generally useful; (c) There exists a simple cost function; and (d) It is too complex and
costly for experts to consider all choices of a design. Second, without the time constraint, these two approaches (knowledge intensive versus computationally intensive) complement each other. However, a computationally intensive approach such as simulated annealing cannot satisfy the time constraint of a practical design application, whereas a KBDS can give a "satisficing" solution within a reasonable time. Third, design experts, in trial use of
automatic and manual redesigns, found these redesigns very convenient and useful for them
to generate various designs to compare. Two guidelines on using these redesigns are: (a)
When it is suspected that the evaluation function is out of tune, it is strongly suggested
to implement the Interactive Redesign critic so that a domain expert can be involved in
evaluation. (b) If a greedy algorithm is used in generating a design and there are reasons to believe that some choices which are not locally optimal may lead to the global optimum, then it is worth implementing the AutoReDesign critic. These two critics are not suitable for tasks such as planning and real-time control. Last, a KBDS is only a component of each critic. The input
and output of a KBDS are used in critiquing. The knowledge employed by the KBDS is
not used in the system evaluation.
5 Related Work
Although this work stems from a practical need, it has been influenced and nurtured by other work in the community. Among this work are expert critics: an expert critic is a computer program that critiques human-generated solutions [10]. A survey of expert critics by Silverman [14] provides an up-to-date account of work in the field. These critics have found many applications, such
as decision making, engineering design, word processing, knowledge base acquisition, and
software engineering. Expert critics scrutinize the solutions (e.g., designs) produced by
humans in terms of clarity, coherence, correspondence, and workability tests. As defined
in [14], clarity means that all statements should be unambiguous; coherence deals with abstract
truth, or the logical structure of statements; correspondence concerns the agreement
of statements with reality; and workability means pragmatically verifying and validating a
body of knowledge. Expert critics were built to improve the performance of users [16].
Critics suggested by Fischer et al [4] are an important component of cooperative problem
solving systems, especially when they are embedded in integrated design environments. A
critic, by their definition, is a system that presents a reasoned opinion about a product or
action generated by a human. These critics detect inferior designs, provide explanations
and argumentation for their "opinion" and suggest alternative solutions.
The critics proposed in this paper share many similarities with the above two groups
of critics in their functions, objects they deal with, and objectives they want to achieve.
But they have their own distinct features: (1) they are designed to critique the solutions
generated by KBDSs, instead of by human designers; (2) they are used to assist knowledge engineers in improving the KBDS and to make the KBDS generate alternative designs, not to help human designers produce alternative solutions; (3) two stages of using these critics are clearly specified: the first is designing the system, the second is generating designs with the system. Not all the critics are used in both stages. The expertise completion critic is active in the
first stage, which helps clarify knowledge and enrich the knowledge base. Correctness and
consistency checking critics are functioning in both stages. The alternative solution critic
is only working in the second stage. The last three are used in a similar manner as are
Fischer et al's critics, that is, being embedded in integrated design environments.
Intelligent tutoring systems [18] also work on solutions and problems together, but they
are for people who learn a new trade. These systems are designed and developed based on a
relatively good understanding of domains and mastered skills. For example, the computer
tutors of Anderson et al [1] were based on a set of pedagogical principles derived from ACT
theory of cognition. In other words, the correct solutions are always known beforehand. In
design, such knowledge is not available. The proposed critics cannot tell if the best design
exists but only if a better one is accomplished. These critics also differ from intelligent
tutoring systems in the way they generate critiques. The critics try to locate the differences
between designs, and to find incorrectness or inconsistency of designs. Tutoring systems
always begin with deviations of a solution from the standard one.
Other relevant fields are the refinement and verification of knowledge-based systems.
Refining [13] is the process of fine tuning rules that discriminate between alternatives and
help assure the validity of the resulting system within the model. When the knowledge is
incomplete, however, refinement is not suitable. Critiquing shares some similarities with verification [11]. Both
attack the problems of knowledge redundancy and incompleteness. Knowledge verification,
however, focuses on the demonstration of logical correctness of the rules, wherein checks
are performed for superfluous, incorrect, or missing rules, which would eventually impair
system performance.
6 Conclusion and Future Work
Knowledge-based approaches are error-prone due to the subjective nature of knowledge
acquisition. Studying the processes from knowledge acquisition to design generation, we
identify four main problems, i.e., the adequacy, correctness, consistency, and application
problems. A set of critics has been proposed: expertise completion, correctness and consistency
checking, and alternative solution critics. They are called "critics" because all of
them accomplish their tasks by critiquing designs produced by a KBDS. Algorithms that
implement the critics have been given, and these critics have been used in a practical design
system in two phases (designing a system and generating a design using the system).
The expertise completion critic helps a knowledge engineer reorganize and generalize
the knowledge base of a KBDS. In order to obtain its critique, the critic uses the KBDS and a second, independent system, which might be much slower but less ad hoc than the KBDS, to run test cases. By comparing the designs of the two systems, problems can
be detected in the knowledge base of the KBDS. The alternative solution critic computes
alternatives to system generated designs based on information about sub-optimal parts of
the original design. The heuristics which caused the sub-optimal parts are replaced by
other heuristics for generating alternative designs. The sub-optimal parts of the design are
identified either by evaluating the design parts using a cost function, or by displaying the
current design interactively to an expert who can then point at parts of the design which
he regards as sub-optimal. The correctness checking critic checks whether the designs
generated by the system satisfy the original design specification. Detected errors are passed
to the knowledge engineers who can modify the knowledge base to remedy the errors. The
consistency checking critic examines whether the designs generated by the system satisfy a
given set of consistency rules. Any inconsistency will be notified to the knowledge engineers
to modify the knowledge base. These critics are a necessary set for a working knowledge-based
engineering design system. The systematic application of these critics has shown, in
telephone network design, promising results in knowledge acquisition, heuristics selection,
and design quality control.
This work shows our effort in search of methodologies that consistently and systematically
guide knowledge engineers to perform their tasks objectively and thoroughly, in
order to mitigate the bottleneck problem of knowledge acquisition and the subjectivity of
a knowledge-based engineering design approach. The usefulness of the critics has been shown in this limited application domain. However, more work is needed to extend this critiquing approach to other domains, and to expand the set of critics. Future work will also be
on how to find suitable independent models to test against the knowledge-based model, and
how the critiquing approach can be combined with other approaches such as verification,
refining, intelligent tutoring, etc. to have a systematic, usable tool for knowledge-based systems
design. Another line of research will be on the objectivity in knowledge acquisition.
Since the subjectivity in knowledge acquisition is difficult to avoid, we have started to investigate, for some domains, the use of unsupervised learning algorithms to induce production rules (decision tree induction has been shown to be effective in a supervised environment), and the acquisition of knowledge at the raw data level.
Acknowledgments
The suggestions made by anonymous reviewers on an early version of this paper are very
helpful and highly appreciated. The permission of the Director of Research, Telecom Research Laboratories Australia, to publish this paper is hereby acknowledged.
References
Intelligent tutoring systems.
Viewing knowledge bases as qualitative models.
Dendral and meta-dendral: roots of knowledge systems and expert system application
Critics: an emerging approach to knowledge-based human-computer interaction
Optimization by simulated annealing.
On the thresholds of knowledge.
Optimizing knowledge based system design.
Introduction to Linear and Nonlinear Programming.
Critiquing a physician's management plan.
Issues in the verification of rule-based systems
Automating the design of telecommunication distribution networks.
Refining rule bases for classification knowledge-based systems
Survey of expert critiquing systems: Practical and theoretical frontiers.
Critiquing human judgment using knowledge-acquisition systems
Expert critics in engineering design: Lessons learned and research needs.
Artificial Intelligence and Tutoring Systems.
A Graph-Based Data Model and its Ramifications

Abstract: Currently, database researchers are investigating new data models in order to remedy the deficiencies of the flat relational model when applied to nonbusiness applications. Herein we concentrate on a recent graph-based data model called the hypernode model. The single underlying data structure of this model is the hypernode, which is a digraph with a unique defining label. We present in detail the three components of the model, namely its data structure, the hypernode, its query and update language, called HNQL, and its provision for enforcing integrity constraints. We first demonstrate that the said data model is a natural candidate for formalising hypertext. We then compare it with other graph-based data models and with set-based data models. We also investigate the expressive power of HNQL. Finally, using the hypernode model as a paradigm for graph-based data modelling, we show how to bridge the gap between graph-based and set-based data models, and at what computational cost this can be done.

I. INTRODUCTION
Relational DataBase Management Systems (DBMSs) are currently dominating the commercial
database market-place. The flat relational model (commonly known as the relational model) has
been advocated by Codd since the early 1970's [CODD70]. It has taken about 20 years for the
relational model to attain its present dominant position! Relational DBMSs have been developed
with the traditional business data processing applications in mind, such as: banking, payroll and
inventory control systems. The units of data needed for these applications are typically small and
have a simple flat structure. Furthermore, the operations performed on these units of data are
relatively straightforward and do not, in general, involve making recursive inferences.
In recent years there has been a growing demand to use databases in applications beyond
the traditional business applications, such as: Computer Aided Software Engineering (CASE),
hypertext, knowledge base systems, Computer Aided Design (CAD), image processing, scientific
data (such as satellite data) analysis and geographical data analysis. In these applications the
M. Levene is with the Department of Computer Science, University College London, Gower Street, London WC1E 6BT,
U.K., E-Mail address: [email protected].
G. Loizou is with the Department of Computer Science, Birkbeck College, University of London, Malet Street, London
WC1E 7HX, U.K., E-Mail address: [email protected].
units of data are typically larger and more complex than in the traditional business applications,
i.e. they are complex objects whose structure may be hierarchical or may have a more general
digraph structure. Furthermore, the operations needed to manipulate these complex objects may
not be straightforward to define and may involve making recursive inferences. As an example,
consider storing a VLSI chip layout in a database and defining the operations for modifying and
testing the chip. As another example, consider a library which would like to have available on-line
papers from scientific journals in a particular subject area such as Computer Science. The
task of organising the text included in individual papers in a manner that allows readers to browse
and query the text in a very flexible manner, and the task of creating the appropriate references
between papers cannot be carried out easily by using the relational model.
Currently database researchers are actively investigating new data models in order to be
able to manage efficiently applications not easily modelled using the relational approach, and are
implementing prototype DBMSs based on these new models. There are two main categories of
new data models that are being developed:
(1) set-based data models, such as the nested relational model [LEVE92, PARE89, THOM86,
VANG88], which extend the traditional relational model, and
(2) graph-based data models, such as the hypernode model [LEVE90] presented herein, which
build upon the traditional hierarchical and network data models [ULLM88].
Hereinafter we concentrate on a graph-based data model, namely the hypernode model.
Much less research has been carried out on graph-based data models and as yet there is no
agreement within the database community on a single graph-based data model. In contrast, the
relational model and its nested relational counterpart have been extensively investigated and provide
adequate formalisms for the development of set-based DBMSs.
We now informally introduce the three components of the hypernode model. The single
underlying data structure of the hypernode model is the hypernode, which is an equation of the
E) such that (N, E) is its digraph and G is its unique defining label. A hypernode
database is a finite set of hypernodes.
In Fig. 1 we illustrate part of a simple hypernode database, which models a simple airline
reservation system. The hypernode, whose defining label is AIRLINES, contains the defining
labels of other hypernodes which describe the various airlines, and the hypernode, whose defining
label is PASSENGERS, contains the defining labels of other hypernodes which describe the
booking information pertaining to passengers. The hypernode with defining label FLIES
represents a relationship telling us with which airline a particular passenger is flying. We note
that labels are denoted by strings beginning with an uppercase letter.
In Fig. 2 we show the details of some of the passenger hypernodes. We note that atomic
values are denoted by strings surrounded by double quotes and attribute names by strings
beginning with "$". We further observe that the attribute name $dependent in a passenger hypernode
is used to reference other passengers who in some unspecified way depend on this
for example, PASS2 and PASS3 may depend on PASS1 to drive them to the airport.
These references between hypernodes can be viewed conceptually as a part-of relationship or
alternatively as encapsulating the data represented in the referenced hypernodes. We use the distinguished
atomic value null to indicate that a "value exists but is unknown". We note that we can
also model incomplete information of the type "value does not exist" by isolated attribute names.
For example, if we delete the arc ($dependent, null) and the node null from the digraph of the
hypernode, whose defining label is PASS3, our interpretation changes from "PASS3 has a dependent
which is unknown" to "there does not exist a dependent of PASS3".
Fig. 1. Part of a passengers and airlines hypernode database.
The query and update language (or alternatively, the database language) for the hypernode
model, presented herein, is called HyperNode Query Language (HNQL). HNQL consists of a
basic set of operators for declarative querying and updating of hypernodes. In addition to the
standard deterministic operators we provide several non-deterministic operators (cf. [ABIT90]),
which arbitrarily choose a member from a set. Examples of the need for such non-determinism
are: choosing an arbitrary seat number for a passenger on a given flight or choosing an arbitrary
referenced paper on a specific topic from a given set of references. HNQL is further extended in a
procedural style by adding to the said set of operators an assignment construct, a sequential
Fig. 2. Some of the passengers in the hypernode database.
composition construct, a conditional construct for making inferences and, finally, for loop and
while loop constructs for providing iteration (or equivalently recursion) facilities.
Finally, we equip the hypernode model with an integrity constraint called the Hypernode
Functional Dependency (HFD). HFDs incorporate into the hypernode model Functional Dependencies
(FDs) from the relational model [PARE89, ULLM88] by using a graph-theoretic formalism.
We now very briefly overview some of the ramifications of the hypernode model which are
given in detail in the paper. We demonstrate that it is a natural candidate for formalising hyper-text
[CONK87, NIEL90] due to its support of a general-purpose digraph constructor and its ability
to support both navigation and declarative querying via HNQL. We then go on to investigate
the expressive power of HNQL in terms of the class of transformations from databases to data-bases
(termed computable updates) that can be expressed in HNQL. We present two classes of
computable updates and discuss the expressive power of HNQL with respect to these two classes.
Finally, using the hypernode model as a paradigm for graph-based data modelling, we show that
it is possible to bridge the gap between graph-based and set-based data models. This can be
achieved by a transformation from hypernodes to non-well-founded sets [ACZE88, BARW91].
Unfortunately, such a transformation is shown to be at least as hard as testing for isomorphism of
digraphs [BUCK90], whose complexity is, in general, an open problem [GARE79].
The rest of the paper is organised as follows. In Section II we present the three components
of the hypernode model. In Section III we show that the hypernode model can provide an underlying
formalism for hypertext. In Section IV we compare the hypernode model with other graph-based
data models and with set-based data models. In Section V we discuss the expressive power
of HNQL. In Section VI we show how to bridge the gap between graph-based and set-based data
models. Finally, in Section VII we give our concluding remarks and indicate further research
problems to be solved.
II. THE HYPERNODE MODEL
In this section we present the hypernode model which builds on the traditional graph-based
models, i.e. the hierarchical and network data models [ULLM88]. In particular, in Section II-A
we present the single underlying data structure of the model, namely the hypernode. In Section
II-B we present the query and update language of the model, i.e. HNQL, and in Section II-C we
show how FDs can be incorporated into the model in the form of HFDs.
A. Hypernodes and Hypernode Databases
The underlying data structure of the hypernode model is the hypernode, which is used to
represent real-world objects. We begin by recalling the definition of a directed graph (or simply a
digraph) [BUCK90]. A digraph is an ordered pair (N, E), where N is a finite set of nodes and E ⊆ N × N is a set of ordered pairs of nodes from N, which are called arcs (also known as directed
edges).
We use the following terminology for a digraph (N, E). An arc (n, m) ∈ E is said to be incident with each of its two nodes n and m. We call n the anchor and m the destination. We also say that n is adjacent to m and that m is adjacent from n. The indegree of a node n ∈ N is the
number of nodes adjacent to n and the outdegree of n is the number of nodes adjacent from n. A
node with no incident arcs is said to be isolated.
We assume the following two disjoint countable domains of constants are available. Firstly
we have a domain of Labels L whose elements are denoted by strings beginning with an upper-case
letter (excluding X and Y since these are used to denote variables). Secondly we have a
domain of Primitive nodes P which is partitioned into two disjoint domains one of Atomic
Values, AV, and the other of Attribute Names (or simply attributes), AN. We denote atomic
values by strings surrounded by double quotes and attributes by strings beginning with "$". We
also assume that the domain of atomic values AV contains a distinguished value null meaning
"value exists but is unknown".
A hypernode is now defined to be an equation of the form:
    G = (N, E)
where G ∈ L is termed the defining label of the hypernode (or simply the label of the hypernode when no ambiguity arises) and (N, E) is a digraph, termed the digraph of the hypernode (or simply the digraph of G), such that N ⊆ (P ∪ L).
We impose the following syntactic restrictions on the arcs of a hypernode G = (N, E):
(E1) the indegree of nodes n ∈ (AN ∩ N), i.e. of nodes that are attributes, is zero.
(E2) the outdegree of nodes n ∈ (AV ∩ N), i.e. of nodes that are atomic values, is zero.
(E3) if (n, m) ∈ E and n ∈ (L ∩ N), i.e. the anchor node of the arc is a label, then m ∈ (L ∩ N), i.e. the destination node of the arc is also a label.
In order to explain the motivation behind the above restrictions we take the approach of the
Entity-Relationship model [CHEN76] which asserts that the real world can be described by entities
(or objects which in our case are hypernodes), which are in turn represented by a set of attributes
and their values, and by relationships between entities.
We observe that an arc set of a digraph can be viewed as a (binary) relation on the nodes
which are incident on its arcs. Thus, the semantics of restriction E1 are that attributes cannot be in
the range of the relation induced by the arc set. Furthermore, an arc whose anchor is an attribute
represents an attribute-value pair (i.e. a property) whose destination node is its value, the value
being either an atomic value or a label. Thus, when an arc is incident with an attribute this attribute
must be the anchor of the arc. The semantics of restriction E2 are that atomic values cannot
be in the domain of the relation defined by the arc set. Thus, when an arc is incident with an
atomic value this value must be the destination of the arc. Finally, the semantics of restriction E3
are that when a label is in the domain of the relation defined by the arc set then an arc incident
with this label represents a relationship between two hypernodes, i.e between two objects. Thus,
a relationship between two hypernodes can be represented by an arc which is incident with their
defining labels. We observe that conceptually this kind of relationship can be viewed as a referential
relationship (see relational links [DERO89]).
It can easily be verified that the hypernodes shown in Fig. 1 and Fig. 2 satisfy restrictions
E1, E2 and E3. As was discussed in the introduction these hypernodes model part of a simple airline
reservation system detailing information about passengers and indicating with which airline a
particular passenger is flying. We note that each arc in the hypernode with the defining label
FLIES in Fig. 1 represents a referential relationship and that each arc in the passenger hypernodes
of Fig. 2 represents an attribute-value pair.
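As an illustration, restrictions E1, E2 and E3 can be enforced programmatically. The Python sketch below encodes nodes as strings following the notational conventions of the text (labels start with an uppercase letter, attributes with "$", atomic values are quoted or null); this encoding and the class interface are assumptions made for the example, not part of the model.

    # A sketch of a hypernode G = (N, E) that enforces the restrictions E1-E3.
    def is_label(n):     return n[:1].isupper()
    def is_attribute(n): return n.startswith("$")
    def is_atomic(n):    return n.startswith('"') or n == "null"

    class Hypernode:
        def __init__(self, label):
            self.label = label            # the unique defining label G
            self.nodes = set()            # N, a subset of primitive nodes and labels
            self.arcs = set()             # E, a set of ordered pairs over N

        def add_arc(self, anchor, destination):
            if anchor not in self.nodes or destination not in self.nodes:
                raise ValueError("both endpoints must be nodes of the digraph")
            if is_attribute(destination):
                raise ValueError("E1: an attribute may not be the destination of an arc")
            if is_atomic(anchor):
                raise ValueError("E2: an atomic value may not be the anchor of an arc")
            if is_label(anchor) and not is_label(destination):
                raise ValueError("E3: an arc anchored at a label must point to a label")
            self.arcs.add((anchor, destination))

    # Example: the attribute-value pair ($name, "Iris") of PASS3 is a legal arc.
    pass3 = Hypernode("PASS3")
    pass3.nodes.update({"$name", '"Iris"'})
    pass3.add_arc("$name", '"Iris"')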
A hypernode database (or simply a database), say HD, is a finite set of hypernodes satisfying
the following three conditions:
(H1) no two (distinct) hypernodes in HD have the same defining label.
(H2) for any label, say G, in the node set of a digraph of a hypernode in HD there exists a hypernode
in HD whose defining label is G.
(H3) every attribute name has the same meaning in all the node sets (of all the digraphs) of all
the hypernodes in HD in which this attribute name appears.
Given a database, HD, we denote by LABELS(HD) the set of labels appearing in the hypernodes
of HD, by PRIM(HD) the set of primitive nodes appearing in the hypernodes of HD, and
by ATT(HD) the set of attributes appearing in HD, i.e. PRIM(HD) ∩ AN.
We note that condition H1 above corresponds to the entity integrity requirement of
[CODD79], since each hypernode can be viewed as representing a real-world entity. In object-oriented
terminology [KIM90] labels are unique and serve as system-wide object identifiers,
assuming that all of the hypernodes known to the system are stored in a single database. Similarly, condition H2 corresponds to the referential integrity requirement of [CODD79], since it
requires that only existing entities be referenced. This implies that a relationship between two
hypernodes can also be represented in terms of a reference from one hypernode to the other
(rather than a reference via an arc between two labels in the digraph of a hypernode). We observe
that conceptually this kind of relationship can be viewed as a part-of relationship, which provides
the hypernode model with inherent support for data encapsulation (see inclusion links
[DERO89]). Condition H3 corresponds to the Universal Relation Schema Assumption originating
from the Universal Relation model [LEVE92, MAIE84]. For example, if the attribute, $title,
means the title of a document it cannot also mean the title of an author of a document. We note
that condition H3 can always be enforced by the renaming of attributes when a conflict does arise.
For example, we could have the attribute, $dtitle, meaning the title of a document, and the attribute, $atitle, meaning the title of an author of a document.
It can easily be verified that the hypernodes shown in Fig. 1 and Fig. 2 comprise a portion
of a hypernode database (if we add hypernodes for PASS5, PASS6, EA, BA, DA and AI, whose
node sets do not include any new labels, we would then have a database satisfying conditions H1,
H2 and H3). We note that by condition H1 each hypernode representing one of the objects in the
database has a unique label. Furthermore, the defining labels of the passenger hypernodes are
part-of the hypernode with the defining label PASSENGERS. Thus, by condition H2 there must
be one hypernode in the database for each passenger. Finally, we note that by condition H3 each
attribute included in the passenger hypernodes shown in Fig. 2 plays a unique role.
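Condition H2 (referential integrity) lends itself to a simple mechanical check. The Python sketch below represents a database as a dict from defining labels to (node set, arc set) pairs, so that H1 holds by construction; the representation is an assumption made for the example.

    # A sketch of checking condition H2 over a hypernode database.
    def is_label(n):
        return isinstance(n, str) and n[:1].isupper()

    def check_h2(db):
        """Return every label that is referenced in some digraph but has no hypernode in db."""
        dangling = set()
        for nodes, _arcs in db.values():
            for n in nodes:
                if is_label(n) and n not in db:
                    dangling.add(n)
        return dangling

    # Example fragment in the spirit of Fig. 1: FLIES references PASS1 and EA,
    # so hypernodes with those defining labels must also be present.
    db = {
        "EA":    (set(), set()),
        "PASS1": ({"$name", '"Mary"'}, {("$name", '"Mary"')}),
        "FLIES": ({"PASS1", "EA"}, {("PASS1", "EA")}),
    }
    assert check_h2(db) == set()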
The Hypernode Accessibility Graph (HAG) of a hypernode G = (N, E) ∈ HD (or simply the
HAG of G, whenever HD is understood from context) is the digraph telling us which hypernodes
in HD are part-of (or encapsulated in) the hypernode with the defining label G, when considering
part-of as a transitive relationship (cf. composite objects [KIM90]).
Formally, we define the HAG of G, denoted by (N_G, E_G), as the minimal digraph which is constructed from hypernodes in HD as follows:
(1) G ∈ N_G, and G is a distinguished node called the root of (N_G, E_G);
(2) if G′ ∈ N_G and G′ = (N′, E′) ∈ HD (such a hypernode must exist by condition H2), then, for every label G″ ∈ (N′ ∩ L), G″ ∈ N_G and (G′, G″) ∈ E_G.
We note that, in general, the HAG of G may be cyclic. In Fig. 3 we illustrate the HAG of
PASS1, where the hypernode with defining label PASS1 is shown in Fig. 2. We note that the
HAG of PASS1 is cyclic and thus PASS4 is part-of PASS1 and PASS1 is part-of PASS4, indicating
that PASS1 and PASS4 depend on each other.
Fig. 3. The HAG of PASS1.
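The HAG construction above is easily mechanised. The Python sketch below computes the HAG of a label by a breadth-first traversal over the same dict representation used earlier; it is an illustration only.

    # A sketch of computing the HAG of a label G over a database db: {label: (nodes, arcs)}.
    from collections import deque

    def is_label(n):
        return isinstance(n, str) and n[:1].isupper()

    def hag(db, g):
        nodes, arcs = {g}, set()                    # clause (1): G is the root
        queue = deque([g])
        while queue:
            current = queue.popleft()
            digraph_nodes, _ = db[current]          # exists by condition H2
            for n in digraph_nodes:
                if is_label(n):
                    arcs.add((current, n))          # clause (2): an arc per encapsulated label
                    if n not in nodes:
                        nodes.add(n)
                        queue.append(n)
        return nodes, arcs

    # For the database of Figs. 1 and 2, hag(db, "PASS1") would exhibit the cycle between
    # PASS1 and PASS4 induced by their mutual $dependent references.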
In order to simplify the presentation we assume that hypernodes are untyped, i.e. we do not
put any further constraints on the structure of hypernodes. Thus, hypernodes are dynamic in the
sense that nodes and arcs in hypernodes can be updated subject only to all of the above restrictions. In this approach we do not classify entities according to the entity set to which they belong
but rather consider entities to be classless [ULLM91] (cf. [RICH91]), i.e. belonging to a single
set of entities. In particular, all the available hypernodes are members of a single database.
(Types give us a means of defining database schemas and of enforcing further constraints on the
structure and content of hypernodes. An extension of the hypernode model to deal with typed
hypernodes and typed databases can be found in [POUL92].)
B. A Query and Update Language for Hypernodes
We now introduce a query and update language for the hypernode model, called
Query Language (HNQL). HNQL consists of a basic set of operators for declarative querying and
updating of hypernodes. HNQL is further extended in a procedural style by adding to the basic set
of operators an assignment construct, a sequential composition construct, a conditional construct
for making inferences and, finally, for loop and while loop constructs for providing iteration (or
equivalently recursion) facilities.
Apropos of HNQL we assume that a countable domain of variables, V, is available, disjoint from
the domains of labels and primitive nodes. We denote variables by strings beginning with the uppercase
letters X or Y. Variables in HNQL are untyped, i.e. their values range over the union of the
domains of primitive nodes and labels.
From now on we assume that HD is a hypernode database and that all the operators we
define are to be evaluated with respect to HD. We also assume that the label NULL ∉
LABELS(HD) is reserved in order to return an error code when necessary. Notationally, we will
use strings beginning with the lowercase letter, v, to denote either a label or a primitive node and
strings beginning with the uppercase letter G to denote labels only.
The following four operators update hypernodes in the database, HD:
(1) insert_node(G, v) returns G if G = (N, E) ∈ HD, and as a side effect v is inserted into N, i.e.
N := N ∪ {v}; otherwise NULL is returned.
(2) delete_node(G, v) returns G if G = (N, E) ∈ HD and for all v' ∈ N there is no arc (v, v') ∈ E or
(v', v) ∈ E, and as a side effect v is deleted from N, i.e. N := N − {v}; otherwise NULL is
returned.
(3) insert_arc(G, v1, v2) returns G if G = (N, E) ∈ HD and v1, v2 ∈ N, and as a side effect
(v1, v2) is inserted into E, i.e. E := E ∪ {(v1, v2)}; otherwise NULL is returned.
(4) delete_arc(G, v1, v2) returns G if G = (N, E) ∈ HD and (v1, v2) ∈ E, and as a side effect
(v1, v2) is deleted from E, i.e. E := E − {(v1, v2)}; otherwise NULL is returned.
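As an illustration only, the four operators can be sketched in Python over the same dictionary representation used in the earlier sketch (a label maps to a pair of mutable sets); the NULL sentinel and this representation are assumptions of the sketch.

NULL = None

def insert_node(HD, G, v):
    if G not in HD:
        return NULL
    nodes, arcs = HD[G]
    nodes.add(v)                              # N := N union {v}
    return G

def delete_node(HD, G, v):
    if G not in HD:
        return NULL
    nodes, arcs = HD[G]
    if any(v in (a, b) for (a, b) in arcs):   # v must not occur in any arc
        return NULL
    nodes.discard(v)                          # N := N minus {v}
    return G

def insert_arc(HD, G, v1, v2):
    if G not in HD:
        return NULL
    nodes, arcs = HD[G]
    if v1 not in nodes or v2 not in nodes:
        return NULL
    arcs.add((v1, v2))                        # E := E union {(v1, v2)}
    return G

def delete_arc(HD, G, v1, v2):
    if G not in HD or (v1, v2) not in HD[G][1]:
        return NULL
    HD[G][1].discard((v1, v2))                # E := E minus {(v1, v2)}
    return G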
The following two operators add or remove hypernodes from the database, HD:
(1) create() returns an arbitrary new label G such that G ∉ (LABELS(HD) ∪ {NULL}), and as
a side effect the empty hypernode G = (∅, ∅) is added to HD, i.e. HD := HD ∪ {G = (∅, ∅)}.
(2) destroy(G) returns the label G if G = (∅, ∅) ∈ HD and for no hypernode G' = (N', E') ∈
HD is it true that G ∈ N', and as a side effect G = (∅, ∅) is removed from HD, i.e. HD := HD − {G = (∅, ∅)}; otherwise NULL is returned.
The following five predicates provide membership tests for a node or an arc being contained
in a given hypernode, for a defining label of a hypernode being in the database, HD, or for the
digraph of a hypernode to contain a given node or a given arc:
(1) returns true if G = (N, E) ∈ HD and v ∈ N; otherwise false is returned.
(2) returns true if G = (N, E) ∈ HD and (v1, v2) ∈ E; otherwise false is
returned.
(3) returns true if G = (N, E) ∈ HD; otherwise false is returned.
(4) returns true if G = (N, E) ∈ HD and v ∈ N; otherwise false is
returned.
(5) returns true if G = (N, E) ∈ HD and (v1, v2) ∈ E; otherwise false
is returned.
We also allow the two equality tests, v1 = v2 and G1 = G2,
which return true or false as the case may be.
We define a simple condition to be either a membership test or an equality test. A condition
is now defined to be either a simple condition, the parenthesising of a condition used for grouping
purposes, the negation of a condition, say cond, denoted by !cond, or the conjunction of two con-
ditions, say cond 1 and cond 2 , denoted by cond 1 & cond 2 .
The following five non-deterministic operators can be used to arbitrarily choose a node or
an arc contained in a given hypernode, or arbitrarily choose a defining label of a hypernode in the
database, HD, or one containing a given node or a given arc:
(1) any_node(G) returns an arbitrary node v ∈ N if G = (N, E) ∈ HD and N ≠ ∅; otherwise
NULL is returned.
(2) any_arc(G) returns an arbitrary arc (v1, v2) ∈ E if G = (N, E) ∈ HD and E ≠ ∅; otherwise
(NULL, NULL) is returned.
(3) any_label() returns an arbitrary label G such that G = (N, E) ∈ HD, if HD ≠ ∅; otherwise
NULL is returned.
(4) returns an arbitrary label G such that G = (N, E) ∈ HD and v ∈ N;
otherwise NULL is returned.
(5) returns an arbitrary label G such that G = (N, E) ∈ HD and (v1, v2) ∈ E; otherwise
NULL is returned.
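A possible reading of three of these operators in Python, again over the dictionary representation assumed above, is sketched below; random.choice merely stands in for an implementation-defined arbitrary choice.

import random

NULL = None

def any_node(HD, G):
    if G not in HD or not HD[G][0]:
        return NULL
    return random.choice(list(HD[G][0]))

def any_arc(HD, G):
    if G not in HD or not HD[G][1]:
        return (NULL, NULL)
    return random.choice(list(HD[G][1]))

def any_label(HD):
    return random.choice(list(HD)) if HD else NULL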
Hereafter we assume that all variables in HNQL have a current value, which is either a label
or a primitive node; these are always initialised to have the value NULL. Thus, we extend our
earlier notation to allow strings beginning with the letters v or G to denote the current value of a
variable when appropriate. We now define an assignment statement to be an expression of the form
lvalue := rvalue
where lvalue is a variable or a pair of variables, and rvalue is a constant, or a variable, or any of
the possible pairs of these two, or one of the HNQL operators defined so far.
The semantics of an assignment statement are that the current value of lvalue becomes the
result of evaluating rvalue on the current state of the hypernode database, HD (and possibly
updating HD as a side effect). We note that evaluating a constant on HD returns the constant
itself and that evaluating a variable on HD returns its current value. We assume that if the assignment
is undefined, for example, when trying to assign a pair of constants to a variable, or a constant
to a pair of variables, then lvalue is assigned the value NULL or (NULL, NULL), respec-
tively. This is consistent with the standard destructive assignment of imperative programming
languages such as Pascal [JENS85] for defining the value of a variable.
Statements (which can be assignment statements or one of the other kinds of statements
defined subsequently) can be composed sequentially using ";" as a statement separator. Further-
more, we use the keywords TB (transaction begin) and TE (transaction end) to delimit such compound
statements in analogy to the begin and end keywords used in Pascal. For convenience we
may omit TB and TE when the compound statement contains only a single statement. We
note that since a compound statement is a statement, nesting of compound statements is made
possible.
The compound statement, shown in Fig. 4, deletes the arc ($dependent, null) and the node
null from the hypernode with defining label PASS3 and then inserts the node PASS5 and the arc
($dependent, PASS5) into this hypernode. Finally, an arbitrary arc is deleted from the hypernode
with defining label FLIES.
The syntax of a conditional statement is defined as follows:
if condition then
compound statement
The semantics of a conditional statement are that if the condition evaluates to true on the current
state of the database, HD, then the compound statement is executed on the current state of HD.
On the other hand, if the condition evaluates to false then the compound statement is not executed
at all.
The conditional statement, shown in Fig. 5, deletes the arc ($dependent, null) and the node
null from the hypernode with defining label PASS3 and then inserts the node PASS2 and the arc
($dependent, PASS2) into it, if both PASS2 and PASS3 are flying on flight_no "EA121" and
PASS2 is not already a dependent on PASS3.
Fig. 4. An example of a compound statement.
Fig. 5. An example of an if statement.
We next define two types of loop: for loops, which give us a bounded looping construct,
and while loops, which give us an unbounded looping construct [CHAN88].
The syntax of a for loop is defined as follows:
for_all for_predicate do
compound statement
where for_predicate is one of the five membership testing predicates defined above, with variables allowed in place of nodes, arcs and labels (e.g. X ∈ nodes(G) or (X1, X2) ∈ arcs(G)).
The semantics of a for loop are now described. Firstly, the for_predicate is evaluated on the
current state of the database, HD, prior to the execution of the for loop. The evaluation is effected
once for each possible substitution of the variables in the for_predicate with values from
PRIM(HD) ∪ LABELS(HD). Thereafter the compound statement is executed synchronously in
parallel on the current state of HD once for each time the for_predicate evaluates to true with the
evaluation as indicated above. (We note that the semantics of a compound statement being executed
in parallel are that the statements, which comprise this compound statement, are also to be
executed synchronously in parallel.) We further observe that the compound statement is always
executed only a finite number of times, i.e. the looping is bounded.
The for loop, shown in Fig. 6, modifies flight number "BA212" to "BA345" for all
passengers in the database.
The syntax of a while loop is defined as follows:
while changes do
compound statement
do
do
if (Y1,
Fig. 6. An example of a for loop.
The semantics of a while loop are that the compound statement is repeatedly executed on the
current state of the database HD until no further changes are effected on the current state of HD.
That is, the compound statement is executed until a fixpoint is attained. We observe that, in gen-
eral, a while loop may not terminate (since a fixpoint may not be attainable), i.e. the number of
times the compound statement is executed may be unbounded.
The while loop, shown in Fig. 7, transitively closes the digraph of a hypernode, G = (N, E).
Note that we have omitted TB and TE, since this while loop comprises a single statement.
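The fixpoint behaviour of a while loop can be illustrated with the following Python sketch of the transitive-closure computation that Fig. 7 describes; the dictionary representation of the database is again an assumption of the sketch.

def transitively_close(HD, G):
    nodes, arcs = HD[G]
    changes = True
    while changes:                            # "while changes do"
        changes = False
        for (x1, x2) in list(arcs):
            for (y1, y2) in list(arcs):
                if x2 == y1 and (x1, y2) not in arcs:
                    arcs.add((x1, y2))        # insert_arc(G, x1, y2)
                    changes = True

HD = {"G": ({"a", "b", "c"}, {("a", "b"), ("b", "c")})}
transitively_close(HD, "G")
print(HD["G"][1])                             # now also contains ("a", "c")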
An HNQL program is now defined to be a compound statement terminated by a full-stop,
i.e. it is a sequential composition of one or more of the above kinds of statements (including a
compound statement itself). The HNQL program, shown in Fig. 8, oscillates between updating
the digraph of a hypernode G = (N, E) to be irreflexive and updating it to be reflexive. We note
that this program does not terminate.
while changes do
for_all (X1, X2) - arcs(G) do
do
Fig. 7. An example of a while loop.
while changes do
do
if (X1, X1) - arcs(G) then
do
if !(X1, X1) - arcs(G) then
TE.
Fig. 8. An example of an HNQL program.
C. Hypernode Functional Dependencies
Functional Dependencies (FDs) are by far the most common integrity constraint in the real
world [PARE89, ULLM88] and the notion of key (derived from a given set of FDs) [CODD79] is
fundamental to the relational model. FDs have also been extended to nested relations in
[LEVE92, THOM86, VANG88]. Essentially, by allowing attribute domains to be relation-valued
(i.e. nested relations) FDs are capable of modelling both single-valued and multi-valued data
dependencies. In [LEVE91] it was shown that such extended FDs can be naturally incorporated
into a hypergraph-based data model which was the precursor of the hypernode model. FDs have
also been incorporated into the graph-based model GOOD [GYSS90] in the form of functional
and multi-valued arcs. Finally, a more general type of FD, called a path FD (PFD), was defined
in [WEDD92] for an object-oriented data model, wherein both the class schemas and the
instances thereof are interpreted as digraphs. A sound and complete axiomatisation of PFDs was
exhibited and it was also shown that, in general, if the schema is cyclic then there may be an
infinite number of derived PFDs.
We now show how the concept of FDs can be incorporated into the hypernode model by
using a graph-theoretic formalism.
We recall that a subgraph, (N', E'), of (N, E), is a digraph such that N' ⊆ N and E' ⊆ E. The
induced subgraph of (N, E) with node set S, denoted by induced(S, (N, E)), is the maximal sub-graph
of (N, E) whose node set is S [BUCK90].
Let HD be a hypernode database, G = (N, E) ∈ HD and A ⊆ (N ∩ ATT(HD)). We denote
by adj(A) the set of attributes A together with the set of all nodes m ∈ N that are adjacent from
any node n ∈ A.
We next give a definition of a FD in HD. Informally a set of attributes, A ⊆ ATT(HD),
functionally determines another set of attributes, B ⊆ ATT(HD), if for each attribute, $b ∈ B,
whenever the induced subgraphs, each with node set adj(A), of two digraphs of hypernodes in HD
are equal, then the corresponding induced subgraphs, each with node set adj({$b}), of these two
digraphs are also equal.
More formally, let A, B ⊆ ATT(HD) be two sets of attributes. Then, the Hypernode Functional
Dependency (HFD), A → B, is satisfied in HD if for all $b ∈ B and for every pair of (not
necessarily distinct) hypernodes G1 = (N1, E1), G2 = (N2, E2) ∈ HD, whenever induced(adj(A), (N1, E1)) = induced(adj(A), (N2, E2)),
it is also the case that induced(adj({$b}), (N1, E1)) = induced(adj({$b}), (N2, E2)).
An example of an HFD holding in our simple airline reservation database is one
which asserts that a passenger's seat number and flight number uniquely determine their
name and their dependents. It can easily be verified that the passenger hypernodes in Fig. 2
satisfy this HFD.
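The following Python sketch checks whether an HFD A → B is satisfied in a database held in the dictionary representation assumed earlier; the helper names adj, induced and satisfies_hfd are ours, and the sketch simply restates the definition above.

def adj(attrs, nodes, arcs):
    # attrs together with all nodes adjacent from a node in attrs
    return set(attrs) | {m for (n, m) in arcs if n in attrs}

def induced(S, nodes, arcs):
    keep = S & nodes
    return (frozenset(keep),
            frozenset((n, m) for (n, m) in arcs if n in keep and m in keep))

def satisfies_hfd(HD, A, B):
    digraphs = list(HD.values())
    for (n1, e1) in digraphs:
        for (n2, e2) in digraphs:
            if induced(adj(A, n1, e1), n1, e1) == induced(adj(A, n2, e2), n2, e2):
                for b in B:
                    if (induced(adj({b}, n1, e1), n1, e1)
                            != induced(adj({b}, n2, e2), n2, e2)):
                        return False
    return True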
In the following we assume that ATT(HD) is a fixed set of attributes, U, for any hypernode
database, HD, that F is a set of HFDs and that A → B is a single HFD.
We denote the fact that a database HD satisfies F (respectively, A → B) by HD |= F
(respectively, HD |= A → B). We say that F logically implies A → B (with respect to a class of
hypernode databases), denoted by F |= A → B, if and only if for every hypernode database HD in
the given class if HD |= F then HD |= A → B.
An axiom system for HFDs (for a given class of hypernode databases) is a set of inference
rules that can be used to derive HFDs from a given set F of HFDs. We denote by F |- A → B the
fact that either A → B ∈ F or A → B can be inferred from F by using one or more of the inference
rules in a given axiom system for HFDs. We define the closure of a set of attributes, A, with
respect to F, denoted by A+, to be the set of all attributes such that $b ∈ A+ if and only if F |- A
→ {$b}. Finally, an axiom system is sound if F |- A → B implies that F |= A → B and it is complete
if F |= A → B implies that F |- A → B.
We now define an axiom system for HFDs with respect to the class of all hypernode databases:
(R1) Reflexivity: if B ⊆ A ⊆ U, then F |- A → B.
(R4) Decomposition: if F |- A → B, then for all $b ∈ B, F |- A → {$b}.
We observe that the transitivity rule (i.e. if F |- A → B and F |- B → C, then F |- A → C),
which is sound for FDs with respect to relational databases, is not sound as an inference rule for
HFDs. Consider the following counterexample. Let HD be the database shown in Fig. 9, where
A, C and D denote pairwise disjoint sets of attributes. It can easily be verified that HD |= A → D,
HD |= D → C but not HD |= A → C.
Fig. 9. A hypernode database satisfying A → D and D → C but not A → C.
Theorem 1. The axiom system comprising inference rules R1-R4 is sound and complete for
the class of hypernode databases.
Proof: See Appendix.
We note that the above axiom system comprising, R1-R4, was also shown to be sound and
complete for FDs with respect to the class of relational databases with a single unmarked null
value [LIEN82]. This implies that, within the hypernode model, we can capture the semantics of
incomplete information proposed in [LIEN82] without explicitly storing the missing information
in the database. These semantics fit in well with nulls of the type "value does not exist", which
can be modelled by isolated attribute names (see Section I).
We next show how condition H1, which asserts the uniqueness of the defining label of
every hypernode in a database HD, can be used explicitly to define the concept of key in the
hypernode model. Let $id ∈ AN be a distinguished attribute such that $id ∉ PRIM(HD), and
assume that the HNQL program, shown in Fig. 10, enhances HD by adding to each hypernode,
say G = (N, E), in the database an arc ($id, G). We call the resulting database an enhanced
database; obviously the class of enhanced databases is a proper subset of the class of
hypernode databases.
do
Y := insert_node(X, $id);
Y := insert_node(X, X);
Y := insert_arc(X, $id, X);
TE.
Fig. 10. An HNQL program to enhance a hypernode database.
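In the dictionary representation assumed in the earlier sketches, the effect of the program in Fig. 10 can be written as follows (a sketch, not the HNQL program itself):

def enhance(HD):
    for G, (nodes, arcs) in HD.items():
        nodes.add("$id")         # insert_node(G, $id)
        nodes.add(G)             # insert_node(G, G)
        arcs.add(("$id", G))     # insert_arc(G, $id, G)
    return HD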
We now add the following two inference rules to our axiom system for HFDs.
Theorem 2. The axiom system comprising inference rules R1-R6 is sound and complete for
the class of enhanced databases.
Proof: See Appendix.
III. THE HYPERNODE MODEL AS AN UNDERLYING MODEL FOR HYPERTEXT
Hypertext [CONK87, NIEL90] is text that can be read nonsequentially, in contrast to traditional
text, for example in book form, which has a single linear sequence defining the order in
which the text is to be read. Hypertext presents several different options to readers, and the individual
reader chooses a particular sequence at the time of reading. A hypertext database (known
as a network in hypertext terminology) is a digraph (in [TOMP89] a directed hypergraph is con-
sidered) whose nodes represent units of information and whose arcs (known as links in hypertext
terminology) allow the reader to navigate from an anchor node to a destination node. In the context
of this paper we only consider textual units of information but, in general, hypertext
databases integrate into the system multimedia data types such as graphics, sound and video. The
activity of navigating within a hypertext database by traversing links and examining the text associated
with destination nodes is called browsing. As was pointed out in [HALA88] browsing does
not provide sufficient access to a hypertext database, since the database may have a complex
digraph structure rendering navigation difficult. This may cause readers to get "lost in hyper-
space" [CONK87, NIEL90], i.e. readers may not know where they are and/or do not know how to
get to some other position in the network. Therefore, declarative querying and searching facilities
are needed in order to complement browsing. Querying can be done via the structure of the
digraph using a query language (Graphlog was suggested in [CONS89]) and searching can be
done by textual content using full-text retrieval techniques.
The hypernode model possesses a number of features which make it a natural candidate for
being a formal model for hypertext.
Firstly, a hypernode is a digraph structure with two built-in link types. The first link type is
the arc representing a referential relationship and the second link type is the encapsulated label
representing a part-of relationship. Furthermore, attributes allow us to give additional semantics
to nodes. In fact, hypernodes can model arbitrary complex objects. In order to support text
directly we can assume that the domain of atomic values is actually a domain of textual fragments
over which full-text retrieval operations are possible. In Fig. 11 we show part of a hyper-text
database, called PAPERS, which stores on-line papers from scientific journals. In particular,
the figure shows an overview diagram [NIEL90] of the papers that are adjacent to PAP1 (i.e.
PAP7 and PAP3) and adjacent from PAP1 (i.e. PAP11, PAP4 and PAP15); we assume that PAP1
is currently being browsed. The hypernodes encapsulated in IN1, IN2, OUT1, OUT2 and OUT3
are annotations of links [NIEL90]. An annotation of a link provides additional information about
the link such as the name of the creator of the link, the date it was created and the subject matter
of the link (see Fig. 12 for the details of the annotation OUT1). In addition to the annotation
OUT1, Fig. 12 shows the hypernode PAP1, which is currently being browsed and two of its
encapsulated hypernodes, AUTH1 (showing the details of one of the authors of the paper) and
(which contains the actual text of the paper).
Secondly, the hypernode model can provide for browsing and declarative querying facilities
via HNQL. HNQL can also cater for authoring [NIEL90] via its update facilities. Finally, within
the context of the hypernode model we can reason about integrity constraints (see Section II-C) in
a hypertext database. In summary we view hypertext as a promising application of the hypernode
model.
IV. COMPARISON OF THE HYPERNODE MODEL TO OTHER DATA MODELS
A. Comparison to other graph-based data models
Fig. 11. Part of a hypertext database.
We briefly compare the hypernode model to other recent graph-based data models with
respect to their data modelling capabilities. In particular, we deal with the Logical Data Model
(LDM) [KUPE84], GOOD and G-Log [GYSS90, PARE91], and Graphlog [CONS90].
In all of the above graph-based data models the database consists of a single digraph, while
a hypernode database consists of a finite set of digraphs. This unique feature of the hypernode
model permits data encapsulation and the ability to represent each real-world object in the data-base
separately.
In LDM database schemas are represented by digraphs and their instances are represented as
two-column tables each of which associates entities of a particular type (which is either a primitive
type, a tuple type or a set type) with their corresponding values. In the hypernode model we
have a single data structure, i.e. the digraph, which as was shown in [LEVE90] has the ability to
represent all of these types.
In GOOD, G-Log and LDM there is a separation between the database schema and the data-base
instance, while the hypernode model, as presented herein, has no such separation, since
hypernodes are untyped. This has the advantage that changes to the database can be dynamic but
on the other hand it has the disadvantage that typing constraints cannot be imposed on the data-base
Unlike GOOD, G-Log and Graphlog, we do not label arcs in the hypernode model. How-
ever, we can attain the same data modelling expressiveness by including arcs, which have the
same label in a GOOD, G-Log or Graphlog digraph, within the arc set of a single hypernode
Fig. 12. Some hypernodes in the hypertext database. (The figure shows the annotation OUT1 with $creator "J.Bloggs", $date "22.4.92" and $subject "hypertext"; the paper hypernode PAP1 with attributes $title "The Hypernode Model ...", $abstract, $author and $text; and the author hypernode AUTH1 with $name "M. Levene", $college "UCL" and $address "London".)
whose defining label is this same label.
We next briefly compare the hypernode model to object-oriented data models [KIM90]
again with respect to their data modelling capabilities. Typically, object-oriented data models
support tuple, set and list data constructors, which are used to define the type of an object. Each
object has a unique object identity and belongs to only one class. The class of an object defines
both its structure and its behaviour, i.e. its type and the methods it responds to.
The hypernode model supports only a single general-purpose digraph constructor, which, as
was mentioned above, has the ability to represent tuple, set and list constructors. Hypernodes are
provided with object identity via their unique labels. We do not support classes in the hypernode
model, since hypernodes are untyped. Furthermore, we do not support methods in the hypernode
model, since we have a general-purpose database language, i.e. HNQL, which allows us to
directly pose to the database any query or update definable in HNQL. As was noted in [ULLM91]
such a general-purpose database language is essential in database applications such as scientific
and market applications (see [TSUR91] for more details on such applications), where a large
variety of queries and updates, which cannot always be planned in advance, may be posed to the
database.
B. Comparison to set-based data models
Set-based data models such as the relational model [CODD70, PARE89, ULLM88] and
the nested relational model [LEVE92, PARE89, THOM86, VANG88] are value-based. That is,
tuples in relations (respectively nested relations) are identified solely by their attribute values. On
the other hand, graph-based data models such as the hypernode model are object-based, i.e.
hypernodes are identified by unique labels that serve as system-wide object identifiers.
In [ULLM88] it is argued that query and update languages for value-based data models are,
in general, more declarative than those for object-based data models. For example, in [ULLM91]
it is shown that the consequence of attaching object identity to each path [BUCK90] in a digraph
may cause some undesirable side-effects. In particular, when trying to generate the set of all paths
in a digraph we may mistakenly generate the set of all walks [BUCK90], since different walks
that are induced by the same path have different object identities. If the digraph is cyclic then an
infinite number of walks (i.e. objects) will be generated. Furthermore, if we are only interested in
the reachability relation between the nodes of a digraph, then generating all the walks first is
obviously ineffective. Although declarativeness is generally desirable, we firmly believe that
navigational features supported by languages for graph-based data models such as the hypernode
model are necessary in certain applications such as hypertext. Furthermore, we claim that there
need not be any loss of data independence in object-based data models and thus their query and
update languages need not be less declarative than those for value-based data models. We substantiate
this claim with reference to our model as follows: the unique labels of hypernodes are
system generated via the operator create() (they should be machine-independent to allow portabil-
ity) and therefore their internal values are hidden from the users. In order to overcome this prob-
lem, unique meaningful aliases can be used at the conceptual level of the database in order to
identify the defining labels of hypernodes, as we have demonstrated throughout the paper.
We close this section with a brief mention of integrity constraints in graph-based and set-based
data models. In Section II-C we showed how FDs can be incorporated into the hypernode
model. In order for the theory to be comprehensive we need to extend our results to include other
kinds of integrity constraint such as inclusion dependencies [VARD88], which can be used to
enforce referential integrity constraints. In the context of the relational model there is a plethora
of data dependencies which are described in [VARD88]. It would be a pity not to tap this wealth
of ideas when investigating data dependencies for graph-based data models. In [LEUC91] a step
has been taken in this direction by reconstructing tuple and equality generating dependencies
[VARD88] in a graph-theoretic setting. In the context of an object-oriented data model PFDs
mentioned in Section II-C have been generalised to path constraints [COBU91], which can also
assert equations and typing constraints.
V. ON THE EXPRESSIVE POWER OF HNQL
A fundamental measure of the expressive power of a query and update language is the class
of transformations from databases to databases (termed computable updates) that such a language
can express. We cannot use directly the standard notion of a Turing computable mapping from
strings to strings to define the said class for two reasons. Firstly, database domains are normally
abstract and thus do not have a built-in ordering; this is called the genericity requirement (in
[CHAN80] the genericity requirement is called the consistency criterion). For example, the
domains L and P of the hypernode model are abstract domains and are thus uninterpreted with
respect to any ordering that can be defined on them. Secondly, we may introduce non-determinism
into the database language in two ways:
(1) by allowing the creation of new objects with arbitrarily chosen object identifiers (as we do
by using the create() operator of HNQL defined in Section II-B); and
(2) by introducing explicit non-deterministic operators into the language (as we do via the five
non-deterministic operators of HNQL also defined in Section II-B).
The non-determinism introduced by (1) is motivated by the fact that the internal values of
object identifiers are hidden from the users and therefore, at least from the users' point of view,
their generation should be non-deterministic. On the other hand, the non-determinism introduced
by (2) is motivated by the need to answer queries such as: choose an arbitrary seat number for a
passenger on a given flight or choose an arbitrary referenced paper on a specific topic from a
given set of references.
As a result of the above we define two classes of computable updates: generic computable
updates which cater for the genericity requirement and arbitrary-order computable updates
which cater for the non-determinism introduced by (1) and (2). We show that the former class is
a special case of the latter class and then investigate the expressiveness of HNQL with respect to
the class of arbitrary-order computable updates.
We first introduce some useful notation. In the following we let HD be a database, p be an
isomorphism from L ∪ P to ω, where ω is the set of all natural numbers, p^-1 be the inverse of p
and d be a Turing computable mapping from strings to strings. We next define certain auxiliary
operators used in the sequel:
(1) encode(p)(HD) returns a standard encoding [GARE79], say SE, of p(HD). (We note that p
preserves the adjacency of the digraphs of the hypernodes in HD.)
(2) decode(p^-1)(SE) returns a database p^-1(ISE), where ISE is the result of computing the
inverse of the standard encoding SE output by the encode operator.
(3) d(SE) denotes the result of computing d via a Turing machine that computes d with input
SE.
We observe that p is necessary in defining encode and decode, since when a set is
represented by a string, the elements of the set need to be ordered [GARE79].
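A minimal sketch of encode in Python follows; here p is modelled as a dictionary assigning a distinct natural number to every label and primitive node, and the particular textual encoding (the begin/node/edge/end tokens) is an assumption of the sketch rather than the precise encoding of [GARE79].

def encode(p, HD):
    parts = []
    for G in sorted(HD, key=lambda g: p[g]):              # order fixed by p
        nodes, arcs = HD[G]
        parts.append("begin " + str(p[G]))
        parts.extend("node " + str(p[n])
                     for n in sorted(nodes, key=lambda n: p[n]))
        parts.extend("edge %d %d" % (p[a], p[b])
                     for (a, b) in sorted(arcs, key=lambda e: (p[e[0]], p[e[1]])))
        parts.append("end")
    return "; ".join(parts)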
We are now ready to formalise the notion of a computable update.
A mapping t, from databases to databases, is a computable update if there exists a Turing
computable mapping d from strings to strings and an isomorphism p such that, for every database HD,
t(HD) = decode(p^-1)(d(encode(p)(HD))).
A query and update language is update complete with respect to a class of computable
updates if and only if it expresses all and only all the computable updates in that class.
We observe that in our context a query can be considered to be a special case of an update,
since we can always put the result of a query into a distinguished hypernode and remove this
hypernode from the database after the user has inspected its contents.
A. Generic Computable Updates
We now introduce the notion of a generic computable update.
A computable update t is generic if it commutes with every isomorphism r that maps primitive
nodes to primitive nodes and labels to labels, i.e. for any database, HD, r(t(HD)) =
t(r(HD)).
We note that in [ABIT90] a more general definition of genericity is considered where a
finite set C of constants (or primitive nodes in the case of the hypernode model), some of which
may appear in the query or update itself, are mapped to themselves. Herein for simplicity we have
assumed that C is the empty set. We further note that genericity is a consequence of the requirement
that a database provide a high degree of data independence. This is due to the fact that data
independence requires a computable update to be independent of the internal representation of the
data. In particular if an ordering, imposed on the underlying domains at the physical level of the
database, is not known at the conceptual level, then such an ordering should not affect the result
of a given update.
B. Arbitrary-Order Computable Updates
Generic computable updates do not allow us to express certain computable updates such as
"choose a member from a set", since such an update is not generic. As noted in [ABIT89], this
update can easily be expressed in the presence of a total ordering on the underlying domains,
since we can then treat a set as an ordered list without duplicate elements. Alternatively, we can
allow this sort of update to be expressed by introducing non-deterministic operators, such as the
non-deterministic operators of HNQL, which permit us to choose a member from a set by introducing
an arbitrary order on the members of the set (cf. the cut operator in Prolog [CLOC81] and
the choice predicate in LDL [NAQV89]).
We next define a class of computable updates which takes into account the added expressiveness
of the non-deterministic operators of HNQL. We then investigate the expressive power
of HNQL with respect to the said class of computable updates.
A binary relation t, from databases to databases, is an arbitrary-order computable update
(or simply an AO computable update) if t is a computable update up to a choice of p, which is
used when computing t.
We observe that by the definition of an AO computable update we may have (HD, HD1) ∈
t and (HD, HD2) ∈ t resulting from two different choices of p, say p1 and p2.
We note that if L ∪ P has a fixed natural ordering, which is always used when computing t,
then the definition of an AO computable update would degenerate to the definition of a computable
update, since only one choice of p would ever be used. We further note that, in general, the
definition of an AO computable update is weaker than the definition of a computable update in
the sense that we do not assume that a given choice of p is a priori available when computing t.
The following lemma shows that generic computable updates are just a special case of AO
computable updates.
Lemma 3. t is a generic computable update if and only if t is independent of the choice of
p, which is used when computing t.
Proof: See Appendix.
If HNQL's non-deterministic operators (including the create() operator) are to be interpreted
as AO computable updates, they must behave deterministically once a choice of p is made.
We now formalise this approach by defining the semantics of HNQL's non-deterministic operators
Let S be a set over L ∪ P (that is we make no assumption about the internal structure of S).
We now define the following three auxiliary operators:
(1) member(S) returns an arbitrary member s - S.
(2) list(S, p) returns the list resulting from imposing an ordering p on the set S.
(3) first(L) returns the first element of the list L.
We next formalise the notion of "returns an arbitrary member" by replacing the definition of
member(S) in (1) above by
(a) member(S) = first(list(S, p)), given a choice of p.
Thus, we have replaced the choice of an arbitrary member in the definition of member(S) by
a choice of an arbitrary ordering imposed by a given choice of p, with the returned value of
member(S) being the first element of the chosen ordering on S.
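A small Python sketch of this determinisation, with p modelled as a dictionary of ranks (an assumption of the sketch):

def list_under_p(S, p):
    return sorted(S, key=lambda x: p[x])       # list(S, p)

def first(lst):
    return lst[0]

def member(S, p):
    return first(list_under_p(S, p))           # definition (a)

p = {"a": 2, "b": 0, "c": 1}
print(member({"a", "b", "c"}, p))              # always "b" for this choice of p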
We can now make the assumption that when an HNQL program, say Prog, which may contain
non-deterministic operators, is evaluated we can utilise (a) by making a particular choice of
p prior to the evaluation of Prog. That is, when a non-deterministic HNQL operator in Prog
returns an arbitrary member (which may be a node, an arc or a label) from a given set, say S, (a)
is used (with the particular choice of p) in order to compute the returned value of member(S).
From now on we call this assumption with respect to the operational semantics of HNQL's non-deterministic
operators assumption (a). We note that the create() operator can also utilise
assumption (a) by returning the least label not present in LABELS(HD'), where HD' is the
current state of HD, according to the ordering imposed by the choice of p. We illustrate the
deterministic behaviour resulting from assumption (a) with a simple example. Let HD be a given
hypernode database and assume that the simple HNQL program, shown in Fig. 13, is executed with
respect to HD.
Y := delete_node(G, X);
TE.
Fig. 13. A simple HNQL program.
After the above program is executed the current state of HD is either
depending on whether the choice of p induces
We are now ready to state the main result of this section, which characterises the expressiveness
of HNQL, given assumption (a), as an operational semantics to the non-deterministic
operators of HNQL.
Theorem 4. Given assumption (a), HNQL is update complete with respect to the class of
AO computable updates.
Proof: See Appendix.
In [ABIT90] a non-deterministic computable update (called a non-deterministic database
transformation therein) is a binary relation t, from databases to databases, which is generic and
recursively enumerable. Although this approach is semantically "clean", it does not provide us
with an operational semantics for non-determinism, since due to genericity, there is no mechanism
to decide which output to choose from a query or update given a set of possible outputs. On
the other hand, AO computable updates provide us with a "clean" operational semantics to non-
determinism, since the choice of p in assumption (a) can be made, for example, by using the physical
layout of the database. Since this layout changes over time the result of a query or an update
will "appear" to the user to be non-deterministic. (For more discussion on computable updates
see [ABIT89, ABIT90, CHAN80, CHAN88, HULL90, HULL91, NAQV89].)
VI. BRIDGING THE GAP BETWEEN GRAPH-BASED AND SET-BASED DATA MODELS
In this section we endeavour to bridge the gap between graph-based and set-based data
models by considering a transformation from one to the other.
We first define the important notions of copy and copy elimination. Two hypernode data-bases
are defined to be copies of each other if they are not equal and there exists an isomorphism
from one to the other that maps primitive nodes to themselves (i.e. it is the identity mapping on
P) and labels to labels. Intuitively, this means that the two databases are modelling exactly the
same set of objects. A hypernode database with copies is a database such that two or more of its
subsets (consisting of hypernodes) are copies of each other. Finally, the operation of copy elimination
is the operation of replacing a database with copies by a maximal subset of this database
without copies, i.e. it is the operation of removing the duplicate copies. We observe that copy
elimination can be performed easily in HNQL, since we can arbitrarily retain one of the copies
from the duplicates and then remove the others. (For more discussion on copy elimination see
Now, since a set-based data model can be viewed as a special case of a graph-based data
model, i.e. one in which the database is such that it is always without copies, we need only consider
transforming graph-based to set-based data models. This would be straightforward if graph-based
data models had a built-in copy elimination operator which would be invoked on the data-base
after each query or update is computed. We use our model to demonstrate the said transfor-
mation. In effect, we would like the hypernode model to behave like a set-based data model. This
involves solving two problems. The first problem is to find a suitable set-based formalism such
that the hypernode model behaves like such a formalism, and the second problem is to devise a
transformation from the hypernode model to this suitable set-based formalism.
We suggest non-well-founded sets [ACZE88] (also called hypersets [BARW91]) as a solution
to the first of these two problems. Hypersets subsume well-founded sets by dropping the
requirement that sets have a hierarchical structure, thus allowing us to model various kinds of circular
phenomena whereby a set may contain itself. It was shown in [ACZE88] that certain systems
of equations have unique solutions in the universe of hypersets. That is, a hyperset can be
viewed as the unique solution to such a system of equations. This important result is called the
solution lemma. As a consequence of the solution lemma we can define hyper-relations to be
the unique solutions to the set of hypernodes (which can be viewed as a set of equations whose
indeterminates constitute its set of unique defining labels) in a hypernode database, HD. This
interpretation will allow us to transform our model into a value-based data model thus solving our
second problem. Although this solution is appealing theoretically, in practice we are faced with
the problem of copy elimination, i.e. if the solution to two different defining labels is the same
hyper-relation, then how is this to be detected and at what computational cost? In other words
how can we test for the equality of two hyper-relations?
We now show that in the hypernode model solving this problem is at least as hard as testing
for isomorphism of digraphs [BUCK90], whose complexity is, in general, an open problem (we
note that the subgraph isomorphism problem is NP-complete) [GARE79]. We briefly describe
two isomorphism tests, a local test and a global test, which can be used to solve copy elimination
in the hypernode model.
In order to check whether two hypernodes, G1 = (N1, E1) and G2 = (N2, E2),
are locally isomorphic, we restrict the mapping realising the isomorphism so that primitive nodes
map to themselves (i.e. it is the identity mapping on P), labels map to labels and the label G 1
maps to the label G 2 . For example, consider the following three hypernodes:
The first hypernode is locally isomorphic to the second hypernode but not locally isomorphic to
the third hypernode, since in this case G 1 would have to be mapped to both G 3 and G 4 .
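A brute-force Python sketch of the local test is given below; labels are assumed to be exactly the keys of the database dictionary, and the exhaustive search over label permutations is a simplification chosen for clarity, not the paper's algorithm.

from itertools import permutations

def locally_isomorphic(HD, G1, G2):
    (n1, e1), (n2, e2) = HD[G1], HD[G2]
    labels1 = sorted(n for n in n1 if n in HD)
    labels2 = sorted(n for n in n2 if n in HD)
    prims1, prims2 = n1 - set(labels1), n2 - set(labels2)
    if prims1 != prims2 or len(labels1) != len(labels2):
        return False                          # primitive nodes must map to themselves
    for perm in permutations(labels2):
        f = dict(zip(labels1, perm))
        f.update({v: v for v in prims1})
        if G1 in f and f[G1] != G2:           # the label G1 must map to G2
            continue
        if {f[n] for n in n1} == n2 and {(f[a], f[b]) for (a, b) in e1} == e2:
            return True
    return False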
We observe that the above local test is, in general, not sufficient when considering the isomorphism
of two hypernodes in a database, since condition H2 may be violated if one of the isomorphic
hypernodes is removed from the database. For example, consider the following four
hypernodes:
The first hypernode is locally isomorphic to the second hypernode but the third hypernode is not
locally isomorphic to the fourth hypernode, since "iris" cannot map to "robert". Furthermore, if
we eliminate one of the first two hypernodes, then either the third or the fourth hypernode will
lose a parent as a result.
A global isomorphism test can now be devised by taking into account the hypernode accessibility
graphs (HAGs) of the two hypernodes under consideration. In order to test whether two
hypernodes, with the defining labels, G 1 and G 2 , are globally isomorphic, we can use the following
algorithm. First test whether the HAGs of G 1 and G 2 are isomorphic such that G 1 maps to
. If the answer is no, then the test fails. Otherwise, if the answer is yes, let r be the one-to-one
mapping which verifies this isomorphism. Then test, for each node G in the node set of the HAG
of G 1 whether the hypernode whose defining label is G is locally isomorphic to the hypernode
whose defining label is r(G). If all such local tests succeed, then the global isomorphism test
succeeds, otherwise it fails.
For example, the following two digraphs are, respectively, the HAGs of G 1 and G 2 shown
above.
It can easily be verified that these HAGs are indeed isomorphic, where G 1 maps to G 2 . Finally,
the hypernode defined by G 1 is not globally isomorphic to the hypernode defined by G 2 , since as
was shown above the hypernode defined by G 3 is not locally isomorphic to the hypernode defined
by G 4 .
VII. CONCLUDING REMARKS AND FURTHER RESEARCH
We have presented the three components of the hypernode model in detail: its single underlying
data structure, the hypernode, its query and update language, HNQL, and its integrity constraints
in the form of HFDs. We have also presented hypertext as a natural application for the
hypernode model, and compared the model with other graph-based data models and with set-based
data models. Finally, using our model as an example, we have demonstrated that hypersets
can be used to bridge the gap between graph-based and set-based data models; this is achieved at
the cost of testing isomorphism of digraphs, whose complexity is, in general, an open problem.
From a practical point of view, this may imply that one can bridge the gap for hierarchical data-bases
only, i.e. when the digraphs of all hypernodes in the database are trees, since testing for
isomorphism of trees can be solved in polynomial time [GARE79].
An advantage of graph-based database formalisms, which will be important in the next generation
of database systems, is that they considerably enhance the usability of complex systems
[HARE88]. In particular, graph-based formalisms encourage "graphical" user interfaces.
We now list some further topics which demand more attention:
# Utilising the wealth of algorithmic graph theory in order to incorporate into HNQL special-purpose
operators, which solve particular distance-related problems such as finding the
shortest distance path between two nodes.
# Developing a higher level database language for the hypernode model on top of HNQL (a
step in this direction is the logic-based language Hyperlog discussed in [LEVE90,
POUL92]).
Developing further our ideas of a formal model for hypertext.
We close by briefly mentioning that a first prototype of a storage manager for the hypernode
model has already been implemented [TUV92]. The storage manager is a set of modules carrying
out manipulation of digraphs in a persistent store. It caters for object identity and referential
integrity of hypernodes, for the storage of large and dynamic hypernodes, for clustering strategies
on secondary storage and for retrieval operations which utilise indexing techniques. It also supports
the basic set operators of HNQL excluding its non-deterministic operators.
APPENDIX
PROOF OF THEOREM 1
Proof: It can easily be shown that the axiom system comprising R1-R4 is sound. In order
to prove completeness we need to show that if F |-/ A → B, then F |=/ A → B. Equivalently for
the latter, we have to exhibit a database, say EX, such that EX |= F but EX |=/ A → B. Let EX be
the database shown in Fig. 9, where A, C and D, as before, denote pairwise disjoint sets of attributes
and such that
We first show that EX |= F. Suppose to the contrary that EX |- F and thus there exists an
F, such that EX |- V - W. It follows by the construction of EX that V - A and
that C) such that $c - A + . By it follows that F |- A -W - A, and by R4 it follows
that F |- A - {$c}. This leads to a contradiction, since it follows that $c - A + .
We conclude the proof by showing that EX |- A - B. Suppose to the contrary that EX |=
by the construction of EX,
fore, " $b - B, F |- A - {$b} and by R3 it follows that F |- A - B. This leads to a contradic-
tion, since we have derived F |- A - B. #
PROOF OF THEOREM 2
Proof: We observe that the identity inference rule, R5, is a consequence of condition H1
and thus it is sound (cf. the simple attribution rule in [WEDD92]). Furthermore, the key inference
rule, R6, is a consequence of the fact that if F |= A - {$id} then A is a superkey (i.e. a
superset of a key) for every enhanced database satisfying F, so it is sound. From these two observations
and from Theorem 1 it follows that the axiom system comprising R1-R6 is sound.
In order to prove completeness we need to show that if F |-/ A → B, then F |=/ A → B.
Equivalently for the latter, we have to exhibit an enhanced database, say EXE, such that EXE |= F
but EXE |=/ A → B. Let EXE be the enhanced database shown in Fig. 14, where A, C, D and
{$id} denote pairwise disjoint sets of attributes and such that
Fig. 14. An enhanced database satisfying F but not A → B.
We first show that EXE |= F. Suppose to the contrary that EX |- F and thus there exists an
F, such that EXE |- V - W. It follows by the construction of EXE that V - A
and that either $ $c - (W - C) such that $c - In the first case the result
follows as in the proof of Theorem 1, so we assume that $id - W. By and R4 it follows that F
|- A - {$id}. On using R6 we can deduce that $ $c - A + such that F |- A - {$c}. This leads to
a contradiction, since it follows that $c - A + .
The proof that EXE |- A - B is similar to that in the proof of Theorem 1; we note that by
the construction of EXE, A - {$id}, otherwise this would have allowed us to derive F |- A - B
by R5. #
PROOF OF LEMMA 3
Proof: Let d be the Turing computable mapping that is used to compute t(HD) and let "o"
denote composition.
is independent of the choice of p which is used when computing t we have that
We now have that
t(r(HD)) as required.
(Only if): Assume that t is not independent of the choice of p. Then there exist at least two
choices of p, say p 1 and p 2 , such that
It follows that either not both; assume that
We assume without loss of generality that the isomorphism, r, maps
primitive nodes to primitive nodes and labels to labels. Now, by the definition of r we have that
It therefore follows that encode(p 1
SE be the
result of this Turing machine computation. Hence it follows that
using the genericity requirement, since
Thus, since r
we have that
This concludes the proof, since it is also the case that
and thus leading to a contradiction. #
PROOF OF THEOREM 4
Proof: Let HNQL_det denote the subset of HNQL without its non-deterministic operators. It
can easily be shown that all the updates expressed by HNQL_det are in fact generic computable
updates. Thus by Lemma 3 all the updates expressed by HNQL_det are also AO computable
updates. Now, by assumption (a), HNQL's non-deterministic operators become AO computable
updates. Thus, since the choice of p is made prior to the evaluation of a given HNQL program, it
follows by structural induction on the constructs of HNQL programs that HNQL expresses only
AO computable updates.
In order to conclude the proof we need to show that HNQL can express all the AO computable
updates. Now, let t be any AO computable update. It is immediate from the definition of the
non-deterministic operators of HNQL that a given choice of p, say p', can be generated in HNQL.
We observe that for any current state of HD, say HD', p' need only be defined for PRIM(HD') ∪
LABELS(HD').
Now let d be the Turing computable mapping that is used to compute t(HD) with p' being
the isomorphism used in the definition of a computable update. We next need to show that
decode(p'^-1)(d(encode(p')(HD))) can be simulated in HNQL. Firstly, encode(p')(HD) can be
simulated in HNQL by using a standard encoding scheme as in [GARE79]. An example of the
result of encoding a hypernode, ignoring p', for the sake of brevity,
is shown in Fig. 15. Secondly, d(SE) can be simulated in HNQL, since HNQL is capable of
simulating the working of any Turing machine; this is realised by using certain results given in
Fig. 15. An example of encoding a hypernode.
"end"
"node"
"edge"
"node"
G
"begin"
[LEVE90]. Finally, decode(p'^-1)(d(SE)) can be simulated in HNQL by defining in HNQL the
inverse mapping of the encode operator. We leave the remaining technical details of the proof to
the reader; the techniques used are similar to those used in [ABIT89, ABIT90, CHAN80] to
prove update completeness. #
ACKNOWLEDGEMENT
The work of Mark Levene is funded by grant GR/G26662 of the U.K. Science and
Engineering Research Council. He would like to thank Alexandra Poulovassilis for their joint
work on the hypernode model.
--R
"Object identity as a query language primitive"
"Procedural languages for database queries and updates"
"Hypersets"
Distance in Graphs.
"Computable queries for relational data bases"
"Theory of database queries"
"The Entity-Relationship model - towards a unified view of data"
Programming in Prolog.
"Path constraints for graph-based data models: Towards a unified theory of typing constraints, equations and functional dependencies "
"A relational model of data for large shared data banks"
"Extending the database relational model to capture more meaning"
introduction and survey"
"Expressing structural hypertext queries in Graphlog"
visual formalism for real life recursion"
"Expanding the notion of links"
Computers and Intractability: A Guide to the Theory of NP-Completeness
"A graph-oriented object database model"
"Reflections on Notecards: Seven issues for the next generation of Hypermedia systems"
"On visual formalisms"
"ILOG: Declarative creation and manipulation of object identifiers"
"On the expressive power of database queries with intermediate types"
Germany: Springer-Verlag
"A new approach to database logic"
"On the equivalence of database models"
The Nested Universal Relation Database Model.
"The hypernode model and its associated query language"
"An object-oriented data model formalised through hypergraphs"
"Agreement graph dependencies."
"On the foundations of the universal relation model"
A Logical Language for Data and Knowledge Bases.
The Structure of the Relational Database Model.
"G-Log: A declarative graphical query language"
"A nested-graph model for the representation and manipulation of complex objects"
"Aspects: Extending objects to support multiple, independent roles"
"A data model for flexible hypertext database systems"
"Nested relational structures"
"Deductive databases in action"
"A storage manager for the hypernode model"
Principles of Database and Knowledge-Base Systems
"A comparison between deductive and object-oriented database sys- tems"
"Multilevel nested relational structures"
"Fundamentals of dependency theory"
"Reasoning about functional dependencies generalized for semantic data models"
--TR
--CTR
Ivan Radev , Niki Pissinou , Kia Makki , E. K. Park, Graph-based object-oriented approach for structural and behavioral representation of multimedia data, Proceedings of the eighth international conference on Information and knowledge management, p.522-530, November 02-06, 1999, Kansas City, Missouri, United States
Mark Levene , Alexandra Poulovassilis , Kerima Benkerimi , Sara Schwartz , Eran Tuv, Implementation of a graph-based data model for complex objects, ACM SIGMOD Record, v.22 n.4, p.26-31, Dec. 1993
Sankhayan Choudhury , Nabendu Chaki , Swapan Bhattacharya, GDM: a new graph based data model using functional abstractionx, Journal of Computer Science and Technology, v.21 n.3, p.430-438, May 2006
Avigdor Gal , Opher Etzion, A Multiagent Update Process in a Database with Temporal Data Dependencies and Schema Versioning, IEEE Transactions on Knowledge and Data Engineering, v.10 n.1, p.21-37, January 1998 | computable update;query and update language;hypernode database;non-well-founded sets;hypernode functional dependency;set-based data model;hypertext;graph-based data model |
627706 | Time-Constrained Query Processing in CASE-DB. | AbstractCASE-DB is a real-time, single-user, relational prototype DBMS that permits the specification of strict time constraints for relational algebra queries. Given a time constrained nonaggregate relational algebra query and a fragment chain for each relation involved in the query, CASE-DB initially obtains a response to a modified version of the query and then uses an iterative query evaluation technique to successively improve and evaluate the modified version of the query. CASE-DB controls the risk of overspending the time quota at each step using a risk control technique. | Introduction
A real-time database has strict, real-time timing constraints in responding to queries. A time-constrained
query is of the form "evaluate the query Q in at most t time units". In a multi-user, real-time DBMS, the
resources (i.e., CPU and data) are shared, and the issue of meeting the time constraint in evaluating the query
becomes complicated due to CPU scheduling and transaction management (concurrency control). In comparison,
in a single-user DBMS, the satisfaction of a time-constraint does not deal with resource sharing or transaction
management. Nevertheless, the problem of evaluating a time-constrained query in a single-user DBMS is far from
trivial. Also, its solution is useful in a multi-user DBMS for forcing a time-constrained query to have a fixed CPU
utilization time, which is an important parameter in multi-user real-time DBMSs for transaction scheduling.
CASE-DB is a real-time, single user, relational prototype DBMS that uses relational algebra (RA) as its
query language. In earlier papers [10, 11, 12], we presented query approximation techniques for aggregate
relational algebra queries, where the result of the query was estimated by using statistical estimators and sampling
techniques. In this paper, we present a query modification technique for processing non-aggregate, real-time
relational algebra queries.
This research is supported by the National Science Foundation under Grants IRI-8811057, IRI-9009897, and IRI-9008632. A
preliminary version of this paper has appeared in the Proceedings of the 1992 IEEE DE Conference.
The authors are with the Department of Computer Engineering and Science, Case Western Reserve University, Cleveland, OH 44106,
and the Department of Computer Science, Southern Illinois University at Carbondale, IL 62901.
In a single-user DBMS, the issue of time-constraint satisfaction is equivalent to controlling the evaluation time
of a query precisely. There are two points to observe:
1. The query evaluation time for an RA query is unknown prior to the evaluation and can only be estimated
with a certain probabilistic confidence. For example, consider the following query. Select a set of tuples
from a relation that satisfies a boolean formula F. In this query the number of tuples satisfying F may
vary significantly with different relations, and whether the selection can be completed within a given time
quota cannot be known a priori. In general, the evaluation time of a query changes not only with different
relations, but also with the selectivities 1 of the RA operators of the query.
2. A given time constraint T for a query may be so small or the query evaluation is so time consuming that
the probability of not being able to evaluate the query within T time units (referred to as the risk of
overspending in the rest of the paper) may be extremely high. For example, the probability that the join
of two disk-resident relations, each with 1,000 blocks, can be performed in 10 seconds is likely to be almost
zero, and, thus the risk of overspending the time quota of 10 seconds for such a join is almost one.
One can come up with various approaches for approximating or modifying a time-constrained RA query. The
approach used in CASE-DB is as follows:
1. The relations in the database are fragmented into semantically meaningful subsets (fragments).
2. For each query, in addition to the time quota, the user specifies the maximum risk of overspending to be
taken by the DBMS in evaluating (either) the query (or one of its modified versions). The concept of the
risk of overspending for a given query is introduced in [13].
3. To evaluate a time-constrained query, the DBMS modifies the original query by replacing the relations
with their fragments (Query Modification Technique). The fragments are selected such that the risk of
overspending the time quota while evaluating the modified query is closest to and less than the risk specified
by the user (Fragment Selection Problem).
4. If there is any time left after evaluating the modified query, step (3) is performed iteratively with higher
risks of overspending. This process is carried out until the time quota is completely used (Iterative
Query Evaluation). A condensed sketch of this loop is given below.
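For concreteness, the following Python sketch condenses steps 1-4 above into a single loop. The callbacks required_version, select_fragments and evaluate stand in for the corresponding CASE-DB components; they, and the loop structure itself, are assumptions of this sketch rather than actual CASE-DB interfaces.

    import time

    def evaluate_with_quota(query, quota, user_risk, required_version,
                            select_fragments, evaluate, high_risk=0.95):
        # Step 1: evaluate the query over the required fragments only.
        start = time.monotonic()
        time_left = lambda: quota - (time.monotonic() - start)
        current = required_version(query)
        answer = evaluate(current)
        step = 2
        while time_left() > 0:
            # Step 2 uses the user-given risk; later steps use a high risk
            # so that few additional iterations are needed.
            risk = user_risk if step == 2 else high_risk
            larger = select_fragments(query, current, risk, time_left())
            if larger is None or larger == current:
                break                    # no admissible larger fragments remain
            answer = evaluate(larger)    # ideally reusing the previous answer
            current = larger
            step += 1
        return current, answer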
With the exception of [15 , 16], all other real-time database literature deals with the multi-user environment
and transaction management such as maximizing the number of transactions that complete within their deadlines.
Smith and Liu [15], and Vrbsky and Liu [16] give a methodology for finding approximate answers to relational
algebra queries. In their approach, as the amount of time used increases, the accuracy of the approximate result
is improved. Their approach does not contain a risk control mechanism, and the way in which they obtain and
improve the approximate result is different than ours.
The rest of the paper is organized as follows: In section 2, the query modification technique is described with
an algorithm and an example. In sections 3.1 and 3.2, we discuss two different formulations of the fragment
selection problem, and show that both problems are NP-complete. A heuristic solution is presented in section
3.3 for the fragment selection problem, which has been implemented in CASE-DB. In section 4 we present the
transformations used in the query modification technique. In section 5 we report the experimental results and
the performance analysis. Section 6 concludes.
1 Selectivity of an RA operation (or an expression) E, denoted by sel_E, is the ratio of the number of output tuples of E to the
product of the numbers of tuples in the operand relations of E.
2 Query Modification Technique
In this technique, a time-constrained query is modified by replacing the relations with their fragments. The
user or the database administrator identifies relations that would probably be used in time-constrained query
processing, and divides each relation into three types of strata: required, strongly preferred and preferred
strata. Figure 2.1 shows the relation fragmentation chain for the relation FURNACES: the user prefers
that the query be evaluated with furnaces, strongly prefers that the query be evaluated with one of the two
fragments, critical-status-furnaces or high-priority-and-critical-status-furnaces, and absolutely requires that the
query is evaluated with the fragment high-priority-and-critical-status-and-dangerous-environment-furnaces. The
fragments in the required, strongly preferred and preferred strata are called required, strongly preferred and
preferred fragments, respectively.
Example 2.1. In CASE-DB, each RA query has the keyword parameter "T=" which specifies the time constraint
(or time quota), and the keyword parameter "R=" which specifies the risk of overspending. Now, consider the
database relation FURNACES (fnumber, fname, priority, status, environment) that contains information about
furnaces, and the relation TEMPERATURES (fnumber, temperature, time, date) that maintains the recorded
temperatures of furnaces. Assume that the user has specified the relation fragmentation chains shown in figures
2.1.a and 2.1.b. Consider the query "List the furnace names and their temperatures in 10 seconds with the
risk at 0.5 or less", which is specified in RA as Q = π_{fname,temperature}(FURNACES ⋈ TEMPERATURES) with T=10sec and R=0.5. CASE-DB
first revises the query Q into Q_1, where FURNACES is replaced by HIGH-PRIORITY-AND-CRITICAL-
STATUS-AND-DANGEROUS-ENVIRONMENT-FURNACES and TEMPERATURES is replaced by LAST-
3DAY-TEMPERATURES, i.e., the required fragments. Q_1 is then evaluated.
Assume that the evaluation of Q 1 took 2 seconds. CASE-DB then finds the risks of evaluating the query with
different combinations of fragments from the two chains for the time of 8 seconds. Assume that, among these risks,
the risk that comes closest to and is less than 0.5 is 0.48, and it is for the query "List the last day temperatures
of high-priority-and-critical-status-furnaces", which we denote Q_2.
Then CASE-DB evaluates Q_2. Assume that the evaluation of Q_2 took 6 seconds. Then, for the remaining 2
seconds, CASE-DB chooses larger fragments from the two chains using a very high risk of overspending (e.g., 0.95)
and repeats the query evaluation. The reason for choosing high risks in later iterations is to reduce the number of
additional iterations, and thus to control the overhead of iterations. The number of iterations is
always upper bounded by 4. CASE-DB keeps evaluating modified versions of Q, each time with bigger fragments,
until the time quota T runs out. Then, CASE-DB returns the very last completed response to the user together
with the modified query of that response. Figure 2.2 presents an outline of the non-aggregate, real-time query
evaluation algorithm used in CASE-DB. Please note that the major random variables that introduce an error
in query evaluation time (and thus cause multiple query evaluation steps) are the selectivities of RA operators
in the query. At the end of each query evaluation step, we have better information about operator selectivities,
which is used to revise the selectivity estimations.
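As a minimal illustration of this feedback, the sketch below blends the prior selectivity estimate with the selectivity observed in the step that just completed. The simple weighted update is an assumption made for illustration only, not the estimator used in CASE-DB.

    def revise_selectivity(prior_sel, output_tuples, input_size_product, weight=0.5):
        # Selectivity observed in the finished step: output tuples divided by
        # the product of the operand relation sizes (footnote 1).
        observed_sel = output_tuples / input_size_product
        # Blend it with the prior estimate; the 50/50 weighting is illustrative.
        return (1.0 - weight) * prior_sel + weight * observed_sel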
Please note that, in the algorithm in figure 2.2, there is a transformation of the modified query Q_s into Q'_s
such that Q'_s uses the previous step's response. The first revision of the query Q, obtained by replacing each
relation with its required fragment and the evaluation of the revised query constitutes the first query evaluation
step. CASE-DB then spends the remaining time by iteratively improving the query with additional steps.
Clearly, from step 2 onwards, the DBMS may save time if, instead of evaluating the current step's query with base
relations, it can revise the current step's query such that (a) previous step's output can be used in the current
step's output, and (b) it can ``add'' new tuples to the output due to the ``larger'' fragments utilized in the current
step. The point (b) is, of course, true in general for only monotone 2 queries.
Figure 2.1. Relation Fragmentation Chains.
The motivations for our approach are listed below.
1. There is a compromise between the sizes of operand relations of the query and the risk of overspending.
Under the expected case (with the possible exception of the set difference operator of the RA), as the
relations are replaced by their subsets (i.e., fragments), the query evaluation time and hence the risk of
overspending get smaller.
A query is monotone when adding tuples to its input relations does not make it lose any of its output tuples; otherwise, it is
nonmonotone. RA queries with unions, intersections, projections, and joins are monotone. However, the inclusion of the set difference
operator makes an RA query nonmonotone.
2. By specifying the fragments of relations and how much risk (s)he is willing to take for overspending in a
query, the user guides the DBMS in choosing the modified query.
3. The modified query is semantically meaningful, and represents the "best" query that the DBMS can answer
for the given risk and the given time constraint.
Algorithm Time-Constrained-Ad-Hoc-and-Non-Aggregate-Query-Evaluation(Q, T, β)
Input: Q: an arbitrary relational algebra query.
T: a given amount of clock time quota.
β: (upper bound for) the risk of overspending to be used in step 2.
Output: a revised query Q_s and its response produced within T clock time units.
begin
Step := 1; StartTime := CurrentTime;
Set the timer interrupt to T units 3 ;
while TRUE do begin
  if Step = 1 then begin
    ReplaceRequiredFragments(Q, Q_s); {for each relation r in Q, select the required fragment f and replace r with f in Q to obtain Q_s}
    Execute(Q_s); {standard relational algebra query execution}
  end
  else begin
    if Step = 2 then β' := β
    else β' := a higher risk value (e.g., 0.95); {decides the risk to be taken in step 3 and above}
    SelectFragments(Q_previous, β', TimeLeft, Q_s); {solves the fragment selection problem with the risk β' to obtain the revised query Q_s}
    if (Q_previous = Q_s) then goto END; {if a fragment list satisfying the constraints could not be obtained, or if the given query Q has been evaluated}
    Transform(Q_s, Q'_s); {transform Q_s into Q'_s such that Q'_s uses the previous step's response}
    Execute(Q'_s);
  end;
  TimeLeft := T − (CurrentTime − StartTime);
  Q_previous := Q_s;
  Step := Step + 1;
endwhile
END: Return(Q_s, ResponseToQ_s);
end
3 When the timer interrupt occurs, the interrupt service routine returns the control to the statement after the while loop. Therefore, the while loop above is an infinite loop.
Figure 2.2. Query Evaluation Algorithm for Real-Time, Non-Aggregate Queries in CASE-DB.
3 Fragment Selection Problem, Its Complexity and Heuristics
In this section, we formally define the fragment selection problem using two different risk factor formulations,
and prove that both formulations lead to NP-complete problems. We then briefly describe the heuristic approach
used in CASE-DB.
For each relation r, let S_r denote the set of fragments in the relation fragmentation lattice of r, i.e., S_r = {f : f is a fragment of r}.
Consider Q with input relations r_1, ..., r_n. For a query evaluation step of Q, let us say we choose the fragment f_i from
S_{r_i} of each relation r_i. We call the resulting list of fragments F = {f_1, ..., f_n} the fragment list
of Q.
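As a small illustration of these definitions, the following Python sketch enumerates the candidate fragment lists of a query as the cross product of its relations' fragmentation chains. The chain contents abbreviate the FURNACES and TEMPERATURES chains of Example 2.1; the data structure is ours, not CASE-DB metadata.

    from itertools import product

    # Fragment chains, smallest (required) fragment first; names are abbreviated.
    S = {
        "FURNACES": ["HP-CS-DE-FURNACES", "HP-CS-FURNACES",
                     "CRITICAL-STATUS-FURNACES", "FURNACES"],
        "TEMPERATURES": ["LAST-3DAY-TEMPERATURES", "TEMPERATURES"],
    }

    relations = sorted(S)                      # input relations r_1, ..., r_n of Q
    fragment_lists = [dict(zip(relations, choice))
                      for choice in product(*(S[r] for r in relations))]
    print(len(fragment_lists))                 # 4 x 2 = 8 candidate fragment lists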
Below we describe two different risk factor formulations.
3.1 Risk Factor α
In [11] we gave a risk factor α approach for sampling and evaluating an estimate for aggregate queries. We
now revise that approach for fragment selection in non-aggregate queries.
Assume that we are at the i-th query evaluation step. Let F_i = {f_1, ..., f_n} be a fragment list of Q selected
at step i. We first characterize the probability of exceeding the time quota when Q is evaluated with the fragments
of F_i, that is, the risk α_i of overspending for F_i. Let T_i be the amount of time left after step i−1, and let t_i be
the random variable representing the actual amount of time that will be spent at the i-th step, with mean E[t_i] and
variance Var(t_i). Let sel_i(Op) (or, simply, sel_i) denote the selectivity of the operator Op at the i-th step. SEL(E)
(or, simply, SEL) denotes the set of sel_i(Op) for each operator Op in E (i.e., sel_i ∈ SEL). Let COST_Q(F_i, SEL)
be the time-cost formula of the query at the i-th step. Clearly, the equality
t_i = COST_Q(F_i, SEL)   (3.1)
is satisfied. Since SEL and, hence, t_i are unknown until step i ends, we use the expected version of the above
equation, i.e.,
E[t_i] = E[COST_Q(F_i, SEL)]   (3.2)
where E[·] denotes the expected value function. Now, assuming that, for a given fragment list F_i, we (a)
have approximations for SEL, and (b) have derived the time-cost formula COST_Q of the query Q, we solve for
E[t_i] using equation (3.2).
The risk of overspending at step i, denoted by α_i, is defined to be P(t_i > T_i), where P denotes the probability.
For the risk α_i, a number d_{α_i} can be obtained such that the actual amount of time spent at step i will be less
than or equal to E[t_i] + d_{α_i}·√Var(t_i) with probability 1 − α_i. Therefore,
if we use the equality
T_i = E[t_i] + d_{α_i}·√Var(t_i)   (3.3)
then t_i will be less than or equal to T_i with probability 1 − α_i. Since T_i is known and E[t_i] is obtained
from equation (3.2), we can use equation (3.3) to solve for d_{α_i}, and hence for α_i. Thus, the risk α_i of overspending
when Q is evaluated using the fragments in the fragment list F_i can be computed; a small numerical sketch under an
additional distributional assumption is given below.
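As an illustration of equations (3.2)-(3.3), the Python sketch below solves (3.3) for d_{α_i} and converts it into the risk α_i. It assumes, for concreteness only, that t_i is approximately normally distributed; the formulation above leaves the distribution of t_i open.

    from statistics import NormalDist

    def risk_of_overspending(T_i, t_mean, t_var):
        # Solve equation (3.3) for d_alpha and return alpha_i = P(t_i > T_i),
        # under the assumption t_i ~ N(t_mean, t_var).
        if t_var <= 0:
            return 0.0 if t_mean <= T_i else 1.0
        d_alpha = (T_i - t_mean) / t_var ** 0.5
        return 1.0 - NormalDist().cdf(d_alpha)

    # 8 seconds left, expected step time 6 s, variance 4 s^2  ->  about 0.16
    print(round(risk_of_overspending(8.0, 6.0, 4.0), 3))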
We now state the fragment selection problem and discuss its complexity.
Fragment Selection Problem (FSP_α): Let α be the given risk of overspending at step i. Let X denote the
set {F_j : F_j is a fragment list, 0 ≤ α − α_j ≤ ε}, where ε is a pre-chosen small constant. Choose from all fragment lists
in X the list F_i with the risk α_i such that α − α_i is minimum.
2FSP_α is a particular case of FSP_α where each relation r_j has only two fragments in S_{r_j}.
Theorem 1: 2FSP_α is NP-complete.
Proof: See the Appendix.
Thus, the complexity of finding F_i with the risk α_i among all possible fragment lists is high. However, for monotone
queries several fragment lists can be eliminated from consideration. For example, consider two fragment lists F_1
and F_2 with risks α_1 and α_2, respectively. Assume that each fragment of F_1 contains the corresponding fragment
of F_2. Clearly, when Q is monotone, α_2 ≤ α_1. If α_2 is evaluated and it is found that α ≤ α_2, then we do not
need to compute α_1 and can eliminate F_1 from consideration. Nevertheless, the fragment selection problem still
remains NP-complete.
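The elimination rule above can be sketched as follows. The chain_index structure (the position of a fragment within its relation's fragmentation chain, with larger index meaning larger fragment) is a hypothetical helper introduced only for this illustration.

    def can_prune(F1, F2, alpha, alpha_2, chain_index):
        # For a monotone query, F1 can be skipped if every fragment of F1 contains
        # the corresponding fragment of F2 (hence alpha_1 >= alpha_2) and alpha_2
        # already reaches the allowed risk alpha.
        contains = all(chain_index[r][F1[r]] >= chain_index[r][F2[r]] for r in F1)
        return contains and alpha <= alpha_2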
Theorem 2: FSP_α-M is NP-complete, where FSP_α-M denotes the fragment selection problem with monotone
queries.
Proof: See the Appendix.
In addition, as discussed in section 5, we use "stratified relation fragmentation lattices" that are defined and
maintained a priori so that the number of "eligible" fragment lists to consider is significantly reduced and tightly
controlled. Essentially, when there are too many fragments in the fragment set of a relation r, we stratify the
fragments, and, for the fragment selection problem, consider only fragments in a single stratum.
There are various ways to approximate each sel i (Op) in SEL [14, 2, 3]. In our earlier work [11, 10], we have
approximated sel i (Op) by sampling and evaluating COUNT estimators. The overhead of approximating sel i (Op)
can be reduced to zero disk accesses if a sample of each fragment f_i is also pre-retrieved and stored along with f_i.
Please note that the selectivity of an operator Op_1 (in Q) that uses as an operand the output of another
operator Op 2 in Q is dependent on the selectivity of Op 2 . That is, selectivities are not independent, and for
precise selectivity estimations, covariances between selectivities need to be estimated. Unfortunately, covariance
formulas are usually quite complicated [9]. Moreover, the complexity changes with the sampling method used.
In equation (3.3), we assumed that we have an approximation for Var(t_i), which is a function of the
fragments used and of the variances of the selectivities in SEL; the latter can be replaced by the sample selectivity
variances obtained during the sel_i(Op) approximations.
3.2 Risk Factor β_Op
We now discuss another risk factor computation approach, also adapted and revised from [11], for the
fragment selection problem.
In the previous section, we controlled the risk α of overspending for the whole query Q. The approach we
pursue in CASE-DB is to define the risk β_Op of overspending in each operator Op in Q. Such an approach is
computationally simpler than the α-risk approach and has the advantage that we can use separate risk factors
for different operators. For example, if a join operator in Q has large operand relations and a high variance of
selectivity, then we may want to take a small risk of overspending for that operator. On the other hand, we take
a large risk of overspending for a selection operator with a small operand relation, regardless of the variance in its
selectivity.
Our approach is as follows. Assume that, at step i for a given fragment list F_i, we know the selectivities sel_i(Op)
and Var(sel_i(Op)) for each operator Op in Q. Instead of using sel_i in our query time-cost formula COST_Q, we
use sel⁺_i such that P(sel_i > sel⁺_i) = β_Op. In other words, the
probability that the actual selectivity sel_i for Op (with the fragment list F_i) is greater than sel⁺_i (thereby resulting
in an overspending in Op's execution, i.e., the risk) is β_Op. Such a selectivity sel⁺_i can be derived by using the equation
sel⁺_i = E[sel_i] + d_{β_Op}·√Var(sel_i)   (3.4)
where E[sel_i] is the mean of sel_i, Var(sel_i) is the variance of sel_i, and d_{β_Op} is a proper value chosen by the system
(based on the distribution of sel_i) for controlling the risk β_Op.
We approximate V ar(sel i ) in equation (3.4) using the variance of the corresponding sample selectivity. For
simple random sampling and cluster sampling, [9] gives the formulas for the variance of the sample selectivity.
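The sketch below applies equation (3.4) under a normal approximation for sel_i; the quantile choice for d_{β_Op} is an assumption made here for concreteness, since the formulation only requires a d_{β_Op} appropriate for the distribution of sel_i.

    from statistics import NormalDist

    def sel_plus(sel_mean, sel_var, beta_op):
        # Equation (3.4): d_beta is taken as the (1 - beta_op) normal quantile,
        # so it decreases (and sel_plus shrinks) as the allowed risk beta_op grows.
        d_beta = NormalDist().inv_cdf(1.0 - beta_op)
        return max(0.0, min(1.0, sel_mean + d_beta * sel_var ** 0.5))

    print(round(sel_plus(0.54, 0.01, 0.3), 3))   # 0.592 for the values shown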
Let us now state the fragment selection problem that uses the β approach.
Fragment Selection Problem (FSP_β): Let SEL⁺ denote the set of the sel⁺_i's. Let X denote the set
{F_j : F_j is a fragment list, 0 ≤ T − COST_Q(F_j, SEL⁺) ≤ ε}, where ε is a pre-chosen small constant. Choose from all
fragment lists in X the list F^β_i such that T − COST_Q(F^β_i, SEL⁺) is minimum among all fragment lists in X.
2FSP_β is a particular case of FSP_β where each relation has only two fragments in its fragment set.
Theorem 3: 2FSP_β is NP-complete.
Proof: Similar to the proofs of Theorems 1 and 2, and is omitted.
Thus, the problem of finding F_i with the β_Op risk is also NP-complete, similar to the complexity of finding
F_i with the α risk, except that the expected value of the function COST_Q in the α-risk approach is much more
complex. And, as in the α-risk approach, similar complexity reduction techniques can be used to control the time
spent on the fragment selection problem.
3.3 Heuristic Approach with the β Risk Factor
In CASE-DB we have implemented the β-risk factor and a heuristic approach to locate an F^β_i
such that T − COST_Q(F^β_i, SEL⁺) ≤ ε, where ε is a pre-chosen constant. In this approach, we consider the following properties for
our heuristics:
(i) selectivity,
(ii) types of operators involved,
(iii) time costs of subqueries where r is involved,
(iv) file organization type, and
(v) positions of input relations in the parse tree of the query.
We use "selectivity" because if the selectivity of an operator is high, a slight increase in the fragment size of the
relation involved with the operator would drastically increase the output, thereby increasing the time cost, and
might overspend the allocated time. So, we would like to increase the fragment size of an input relation whose
associated operator has a high selectivity, when we are ready to take a large risk. We use "the types of operators"
to determine the monotonicity property of the subquery involved. For some relations and some operators, if the
fragment size is increased (decreased) then we may observe a priori an increase (decrease) in the output size,
and hence in the time cost. For some relations, the reverse is true : an increase (decrease) in the fragment size
decreases (increases) the output size. We would like to increase the fragment size of those relations which increase
the time cost (maximize) when the available time is larger.
Figure 3.1. The parse tree of the example query Q.
The time-cost of the subqueries involving only base relations is expected to have smaller variance. Hence, the
time-cost of a query involving few operators would have smaller variance.
The type of an operator in a subquery plays a part in the time-cost of the subquery. The file organization of
the relation involved in the subquery also plays a role in determining the time-cost of the subquery. For example,
in the case of the 'selection' operator if there is an indexed file whose index is over the same attribute used in the
selection formula, then the time-cost is much less than that of evaluating the selection operator on a non-indexed
file.
To justify the use of the position of the relation in the parse tree of the query, we use the following example:
assume we have a query Q whose parse tree is shown in figure 3.1. If we increase the size of r_1 or r_2, the variance
of the output size of the query will also increase, since the output size of the query, as a random variable, depends
on the output sizes of the subqueries involving r_1 and r_2.
The heuristic procedure proceeds as follows. At any given iteration, the system fits into one of the following
scenarios. We choose a relation depending on the scenario, increase the fragment size of that relation, and compute
the risk taken. If the risk is acceptable, the query is evaluated with the chosen fragment.
Scenario 1: Available time is insufficient 4 , and the risk taken is small 5 .
We do not increase the fragment size of a relation if it is
• associated with an operator with high selectivity 6 , or
• lower in the parse tree, i.e., away from the root 7 , or
• involved with an expensive operator, i.e., the time cost is of higher order (e.g., O(n²)).
4 The available time is less than the time taken to evaluate the query with the required fragment.
5 If the risk is less than 0.1.
7 The relation is at a level 3 or greater.
We increase the fragment size of a relation if it is
• associated with an operator whose selectivity is low, or
• closer to the root of the parse tree, or
• involved with inexpensive operators.
Scenario 2: Available time is insufficient, and the risk taken is high.
Similar to scenario 1, we do not increase the fragment size of a relation if it is involved with an expensive
operator or lower in the parse tree because the available time is less. Instead, we increase the fragment size of a
relation if it is associated with an operator whose selectivity is high because we are ready to take a larger risk.
Scenario 3 Available time is sufficient, and the risk taken is small.
In contrast with scenario 2 we increase the fragment sizes of a relation if it is lower in the parse tree or involved
with expensive operators or involved with operators whose selectivity is low.
Scenario 4: Available time is sufficient, and the risk taken is high.
Algorithm SelectFragments(Q, β, AvailableTime, Q_m)
Input: Q: the modified query from the previous iteration.
β: the risk to be taken in the next step.
AvailableTime: time available for the next iteration.
Output: Q_m: the modified query to be used in the present iteration.
begin
EstimatedTime := 0;
HeuristicChoose(Q, r, β, AvailableTime); {choose a relation r using the heuristics}
while (EstimatedTime < AvailableTime) and (r <> EMPTY) do begin
  EstimatedTime := 0;
  ChooseFragment(f_j(r), f_i(r), r); {chooses a fragment f_j(r) for the relation r, where f_i(r) is the fragment used in the previous iteration}
  Q_previous := Q_m;
  ReplaceFragment(Q_previous, f_j(r), f_i(r), Q_m); {replaces the fragment f_i(r) with f_j(r) in the query Q_m}
  for each subquery (SUBQ = (r_1 Op r_2)) or (SUBQ = (Op r_1)) in Q_m such that the sizes of r_1 and r_2 are known (available or estimated) do begin
    sel⁺ := CalculateSelPlus(SUBQ, β); {equation (3.4)}
    switch Op
      case Op = σ: {selection} EstimatedTime := EstimatedTime + SelectCost(SUBQ, sel⁺); {r_0 is the output relation and r_1 is the input relation}
      case Op = π: {projection} EstimatedTime := EstimatedTime + ProjectCost(SUBQ, sel⁺);
      case Op = ∪: {union} EstimatedTime := EstimatedTime + UnionCost(SUBQ, sel⁺); {r_0 is the output relation and r_1 and r_2 are the input relations}
      case Op = ∩: {intersection} EstimatedTime := EstimatedTime + IntersectionCost(SUBQ, sel⁺);
      case Op = ⋈: {natural join} EstimatedTime := EstimatedTime + JoinCost(SUBQ, sel⁺);
      case Op = −: {set difference} EstimatedTime := EstimatedTime + the cost of the set difference transformation (section 4.3);
    endswitch
  endfor
  if f_j(r) = r then HeuristicChoose(Q_previous, r, β, AvailableTime); {if the selected fragment is the entire relation then choose another relation with the help of the heuristics}
endwhile
if (EstimatedTime > AvailableTime) then Q_m := Q_previous; {if the estimated time is not within the available time, discard the new fragment and use the previously found fragment that could be evaluated within the available time}
end
Figure 3.2. Algorithm for selecting a fragment.
In this scenario we increase the fragment size of a relation if it is lower in the parse tree, or involved with
operators having high selectivities or involved with expensive operators.
3.4 Algorithms for selecting a fragment
The algorithm SelectFragments given in figure 3.2 outlines the method used for selecting a fragment for the
fragment selection problem using the heuristic approach suggested in section 3.3. In this algorithm, the procedures
SelectCost, ProjectCost, UnionCost, IntersectionCost and JoinCost return the time estimation for the
corresponding operators, which are specified in section 4.3. Algorithm HeuristicChoose shown in figure 3.3
specifies the process of choosing a relation based on the four scenarios.
Algorithm HeuristicChoose(Q, r, β, TimeLeft)
Input: Q: an arbitrary query.
β: the risk to be taken in the present iteration.
TimeLeft: time available for the current iteration.
Output: r: a relation involved in Q.
{BaseTime: the time taken to evaluate the query with the required fragments}
begin
switch cond begin
  case (TimeLeft < BaseTime and β < .1): LessTimeLowRisk(Q, r); {scenario 1}
  case (TimeLeft < BaseTime and β ≥ .1): LessTimeHighRisk(Q, r); {scenario 2}
  case (TimeLeft ≥ BaseTime and β ≥ .1): MoreTimeHighRisk(Q, r); {scenario 4}
  case (TimeLeft ≥ BaseTime and β < .1): MoreTimeLowRisk(Q, r); {scenario 3}
endcase
end
Figure 3.3. Algorithm for choosing a relation based on the heuristics.
Algorithm LessTimeLowRisk chooses a relation based on the scenario 1 : Available time is insufficient and the
risk of overspending is small. In this algorithm we have used three functions namely Level(r) which returns the
level of the relation in the parse tree; OperatorValue(r) returns a value associated with the operator which operates
on r directly (value depends on the time complexity of the operator) and Selectivity(r) returns the selectivity
of r.
Algorithm LessTimeLowRisk(Q, SelectedRelation)
Input: Q: an arbitrary query.
Output: SelectedRelation: a relation involved in Q.
begin
for every relation r in Q do begin
  if the entire relation r has been used for processing in the previous iteration then skip r
  else if (SelectedRelation = EMPTY) then SelectedRelation := r
  else if (Selectivity(r) < Selectivity(SelectedRelation)) then SelectedRelation := r
  else if (Level(r) < Level(SelectedRelation)) then SelectedRelation := r
  else if (OperatorValue(r) < OperatorValue(SelectedRelation)) then SelectedRelation := r;
endfor
end
Figure 3.4. Algorithm for choosing a relation to modify under a given scenario.
Similar to the LessTimeLowRisk procedure, we have algorithms LessTimeHighRisk, MoreTimeHighRisk and
MoreTimeLowRisk for choosing a relation under the remaining scenarios (2, 3 and 4).
The algorithm ChooseFragment chooses a fragment from the fragment set of the relation. In CASE-DB, we use
linear search to find the right fragment for a given relation. Following is the algorithm for ChooseFragment as
implemented in CASE-DB.
Algorithm ChooseFragment(f_j(r), f_i(r), r)
Input: f_i(r): the fragment that is being currently used.
r: the relation for which we are choosing the fragment.
Output: f_j(r): the new fragment chosen for the relation r, such that f_j ⊃ f_i.
begin
f_j(r) := the smallest fragment in S_r that properly contains f_i(r), found by a linear search over the fragmentation chain of r;
end
Figure 3.5. Algorithm for choosing a fragment from a given fragment set.
Algorithm CalculateSelPlus implements the computation of sel⁺_i for an operator Op in CASE-DB.
Algorithm CalculateSelPlus(Q, Op, sel, β)
Input: Q: an arbitrary query.
sel: the selectivity estimate obtained from the previous i−1 iterations.
β: the risk to be taken in step i.
Output: sel⁺: the assumed (larger) selectivity of the operator Op.
{d_β: a value chosen such that P(sel_i > sel⁺_i) = β;
N: the total number of points in the point space of Op, given as N = n_1 × n_2, where n_1 and n_2 are the numbers of tuples in the
relations involved in the operation;
N_j: the number of points in the point space of Op at step j;
N': the number of points which have not been included in the previous i−1 iterations.}
begin
sel⁺ := sel + d_β · √Var(sel); {equation (3.4), applied over the N' new points}
return(sel⁺);
end.
Figure 3.6. Algorithm for calculating sel⁺_i.
4 Iterative Query Evaluation Transformations
We illustrate the iterative query evaluation transformations with an example.
Example 4.1. Consider the query Q 2 of Example 2.1.
Assume that the evaluations of Q 1 and Q 2 took 8 seconds, and there still are 2 seconds left in the time quota. In
the third step, using the risk of 0.95, the DBMS chooses to evaluate a query Q'_3 over larger fragments from the
two chains. We can transform Q'_3 into an equivalent query Q_3 that uses Q_2 as follows:
Q_3 = Q_2 ∪ π_{fname,temperature}(CRITICAL-STATUS-FURNACES ⋈ ...) ∪ ...   (4.1)
We make two observations. First, each of the two union operators on the right-hand side of equation (4.1)
is a union of two disjoint sets. Therefore, there is no need for duplicate tuple elimination, which leads to a
very fast implementation. Second, in the implementation of the relation fragmentation chains for FURNACES and
TEMPERATURES, we actually maintain (physical files for) the complement fragments at each node. Thus, when
evaluating Q_3, the needed complement fragments of the two chains are already stored in the database and available;
this leads to a very fast implementation of Q_3.
Consider a single-operator query Q with input relation r. CASE-DB evaluates Q using fragments f_1, f_2, ...
from r such that f_i ⊂ f_{i+1}.
Example 4.2. Consider a relation r with a relation fragmentation chain for r. Assume we have already evaluated
Q(f_1) and there is still time left in the time quota. Let f_2 be the next fragment chosen. We then evaluate Q(f_2)
in terms of Q(f_1), which, in turn, is used in evaluating Q(f_3), and so on.
The evaluation of Q(f_{i+1}) in terms of Q(f_i) is done (almost always 8 ) as follows. Through algebraic manipulations,
Q(f_{i+1}) is converted into Q(f_i) ∪ Q'(f_i, f'_{i+1}) such that:
(i) f_{i+1} = f_i ∪ f'_{i+1} and f_i ∩ f'_{i+1} = ∅. In other words, Q' uses f_i and f'_{i+1} in its evaluation, two relations each
strictly smaller than f_{i+1}. We call f'_{i+1} the complement fragment of f_{i+1}.
(ii) The union operation between Q' and Q is a union of two disjoint sets.
Let us denote the union of two disjoint sets by ⊎, and call it the disjoint union.
Let r and s be two relations with r ∩ s = ∅; then r ⊎ s = r ∪ s.
Please note that disjoint union can be implemented very fast since, unlike union, it does not require duplicate
tuple elimination, which is normally implemented by sorting in databases, an expensive task. Therefore, whenever
possible, we use disjoint union over union. We illustrate with an example.
Example 4.3. Let Q(r, s) be a query over two relations r and s having the relation fragmentation chains
f_1 ⊂ f_2 ⊂ ... and g_1 ⊂ g_2 ⊂ ..., respectively. Assume that, at a previous iteration, Q(f_j, g_j) has been evaluated,
and, at the current iteration, f_{j+1} and g_{j+1} are chosen to evaluate Q, i.e., Q(f_{j+1}, g_{j+1}) is to be evaluated. We
transform Q(f_{j+1}, g_{j+1}) so that it reuses Q(f_j, g_j) together with the complement fragments f'_{j+1} and g'_{j+1};
the relation fragmentation chains of r and s contain f'_{j+1} and g'_{j+1}, computed and stored in the database
already (before the query session starts).
8 There are some exceptions which are discussed in [4].
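The Python sketch below illustrates the scheme of Examples 4.2 and 4.3 for a selection query: the answer of the previous step is reused, the pre-stored complement fragment supplies the new tuples, and the disjoint union is a plain concatenation with no duplicate elimination. The tuple data and predicate are invented for the illustration.

    def select(pred, fragment):
        return [t for t in fragment if pred(t)]

    def next_step(prev_answer, complement_fragment, pred):
        # Q(f_{i+1}) = Q(f_i) disjoint-union Q(f'_{i+1}): plain concatenation,
        # since the two partial answers cannot share tuples.
        return prev_answer + select(pred, complement_fragment)

    f1 = [(1, 700), (2, 300)]              # required fragment f_1
    f2_complement = [(3, 950), (4, 100)]   # complement fragment f'_2, pre-stored
    hot = lambda t: t[1] > 500             # selection condition

    answer = select(hot, f1)                          # first evaluation step
    answer = next_step(answer, f2_complement, hot)    # second step reuses it
    print(answer)                                     # [(1, 700), (3, 950)]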
4.1 Transformations for Single-Operator Queries
We now generalize our approach for single-operator queries. Assume Q(f_i, g_j) has been evaluated before, and
Q(f_k, g_m), with i < k and j < m, is to be evaluated. Let f_{i,k} denote f_k − f_i (the disjoint union of the complement
fragments f'_{i+1}, ..., f'_k), and let g_{j,m} denote g_m − g_j. We transform Q(f_k, g_m) = f_k Θ g_m, where Θ is an RA
operator, into an equivalent expression in terms of Q(f_i, g_j), f_{i,k} and g_{j,m}, and evaluate the transformed form.
We now list, for each single-operator query Q, such transformations; the cost comparisons in section 4.3 refer to
these transformations by labels such as u.3 (union) and d.3 (set difference).
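Since the explicit transformation list is not reproduced here, the sketch below only illustrates the kind of identity such a transformation rests on, using the natural join as an example: f_k ⋈ g_m decomposes into three mutually disjoint pieces built from Q(f_i, g_j), f_{i,k} and g_{j,m}. The decomposition shown is a standard algebraic identity consistent with the disjointness requirements above, not necessarily the exact form used in CASE-DB.

    def join(r, s):
        # natural join on the shared middle attribute
        return {(a, b, c) for (a, b) in r for (b2, c) in s if b == b2}

    f_i = {(1, 10), (2, 20)}
    f_k = f_i | {(3, 30)}                 # f_i is contained in f_k
    g_j = {(10, "x")}
    g_m = g_j | {(20, "y"), (30, "z")}    # g_j is contained in g_m

    f_ik, g_jm = f_k - f_i, g_m - g_j     # complement pieces f_{i,k} and g_{j,m}
    lhs = join(f_k, g_m)
    rhs = join(f_i, g_j) | join(f_ik, g_m) | join(f_i, g_jm)
    print(lhs == rhs)                     # True: the three pieces are disjoint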
4.2 Transformations for Multiple-Operator Queries
Consider an RA query with multiple operators and its parse tree, e.g., the RA query whose
parse tree is shown in figure 3.1. At each query evaluation step, internal nodes of the parse tree are associated
with (output) relation instances obtained by evaluating the operator at that node. Our approach is to store
and use whenever possible the last instances of such relations. For monotone queries, such an approach is quite
efficient.
4.2.1 Monotone Queries
Assume that the RA expression does not have any set difference operators (i.e., it is a monotone query). Let
e_i and e_{i+1} be the output relations of an internal node in the parse tree obtained in two consecutive query evaluation
steps. We now summarize the query transformations at each node of the parse tree. Let E and Ē be arbitrary
RA expressions (possibly relations) that are evaluated at the i-th step to give e_i and ē_i, respectively, and, at the
(i+1)-th step to give e_{i+1} and ē_{i+1}, respectively. From the relation fragmentation chains of the relations involved
in E and Ē, we can compute e'_{i+1} = e_{i+1} − e_i and ē'_{i+1} = ē_{i+1} − ē_i.
Clearly, for E Θ Ē (or Θ(E) in the unary-operator case), where Θ is an RA operator, e_i, ē_i, e'_{i+1} and ē'_{i+1}
play the roles of f_i, g_j, f_{i,k} and g_{j,m} in section 4.1, respectively. Then we use exactly the same transformations
given in section 4.1 for evaluating e_{i+1} Θ ē_{i+1} (or Θ(e_{i+1})). For example, when Θ = ∪ and
e_i ∪ ē_i is available, the transformation for e_{i+1} ∪ ē_{i+1} is (e_i ∪ ē_i) ∪ e'_{i+1} ∪ ē'_{i+1}.
4.2.2 Nonmonotone Queries
Whenever a set difference operator appears in the parse tree (i.e., the query is nonmonotone), we may have
e_{i+1} ⊉ e_i, where e_i and e_{i+1} are two consecutive output relations of the set difference operator. This results in complicated
transformations if we are to use e_i in the computation of e_{i+1}, thus making the iterative evaluation too costly.
Note that, in r − s, the consecutive evaluations of f_1 − g_1, f_2 − g_1, f_3 − g_1, etc., do create monotonously increasing
output relations. Therefore, our approach in CASE-DB for any subexpression
E − Ē in the query is to evaluate E − Ē once, and afterwards, to evaluate E − Ē with new fragments only in E (but not
in Ē). Such an approach guarantees that consecutive output relations of any set difference operator
satisfy e_i ⊆ e_{i+1}. In this case, to compute e_{i+1} we use the transformation e_{i+1} = e_i ⊎ (e'_{i+1} − ē), where ē is the
(fixed) output of Ē.
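A minimal sketch of this convention for a set-difference node, with invented data: the right operand is evaluated once and frozen, so the node's output can only grow and the disjoint-union transformation applies.

    def difference_step(prev_answer, e_increment, e_bar_frozen):
        # e_{i+1} = e_i disjoint-union (new tuples of E that are not in the frozen E-bar)
        return prev_answer | (e_increment - e_bar_frozen)

    e_bar = frozenset({2, 4})             # right operand, evaluated once and frozen
    f1 = {1, 2, 3}                        # first-step fragment of E
    answer = f1 - e_bar                   # first step: {1, 3}
    answer = difference_step(answer, {4, 5}, e_bar)   # new E tuples {4, 5}
    print(answer)                         # {1, 3, 5}, equal to {1,2,3,4,5} - e_bar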
4.3 Time-costs for the transformations
For a given operator, we choose a transformation by comparing the cost formulas of all the transformations
for that operator. We now briefly present these cost formulas. Please note that the chosen cost formula for an
operator is also used in the algorithm SelectFragments to estimate the iteration time (and hence to choose the
fragments).
We use sequential files sorted on the key to store the fragments of a relation. As a notation, jF j denotes the
number of records of F and jjF jj denotes the number of blocks used in storing F (jjF jj is used in computing the
disk access cost, since the tuples are read/stored in blocks from/to the disk). We have two ways of maintaining
intermediate results obtained during an iterative evaluation step: either in main memory until the evaluation
is over or on the disk. Since we use the iterative evaluation method to process the fragments of a relation,
intermediate results are repeatedly used in each iterative step. Therefore, we keep intermediate results in main
memory. The final results obtained from each iterative step are kept on the disk.
In what follows, we give the time-costs for the transformations. For union and set difference which have more
than one transformation, we only give the time costs of the transformation with (1) the smallest expected number
of disk accesses; (2) the smallest expected number of comparisons. Since the query is evaluated in an iterative
fashion, we only compute the costs of the transformations in a certain iteration step.
Union: Among the four equivalent transformations for union listed in section 4.1, transformation u.3, Q(f_i, g_j) ⊎ ...,
has the smallest time cost; in its disk-access and comparison cost formulas, c_1, c_2, ... are constants.
Set difference: Among the three equivalent transformations for set difference listed in section 4.1, transformation d.3,
(Q(f_i, g_j) ...), has the smallest time costs; again, c_1, c_2, ... are constants.
Intersection: The only transformation for intersection has cost formulas in which c_1, c_2, ... are constants.
Natural join: The cost formulas for the natural join transformation likewise use constants c_1, c_2, ....
Selection: Using algorithm Selection(f_{i,k}, Condition), the costs for the selection transformation are expressed with
constants c_1, c_2, ....
Projection: Using algorithm Projection(f_{i,k}, Attributes), the costs for the projection transformation are expressed with
constants c_1, c_2, ....
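The operator cost formulas themselves are not reproduced above; the sketch below only indicates the general shape assumed in this section, a disk-access term in blocks (||F||) plus an in-memory comparison term in tuples (|F|), with operator-specific constants. Both the function form and the numeric constants are illustrative assumptions.

    def transformation_cost(blocks_read, blocks_written, comparisons,
                            c1=0.025, c2=0.030, c3=1.0e-5):
        # c1, c2: per-block disk access times; c3: per-comparison CPU time.
        # All three constants are invented placeholders.
        return c1 * blocks_read + c2 * blocks_written + c3 * comparisons

    # e.g. a step that reads a 10-block complement fragment, writes 6 blocks of
    # output and performs 100 x 50 comparisons:
    print(transformation_cost(10, 6, 100 * 50))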
5 Experimental Results
5.1 Implementation of CASE-DB
The implementation of the query modification technique has been carried out on ERAM - a relational prototype
DBMS [8]. ERAM is built on top of Unix 4.3BSD operating system on Sun 3/60 workstations and is written
in the C programming language.
CASE-DB consists of five basic modules, namely, file management module which performs the functions
of reading and writing tuples; relation maintenance module which creates, retrieves, updates and destroys
relations; algebra module which executes all algebra operations with the help of the file management module;
command interpreter module which supports a relationally complete query language and relation maintenance
commands; and lattice maintenance module which executes commands to create, update and delete
lattices (i.e., in the simplest case, fragmentation chains). Details of CASE-DB implementation are in [7] .
Information of all relations and their associated fragment chains are stored in two different dictionaries with
the same basic structure. The dictionaries are divided into pages 9 , and, at the end of each page, there is a pointer
to the next available space in the page, a pointer to the next page and the page number information.
In CASE-DB, the complement fragments (discussed in section 4) are stored. There are two reasons for this
choice:
• The query modification technique uses the complement fragment for transformations (discussed in section 4),
and not the entire fragment.
• Less space is required.
5.2 Creation of input relations
For each relation used in the experiment, the first and second attribute (C 1 and C 2 ) are of integer type and
the third attribute (C 3 ) is of character type. The first attribute is a unique random integer, which is the key for
the relation. The second attribute, which is not a key of the relation, is used to determine the selectivity of an
operator. The distribution of the attributes is uniform. Each relation involved in the experiment contains 5000
tuples, where the tuple size is 100 bytes. The number of tuples in the required fragment is always 100, and the
9 A page is 1024 bytes.
number of tuples in the other complement fragments varies between 100 and 200. All the relations are indexed unless
specified otherwise.
5.3 Factors Affecting CASE-DB
The factors that affect the performance of CASE-DB are discussed below.
(a) Risk
Probabilistic risk of overspending plays an important role in the selection of a fragment list, i.e., in solving the
fragment selection problem. In CASE-DB, depending on the risk given by the user, the SEL⁺ values change,
giving different time estimates for different risk values. This leads to the selection of a larger fragment when the
risk is higher, and a smaller fragment when the risk is lower.
(b) Complement Fragment Size
The time-cost COSTQ is a function of the size of the input fragments. Since we are using complement
fragments in iterative query evaluation steps, size of the complement fragment affects the fragment selection
process.
(c) Selectivity
The selectivity of an operator O p affects the selection of a fragment for the relations involved with O p . The
selectivity of O p when O p is either a Union, Intersection, Difference, Projection or Selection, is defined as the
ratio of the number of output tuples to the total number of input tuples. If O p is a Natural Join operator, the
selectivity of O p is defined as the ratio of the number of output tuples to the product of the input tuples. The
expected time of evaluation is a function of selectivity and thus a change in the selectivity alters the expected
time.
(d) Time
As the available time increases, more and more input tuples will be used, leading to the evaluation of the
original query.
5.4 Single Operator Queries
In this section, we present the results of single operator queries and see how the factors presented in the
earlier section affect the performance of CASE-DB. CASE-DB normally uses the risk given by the user only in
the second iteration. In the third and succeeding iterations, CASE-DB computes a higher risk value so that the
number of iterations can be reduced. However, in order to see the actual effect of the risk given by the user (β), in
the experiments we have used the risk β in all the iterations excluding the first iteration.
In each of the following tables, the column "Risk" denotes the risk of overspending given by the user.
The column "selp" denotes sel⁺ (the selectivity used in the time-cost formula during the second iteration). As
explained in previous sections, for a given query, time quota and risk of overspending, CASE-DB evaluates
explained in previous sections, for a given query, time-quota and the risk of overspending, CASE-DB evaluates
the query by substituting the relations in the query with their corresponding required fragments. If there is any
time left after the first iteration, CASE-DB iterates until the time available is very small or overspending of time
occurs. The column "itr" denotes the total number of iterations that the query has gone through, including the
iteration where the available time is overspent. Column "ptu" is the percentage of tuples used in the last iteration
where overspending did not occur. "pts" denotes the percentage of tuples selected to be used in the last iteration.
Note that "pts" includes those tuples that are used in the iteration where overspending might have occurred.
"ptu" and "pts" columns will have the same value when overspending did not occur, i.e., CASE-DB terminated
the process when it could not select a fragment that could be used for the next iteration such that the evaluation
of that iteration could be completed within the available time with the given risk. Finally "ovsp" represents the
amount of time overspent (in seconds).
5.4.1 Selection Operation
Table 5.1. Effect of Risk on a Single Selection Operation (sel_1 = .54, time quota = 10 sec; for complement fragment
sizes of 100, 150 and 200 tuples, the columns report selp, itr, ptu, pts and ovsp for several risk values).
A selection query of the following form:
select from rel where c2 > 500 risk=.5 time=10sec
is used in the experiments. 'rel' is the relation name and c2 is the second attribute of the relation. By varying
the selection formula (e.g., c2 < 500) we have obtained different selectivities for the selection operator. The
distribution of c2 in the relation and the fragment chain is uniform. Using a uniform distribution has given us a
consistent value for the selectivity to be used in the calculation of SEL⁺.
In the query modification technique, equation (3.4), sel⁺_i = E[sel_i] + d_β·√Var(sel_i) with P(sel_i > sel⁺_i) = β, is
used to compute SEL⁺. From this equation, it can be seen that with an increase in the risk of overspending (β), the
d_β value decreases, thereby reducing SEL⁺. Since the time-cost is a function of SEL⁺_i, the time-cost decreases as the risk increases.
This leads to the selection of larger fragments with an increase in risk. From Table 5.1 it can be seen that the
number of tuples selected for processing increases linearly with risk, except when the risk is almost 1 (.999).
When the risk (β) is almost 1, SEL⁺_i tends to zero (shown in the "selp" column), which leads to the selection of a
large number of tuples 10 .
Figure 5.1. Effect of Time on a Single Selection Operation.
Figure 5.2. Effect of Selectivity on a Single Selection Operation (risk = .5, time = 10 sec, 100 tuples per complement fragment).
For the selection operator, with the increase in the complement fragment size, the number of tuples selected
increases linearly. For the risk of 0.7 the percentages of tuples selected are 19, 20.5 and 23 for complement fragment
sizes 100, 150, and 200, respectively (Table 5.1). This increase can be attributed to the cost of the disjoint unions.
For example, to use 1000 tuples (these tuples have not been used in the first iteration) in the second iteration, we
need 10 disjoint unions for a lattice with a complement fragment size of 100, and 5 disjoint unions for a lattice
with a complement fragment size of 200. Though the disjoint union is not an expensive operator, it does increase
the total time required for processing the iteration.
10 The expected time, which is used in the selection of fragments, is a function of SEL⁺_i.
As the available time increases, the number of input tuples selected for processing also increases linearly
(Figure 5.1). Since we are comparing the expected time with the available time, if the expected time is less
than or equal to the available time, we use the fragment selected. So, when the available time increases, a larger
fragment can be selected such that the expected time would be within the available time.
The selectivity from the (i−1)-th iteration is used to compute SEL⁺, which in turn is used to compute the
expected time (time cost) of the i-th query evaluation step. Hence, the selectivity of an iteration should affect the
selection of a fragment in the next iteration. An increase in selectivity in the (i−1)-th iteration increases SEL⁺, which
in turn increases the expected time of the i-th query evaluation step. Therefore, with an increase in selectivity, the
size of the fragment used in the following iteration decreases linearly (Figure 5.2).
5.4.2 Natural Join operation
Table 5.2. Effect of Risk on a Single Natural Join Operation (sel_1 = 0.008, time quota = 20 sec; for complement
fragment sizes of 100, 150 and 200 tuples, the columns report selp, itr, ptu, pts and ovsp for several risk values).
In the experiments conducted for the natural join operator, the second attribute of the relations is used as
the join attribute. The join attribute values of the relations are uniformly distributed. To test the effect
of risk on natural join operator, for complement fragment size of 150 and risks of .001, .3, .5, .7 and .999, the
percentage of tuples selected is 30.5, 33.5, 35, 35 and 36 respectively. From this data, it can be seen that the total
number of tuples selected for processing linearly increases with risk. Unlike the selection operator where there is
a digression from the linear increase when the risk approaches 1, the increase in the number of tuples selected
for processing does not show any deviation from the linear increase. This linear increase can be attributed to
the very small values of the natural join selectivity. So even when the risk β tends to 1, SEL⁺_i remains
very small.
Actual selectivity by itself is a small value, and does not change the expected time drastically. When the
selectivity of the operator is increased while the complement fragment size, available time and risk remains the
same, the number of fragments selected for processing increases. Even for a small increase in the selectivity, a
larger fragment is selected; for example, for a change of .008 in selectivity (from .001 to .009), there is a difference
of 200 tuples.
5.4.3 Projection Operation
For the projection operator, the relation is projected on the second attribute. The distribution of the second
attribute in the relation and the fragments is uniform.
Figure 5.3. Effect of Time on a Single Natural Join Operation (200 tuples per complement fragment, risk = .5, sel = .001).
Figure 5.4. Effect of Selectivity on a Single Natural Join Operation (time = 10 sec, risk = .5, 150 tuples per complement fragment).
Table 5.3. Effect of Risk on a Single Projection Operation (sel_1 = 0.87, time quota = 10 sec; for complement fragment
sizes of 100, 150 and 200 tuples, the columns report selp, itr, ptu, pts and ovsp for several risk values).
Figure 5.5. Effect of Time on a Single Projection Operation.
Figure 5.6. Effect of Selectivity on a Single Projection Operation.
From Table 5.3 we can see that, for a complement fragment size of 100 and risks of .7 and .999, the percentages
of tuples selected for processing are 32 and 34, respectively. Though the percentage of tuples selected for the risk
of tuples selected for processing are 32 and 34, respectively. Though the percentage of tuples selected for the risk
of .999 is higher than the percentage of tuples selected for the risk of .7, there is an overspending of the available
time for the risk of .7. It should be noted that the overspending (for the risk of .7) occurs in the third iteration,
and the total number of iterations for the risk of .999 is only 2, i.e., the number of tuples selected in the second
iteration in the case of the .999 risk is higher than the number of tuples used in the second iteration when the risk is
.7. From this, it can be deduced that, when the number of iterations is reduced, a higher number of tuples can be
processed. As the risk β increases, the number of iterations is reduced. Using the above two results, CASE-DB
has been designed to use a higher risk value for the third and succeeding iterations, irrespective of the risk given
by the user.
In the case of the projection operator, with an increase in the available time, the number of tuples selected for
processing linearly increases (Figure 5.5). When the selectivity of the operator changes, unlike for the join operator,
the slope of the curve is very small (Figure 5.6).
5.4.4 Intersection Operation
Intersection operation can be considered as a special case of the join operation which returns a relatively
low number of output tuples.
Table 5.4. Effect of Risk on a Single Intersection Operation (sel_1 = .4, time quota = 20 sec; for complement fragment
sizes of 100, 150 and 200 tuples, the columns report selp, itr, ptu, pts and ovsp for several risk values).
From Table 5.4 it can be seen that the effect of risk changes on the number of tuples selected is minimal, i.e.,
the variation in the number of tuples selected with respect to the increase in risk value is just 2% (from 12% to
14%) for the complement fragment size of 100. As stated in the beginning of this section, the first attribute of
all the relations used in experiments is the key for the relation, and all the relations are indexed unless specified
otherwise. When the relations are indexed, disjoint union of the indexed relations resulting in an indexed relation
is as expensive as a union operation. The time cost formula is made up of the time for reading and writing tuples,
disjoint union and processing the data. Since the cost of disjoint union is equal to the cost of union, the effect of
risk (used in writing and processing cost of tuples) is reduced.
Figure 5.7 shows the effect of time on the number of tuples selected for the intersection operator. With an
increase in time, the number of tuples selected for processing increases linearly.
Figure 5.7. Effect of Time on a Single Intersection Operation.
5.4.5 Union Operation
Table 5.5. Effect of Risk on a Single Union Operation (sel_1 = .6, time quota = 20 sec; for complement fragment
sizes of 100, 150 and 200 tuples, the columns report selp, itr, ptu, pts and ovsp for several risk values).
Effects of risk and time variations are tested for the union operation. Like the intersection operator, the cost
of disjoint union is very high. Table 5.5 shows the effect of risk for different complement fragment sizes. For a
complement fragment size of 200 and risks of .3 and .999 the number of tuples selected for processing are 14%
and 16%, respectively, which is just a 2% increase in the number of tuples.
Figure 5.8. Effect of Time on a Single Union Operation.
Figure 5.8 shows the linear increase in the number of tuples selected for processing with an increase in available
time.
5.5 Difference Operator
Table 5.6. Effect of Risk on a Single Difference Operation (sel_1 = .1, time quota = 20 sec; for complement fragment
sizes of 100, 150 and 200 tuples, the columns report selp, itr, ptu, pts and ovsp for several risk values).
Experiments similar to those for the union and intersection operators were conducted with the difference operator. The
transformation of the difference operator for iterative query evaluation is more complicated than those of the union and
intersection operators. One has to note that, due to the non-monotonicity of the difference operator, we might have
to delete certain tuples from the output of the previous iteration. Figure 5.9 shows the effect of time on a single
difference operation. As the available time increases, more and more input tuples are chosen for processing.
Figure 5.9. Effect of Time on a Single Difference Operation.
Table 5.6 shows the effect of risk as well as of the complement fragment size on the difference operator. Consider
the percentages of tuples selected for the risk of .7 for complement fragment sizes of 100, 150 and 200: they are
13, 15.5 and 18%, respectively. It can be seen that, with an increase in the fragment size, the percentage of tuples
selected for the same risk and time increased. With larger complement fragments, the number of disjoint unions
is reduced, thereby reducing the expected time, and a higher number of tuples is selected.
5.6 Multi-operator Queries
Queries that contain monotone operators (Union, Intersection, Join, Selection and Projection) are used in
the experiments. The experiments are mainly designed to test the effect of risk variations in multi-operator
queries.
The relations used in these experiments contained 5000 tuples, and 200 tuples per complement fragment
with a base fragment containing 100 tuples.
From Table 5.7 and Table 5.8 it can be seen that, with an increase in the risk value, the number of input
tuples selected for the evaluation increases. Also note that overspending the available time quota does not happen often,
but when such an overspending occurs, the total time overspent is usually large. This is due to the
complex nature of the time-cost formula, especially when the number of operators in a query increases. Since the
number of tuples in the intermediate results depends on the selectivity of the operators, a small discrepancy in
the estimated value can either overestimate or underestimate the time-cost, resulting in either overspending or
underspending the time quota.
Table 5.7. Performance of CASE-DB in the case of multi-operator queries (200 tuples per complement fragment,
time quota = 30 sec; columns itr, ptu, pts and ovsp for several risk values).
Table 5.8. Performance of CASE-DB in the case of multi-operator queries (200 tuples per complement fragment,
time quota = 100 sec; columns itr, ptu, pts and ovsp for several risk values).
The incorporation of difference (nonmonotone) operator in a query leads to a complicated transformation
during the iterative query evaluation. In the current version of CASE-DB the monotonicity property is preserved
in the transformation by not including new tuples for the minuend of the difference operator in the second and
succeeding iterations. The solution to efficiently processing a nonmonotone multi-operator query will be the
subject of another report.
6 Conclusion and Future Work
In this paper, we discuss non-aggregate query processing techniques in CASE-DB, a real-time DBMS, and
present the results of the experiments conducted on CASE-DB. We analyze the complexity of risk control methods,
and propose, implement and evaluate a heuristic solution for controlling the risk of overspending. For the difference
operator in a multi-operator query, we preserve the monotonicity property, thereby making the transformations
and evaluations simpler. This is achieved by not including new tuples for the minuend of the difference operator
after the first iteration.
Using the risk factor, the user indirectly specifies how aggressive (s)he wants to be in getting the query
evaluated with as "large" input relations (fragments) as possible within the time quota. If the given risk is
high, the DBMS becomes bold and chooses "larger" input relations. If, on the other hand, the risk is very low, then
the DBMS chooses small input relations to make sure that the time quota is not overspent. Another way of looking at
risk factors is as "query hints". The DBMS is given a hint about the level of aggressiveness in choosing fragments.
Please note that we use the risk control approach only in the second query evaluation step after making sure
that a (possibly, lower-quality) query response is obtained in the first query evaluation step. Also note that, in
the third step, we use a very high risk; so the fourth step is rarely executed (thereby controlling the overhead
introduced due to iterative query evaluation).
The risk factor approach is essentially a query modification technique that, through a priori and run-time
protocols between the user and the DBMS, evaluates the query or its modified versions within a given time quota.
Our risk-based approach introduces a new paradigm for real-time system (DBMS) users in that, in addition to
time constraints, they are asked to specify a risk factor that guides the DBMS in deciding how aggressive it should
be in evaluating the query in the second step.
Note that in our approach, the choice of relation fragments is completely semantics-based, and, hence,
it will change on the basis of each application. We do not therefore provide any guidance in the paper for the
selection of fragments. However, as discussed in the experimental part, the sizes of complement fragments do
influence the performance; this is empirically evaluated in the paper.
In our study of time-constrained queries, we make the following choices: (a) Timing constraints are always
satisfied. (b) Each query is evaluated in at most 4 query evaluation iterations (steps). This minimizes the overhead
due to iterations. (c) We use the risk factor approach only in the second step, and attempt to use a risk as close
as possible, from the lower end, to the user-specified risk. Under the choices (a)-(c), the experimental results
section of the paper reports the performance of our approach in terms of a number of parameters such as (i) the
total number of iterations, (ii) the number of tuples used, (iii) the percentage of tuples used in the last iteration
where overspending did not occur, (iv) the amount of time overspent, (v) the amount of time wasted, etc. One
can certainly use any of the above-listed parameters as a performance metric, and choose risk factors in order to
have desired values for the above-listed parameters.
In some real-time databases, transactions that complete are given "values". Usually, the value assigned to a
transaction decreases with time after the transaction passes its deadline. Viewing a query as a read-only
transaction, a value is assigned to a completed query. And, the sum of the values accumulated for completed
transactions serves as a performance metric. In comparison with our approach, value-based models do not have
any notion of controlled revision/rescaling of queries. Perhaps, the value-based approach can be extended by
adding query modification and a way of assigning varying "values" to modified versions of the query. But, such
an approach would require more guidance from the user as it is not clear how one would judge the "value of
a modified query". Also, the DBMS would need additional guidance, perhaps in terms of values, as to how it
should modify the query. The risk-based approach can also be used for processing real-time transactions.
If each transaction specifies its "optional" and "required" parts (subtransactions) then the DBMS can modify
transactions (by downsizing them), and make sure that all or most of the transactions can complete within their
deadlines. We are currently investigating such an approach.
Appendix
Proof of Theorem 1: 2FSP_α is in NP, since a nondeterministic algorithm needs only to guess a fragment list F_i
and check in polynomial time that α_i ≤ α and α − α_i ≤ ε.
For the transformation, we first define the 0/1 Knapsack problem:
Consider a finite set U and the set Z of positive integers. With each u ∈ U we associate a size s(u) ∈ Z and
a value v(u) ∈ Z. Let B and K be two positive integers. Is there a U' ⊆ U such that Σ_{u∈U'} s(u) ≤ B and Σ_{u∈U'} v(u) ≥ K?
The 0/1 Knapsack problem is NP-complete, and remains NP-complete even when s(u) = v(u)
for all u ∈ U [5]. In what follows, we assume s(u) = v(u) for all u ∈ U.
We now give the transformation from the 0/1 Knapsack problem to 2FSP_α.
We construct a set D of relations r_j, a two-element fragment set S_{r_j} for each r_j in D, and the quantities α, α_i and ε as follows.
1. The set D contains relations r_1, ..., r_N, where N is the number of elements in U = {u_1, ..., u_N}.
2. S_{r_j} = {f_j, f'_j}.
3. α := B/N, and ε is defined in terms of Σ_{u∈U} s(u).
4. α_i is defined in terms of the sizes s(u_j) of the elements u_j whose larger fragments appear in the fragment list F_i.
This transformation takes polynomial time. We now prove that there exists a set U' ⊆ U with K ≤ Σ_{u∈U'} s(u) if and only if there exists a fragment list F_i with α_i ≤ α and α − α_i ≤ ε.
Assume that such a fragment list F_i exists. Then construct U' as follows: for each f_j in F_i, if f_j ..., add the corresponding element of U to U'. Then U' has the property that K ≤ Σ_{u∈U'} s(u).
Assume that there does not exist an F_i with α_i ≤ α and α − α_i ≤ ε. For any possible F_i, consider the corresponding U' constructed as follows: for each f_j in F_i, if f_j ..., add the corresponding element of U to U'.
1. F_i does not satisfy the condition α_i ≤ α. Then, since α ≡ B/N, we have ... .
2. F_i does not satisfy the constraint α − α_i ≤ ε. Then, since ..., we have ... .
Since there is a one-to-one and onto function (and its inverse) between the set of all possible F_i's and all possible subsets U' of U, we conclude that there does not exist any U' with K ≤ Σ_{u∈U'} s(u). Q.E.D.
Proof of Theorem 2: The reduction from the 0/1 Knapsack problem to 2FSP_α-M is exactly the same as the reduction of the 0/1 Knapsack problem to 2FSP_α, and the transformation is identical. We note that, in monotone queries, we do not check some of the fragment lists. We can show that the fragment lists which are not checked correspond to those U' which need not be checked either. Given a fragment list F_i, we can construct its corresponding U' as follows: for each f_j in F_i, if f_j ..., add the corresponding element of U to U'.
Consider three fragment lists F_1 ⊆ F_2 ⊆ F_3 with risks α_1, α_2, α_3, respectively. When Q is monotone:
1. If α < α_2 we do not evaluate α_3. When α < α_2, we have ... Σ_{u∈U'_2} s(u). Since F_2 ⊆ F_3, we have ... Σ_{u∈U'_3} s(u), which means we need not check U'_3.
2. If α_2 ≤ α and α − α_2 > ε, we need not evaluate α_1. When α_2 ≤ α and α − α_2 > ε, ..., which means that we need not check U'_1.
So we conclude that 2FSP_α-M is NP-Complete. Q.E.D.
--R
"Generalization and a Framework for Query Modification"
"Estimating Record Selectivities"
"On the Estimation and Use of Selectivities in Database Performance Evaluation"
"On Automated Query Modification in Databases"
"Computers and Intractability-A Guide to the Theory of NP-Completeness"
"Set Query Optimization in Distributed Database Systems"
"CASE-DB: A System for Processing Real-Time, Non-Aggregate Relational Algebra Queries"
"The Implementation of the Extended Relational Database Management System"
"Relational Aggreate Query Processing Techniques for Real-Time Databases"
"Statistical Estimators for Relational Alegbra Expressions"
"Processing Aggregate relational Queries with Hard Time Constraints"
"Statistical Estimators for Aggregate Relational Algebra Queries"
"Processing Time-Constrained Aggregate Queries in CASE-DB"
"Statistical Profile Estimation in Database Systems"
"Monotonically Improving Approximate Answers to Relational Algebra Queries"
"An Object-Oriented Query Processor That Produces Monotonically Improving Approximate Answers"
--TR
--CTR
SungKil Lee , Gltekin zsoyolu, Distributed processing of time-constrained queries in CASE-DB, Proceedings of the fifth international conference on Information and knowledge management, p.279-287, November 12-16, 1996, Rockville, Maryland, United States
Kyoung-Don Kang , Sang H. Son , John A. Stankovic, Managing Deadline Miss Ratio and Sensor Data Freshness in Real-Time Databases, IEEE Transactions on Knowledge and Data Engineering, v.16 n.10, p.1200-1216, October 2004
Kyoung-Don Kang , Sang H. Son , John A. Stankovic, Differentiated Real-Time Data Services for E-Commerce Applications, Electronic Commerce Research, v.3 n.1-2, p.113-142, January-April
M. Amirijoo , J. Hansson , S. H. Son , S. Gunnarsson, Experimental evaluation of linear time-invariant models for feedback performance control in real-time systems, Real-Time Systems, v.35 n.3, p.209-238, April 2007
Nevzat Hurkan Balkir , Gultekin Ozsoyoglu , Z. Meral Ozsoyoglu, A Graphical Query Language: VISUAL and Its Query Processing, IEEE Transactions on Knowledge and Data Engineering, v.14 n.5, p.955-978, September 2002
Gultekin Ozsoyoglu , Richard Thomas Snodgrass, Temporal and Real-Time Databases: A Survey, IEEE Transactions on Knowledge and Data Engineering, v.7 n.4, p.513-532, August 1995 | query modification;relational databases;databases;real-time databases;query processing |
627715 | A Formal Characterization of Epsilon Serializability. | AbstractEpsilon serializability (ESR) is a generalization of classic serializability (SR). In this paper, we provide a precise characterization of ESR when queries that may view inconsistent data run concurrently with consistent update transactions.Our first goal is to understand the behavior of queries in the presence of conflicts and to show how ESR in fact is a generalization of SR. So, using the ACTA framework, we formally express the intertransaction conflicts that are recognized by ESR and through that define ESR, analogous to the manner in which conflict-based serializability is defined. Secondly, expressions are derived for the amount of inconsistency (in a data item) viewed by a query and its effects on the results of a query. These inconsistencies arise from concurrent updates allowed by ESR. Thirdly, in order to maintain the inconsistencies within bounds associated with each query, the expressions are used to determine the preconditions that operations have to satisfy. The results of a query, and the errors in it, depend on what a query does with the, possibly inconsistent, data viewed by it. One of the important byproducts of this work is the identification of different types of queries which lend themselves to an analysis of the effects of data inconsistency on the results of the query. | Introduction
Epsilon Serializability (ESR) [21, 29], a generalization of classic serializability (SR), explicitly
allows some limited amount of inconsistency in transaction processing (TP). ESR enhances
concurrency since some non-SR execution schedules are permitted. For example, epsilon-
transactions (ETs) that just perform queries may execute in spite of ongoing concurrent
updates to the database. Thus, the query ETs may view uncommitted, i.e., possibly in-
consistent, data. Concretely, an update transaction may export some inconsistency when it
updates a data item while query ETs are in progress. Conversely, a query ET may import
some inconsistency when it reads a data item while uncommitted updates on that data item
exist. The correctness notion in ESR is based on bounding the amount of imported and
exported inconsistency for each ET. The benefits of ESR have been discussed in the papers
cited above. For instance, ESR may increase system availability and autonomy [22] in distributed
TP systems, since asynchronous execution is allowed. But in this paper we restrict
our attention to ESR in a centralized TP system.
In its full generality, update ETs may view inconsistent data the same way query ETs
may. However, in this paper we focus on the situation where query-only ETs run concurrently
with consistent update transactions. That is, the update transactions are not allowed to view
uncommitted data and hence will produce consistent database states.
Our first goal is to understand the behavior of queries in the presence of conflicts and to
show how ESR in fact is a generalization of SR. So, in Section 2, using the ACTA framework
[5, 6, 4], we formally express the inter-transaction conflicts that are recognized by ESR and,
through that, define ESR, analogous to the manner in which conflict-based serializability is
defined.
Our second goal is to quantify the amount of inconsistency experienced by queries. To
this end, in section 3, expressions are derived for the amount of inconsistency (in a data
item) viewed by a query. These inconsistencies arise from concurrent updates allowed by
ESR. This section also considers how transaction aborts affect the inconsistency of data.
ESR imposes limits on the amount of inconsistency that can be viewed by a query. So,
our third goal is to find ways by which these bounds are maintained. Using the expressions
quantifying the inconsistency, we derive preconditions that operations have to satisfy. Derivation
of these preconditions is the subject of Section 4. These preconditions point to possible
mechanisms that can be used to realize ESR and show that more flexible implementations
than those presented in [21, 29] are possible.
The effects of the inconsistent view on the results of a query depend on what a query does
with the viewed data. In general, a small data inconsistency can translate into an arbitrarily
large result inconsistency. So our fourth goal is to derive the effect of the inconsistency of
the data read by a query on the results produced by the query. This derivation is done in
Section 5 which also shows some of the restrictions that need to be imposed on the queries
and updates so as to be able to bound the inconsistency in the result of the query to lie
within reasonable limits. This helps characterize the situations in which ESR is applicable.
Thus, one of the important byproducts of this work is the identification of different types
of queries which lend themselves to an analysis of the effects of data inconsistency on the
results of the query.
Related work is discussed in Section 6, while Section 7 concludes the paper and offers
suggestions for further work.
In the rest of this introduction, we provide an informal introduction to ESR and define
the terms used.
1.1 ESR and ETs
A database is a set of data items. Each data item contains a value. A database state is the
set of all data values. A database state space is the set of all possible database states. A
database state space S_DB is a metric space if it has the following properties:
• A distance function distance(u, v) is defined over every pair of states u, v ∈ S_DB, taking values in the real numbers.
The distance function can be defined as the absolute value of the difference between two states of an account data item. For instance, the distance between $50 and $120 is $70. Thus, if the current account balance is $50 and $70 is credited, the distance between the new state and the old state is $70.
• Symmetry. For every u, v ∈ S_DB, distance(u, v) = distance(v, u).
Continuing with the example, suppose the current account balance is $120 and $70 is debited. The distance between the new state and the old state is still $70.
• Triangle inequality. For every u, v, w ∈ S_DB, distance(u, v) + distance(v, w) ≥ distance(u, w).
The account data clearly satisfy the triangle inequality. For example, suppose the current account balance is $50 and $70 is credited. The distance between the new state and the old state, as we saw before, is $70. Suppose $40 is now debited. The distance between the state after the credit and the state after the debit is $40. The distance between the initial state of the account ($50) and the one after both updates ($80) is $30; since $70 + $40 ≥ $30, the triangle inequality is satisfied.
Many database state spaces have such a regular geometry. As we just saw, in banking
databases, dollar amounts possess these properties. Similarly, airplane seats in airline reservation
systems also form a metric space.
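To make the metric-space requirements concrete, the following small Python sketch (ours, not part of the paper) models the account state space with the absolute-difference distance and checks the three properties on the dollar amounts used in the example above.

# Minimal sketch: an account state space with distance(u, v) = |u - v|,
# checked against the three metric properties on the example's dollar values.
def distance(u, v):
    return abs(u - v)

states = [50, 120, 80]   # $50, after a $70 credit, after a $40 debit

# the distance is defined (and non-negative) for every pair of states
assert all(distance(u, v) >= 0 for u in states for v in states)
# symmetry: distance(u, v) = distance(v, u)
assert all(distance(u, v) == distance(v, u) for u in states for v in states)
# triangle inequality: distance(u, v) + distance(v, w) >= distance(u, w)
assert all(distance(u, v) + distance(v, w) >= distance(u, w)
           for u in states for v in states for w in states)

print(distance(50, 120), distance(120, 80), distance(50, 80))   # 70 40 30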
Usually the term "database state space" refers to the state on disk (implicitly, only the
committed values). We are not restricted to the database state on disk, however, since we
also consider the intermediate states of the database, including the contents in the main
memory. We will use the shorter term "data state" to include the intermediate states. Note
that the magnitude of an update can be measured by the distance between the old data item
state and the new data item state.
ESR defines correctness for both consistent states and inconsistent states. In the case
of consistent states, ESR reduces to classic serializability. In addition, ESR associates an
amount of inconsistency with each inconsistent state, defined by its distance from a consistent
state. Informally, inconsistency in a data item x with respect to a query q is defined as the
difference between the current value of x and the value of x if no updates on x were allowed
to execute concurrently with q. A query imports inconsistency when it views, i.e., reads,
an inconsistent data item. Conversely, an update transaction exports inconsistency when it
updates, i.e., writes to, a data item while query ETs that read the data item are in progress.
ESR has meaning for any state space that possesses a distance function. In general, serializable
executions produce answers that have zero inconsistency, but if a (non-serializable)
query returns an answer that differs from a serializable result by at most $10,000 we say
that the amount of inconsistency produced by the query is $10,000. In addition, the triangle
inequality and symmetry properties help us design efficient algorithms. In this paper, we
will confine our attention to state spaces that are metric spaces.
To an application designer and transaction programmer, an ET is a classic transaction
with the addition of inconsistency limits. A query ET has an import-limit , which specifies
the maximum amount of inconsistency that can be imported by it. Similarly, an update
ET has an export-limit that specifies the maximum amount of inconsistency that can be
exported by it. Since our focus is on queries, and for simplicity of presentation, we examine
in detail ETs when import-limits are placed on individual data items (a single attribute in
the relational model). The algorithms can be extended to handle an import-limit that spans
several attributes (e.g., checking accounts and savings accounts).
An application designer specifies the limit for each ET and the TP system ensures that
these limits are not exceeded during the execution of the ET. For example, a bank may wish
to know how many millions of dollars there are in the checking accounts. If this query were
executed directly on the checking accounts during the banking hours, serious interference
would arise because of updates. Most of the interference is irrelevant, however, since typical
updates refer to small amounts compared to the query output unit, which is in millions of
dollars. Hence we must be able to execute the query during banking hours. Specifically,
under ESR, if we specify an import-limit for the query ET of, for example, $100,000 for this query, the result would be guaranteed to be within $100,000 of a consistent value (produced by a serial execution of the same transactions). For example, if the ET returns the value $357,215,000 (before round-off), then at least one of the serial transaction executions would have yielded a serializable query result in the $357,215,000 ± $100,000 interval.
The inconsistency accumulated by a query that reads multiple data items, such as in the
example above, depends on how the values read are used within the query. The percolation
of inconsistency from the data items read by the query to the results of the query is an
interesting issue and is discussed in Section 5.
Sections 3 and 4 focus on individual data items. Let us assume that limits are imposed
on the amount of inconsistency an ET can import or export with respect to a particular
data item. Let import_limit_{t,x} stand for the import-limit that has been set for ET t with respect to data item x. Let import_inconsistency_{t,x} stand for the amount of inconsistency that has already been imported by ET t on data item x. The system that supports queries reading inconsistent data must ensure the following for every ET t (that accesses data item x):
import_inconsistency_{t,x} ≤ import_limit_{t,x}   (1)
export_inconsistency_{t,x} ≤ export_limit_{t,x}   (2)
We call the invariants (1) and (2) Safe(t, x) for brevity. For a query ET q reading x, Safe(q, x) reduces to:
import_inconsistency_{q,x} ≤ import_limit_{q,x}   (3)
export_inconsistency_{q,x} = 0   (4)
Safe(q, x) states that a query q cannot exceed its import-limit and that q cannot export inconsistency.
Thus, during the execution of each ET, the system needs to maintain the amount of
inconsistency the ET has imported so far. Note that the amount of inconsistency is given
by the distance function and the incremental accumulation of inconsistency depends on the
triangle inequality property of metric spaces. Without triangle inequality, we would have to
recompute the distance function for the entire history each time a change occurs. In Section
3 we derive the algorithms necessary to maintain the specified limit on the inconsistency
imported from individual data items.
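As an illustration of the incremental bookkeeping described above, the following Python sketch (ours, not the paper's; the attribute names are assumptions) accumulates the inconsistency imported by a query ET on a single data item and checks invariant (3).

# Sketch: incremental tracking of imported inconsistency for one query ET q
# and one data item x. The triangle inequality lets us add only the change
# of each concurrent update instead of recomputing distances over the history.
class QueryET:
    def __init__(self, import_limit):
        self.import_limit = import_limit   # import_limit_{q,x}
        self.imported = 0.0                # import_inconsistency_{q,x}

    def observe_update(self, change):
        # 'change' is the distance between x before and after a concurrent
        # update; the accumulated sum is an upper bound on the inconsistency.
        self.imported += abs(change)

    def safe(self):
        # invariant (3): import_inconsistency_{q,x} <= import_limit_{q,x}
        return self.imported <= self.import_limit

q = QueryET(import_limit=100_000)
for change in (70, -40, 2_500):
    q.observe_update(change)
print(q.imported, q.safe())   # 2610.0 True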
Before we end this section we would like to point out that throughout the paper, it is
assumed that the read set of a query, i.e., the set of data items read by a query is not affected
by the inconsistency in the data read by a query.
2 A Formal Definition of ESR
We use the ACTA formalism [4, 6] to introduce the notion of conflicts between operations and discuss the dependencies induced between transactions when they invoke conflicting operations.
For a given state s of a data item, we use return(s, a) to denote the output produced by operation a, and state(s, a) to denote the state produced after the execution of a. value(s, P) denotes the value of predicate P in state s.
Given a history H, H_(x) is the projection of the history containing the operation invocations on a data item x. H_(x) captures both the order of execution of the operations (a_i precedes a_{i+1}) as well as the functional composition of operations. Thus, a state s of a data item produced by a sequence of operations equals the state produced by applying the history H_(x) corresponding to the sequence of operations on the data item's initial state s_0, i.e., s = state(s_0, H_(x)). For brevity, we will use H_(x) to denote the state of a data item produced by H_(x), implicitly assuming initial state s_0. Note that H_(x) may depend on values read in H from data items other than x.
Definition 1 Two operations a and b conflict in a state produced by H_(x), denoted by conflict(H_(x), a, b), iff (state(state(H_(x), a), b) ≠ state(state(H_(x), b), a)) ∨ (return(state(H_(x), b), a) ≠ return(H_(x), a)) ∨ (return(state(H_(x), a), b) ≠ return(H_(x), b)).
Thus, two operations conflict if their effects on the state of a data item or their return values are not independent of their execution order.
Let a_{t_i}[x] denote operation a invoked by transaction t_i on data item x. (a_{t_i}[x] → b_{t_j}[x]) implies that a_{t_i}[x] appears before b_{t_j}[x] in H.
Let us first define the classic serializability correctness criterion.
Definition 2 Given a history H of events relating to transactions in T, C_SR, a binary relation on T, is defined as follows:
(t_i C_SR t_j) iff ∃x and operations a, b such that conflict(H_(x), a, b) ∧ (a_{t_i}[x] → b_{t_j}[x]).
Let C*_SR be the transitive closure of C_SR; i.e., (t_i C*_SR t_j) iff there is a chain of C_SR relationships leading from t_i to t_j. H is (conflict preserving) serializable iff ∀t ∈ T, ¬(t C*_SR t).
To illustrate the practical implications of this definition, let us consider the case where all operations perform in-place updates. In this case, if transactions t_i and t_j have a C_SR relationship, i.e., t_j has invoked an operation which conflicts with a previous operation by t_i, then as long as t_i is serialized before t_j, the conflict can be tolerated. Consider the (serialization) graph corresponding to the C_SR relation induced by a history. The above definition states that for the history to be serializable, there should be no cycles in the graph. That is, the serialization order must be acyclic.
The following three definitions constitute the definition of ESR.
Definition 3 Let T be a set of transactions whose events are recorded in history H. C_ESR, a binary relation on transactions in T, is defined as follows:
(t_i C_ESR t_j) iff (t_i C_SR t_j) ∧ ¬(Safe(t_i, x) ∧ Safe(t_j, x)), where x is a data item on which the conflicting operations were invoked.
In other words, t_i and t_j are related by C_ESR if and only if they are related by C_SR and they violate one of the invariants that constitute the predicate Safe. Note that the last term in the definition of C_ESR makes C_ESR strictly weaker than C_SR.
Just as C_SR denotes ordering requirements due to conflicts under serializability, C_ESR denotes the ordering requirements imposed by conflicts under epsilon serializability. Since C_ESR is a subset of the C_SR relationship, a smaller number of orderings are imposed under ESR than under classic serializability.
Consider the graph corresponding to the C_SR and C_ESR relations induced by a history.
Definition 4 A cycle formed by transactions t_0, t_1, ..., t_n (= t_0) has a C_ESR edge iff ∃i such that (t_i C_ESR t_{i+1}).
As the next definition shows, (unlike SR) ESR can tolerate cycles formed by the C_SR relation. However, if the graph has a cycle consisting of a C_ESR edge, then the history is not ESR.
Definition 5 A history H is (conflict-preserving) epsilon serializable iff, in the graph which corresponds to the C_SR and C_ESR relations induced by the history, there is no cycle that has a C_ESR edge.
Before we examine the practical meaning of the above definitions, let us summarize the properties of ESR compared to serializability:
• When all import-limits and export-limits are zero, C_ESR reduces to C_SR. C_ESR is then just C_SR and ESR reduces to serializability.
• A set of transactions may not satisfy serializability because of cycles in the C_SR relation, but may satisfy ESR.
• When some import-limits and export-limits are greater than zero, C_ESR ⊆ C_SR (given the additional term in Definition 3). That is, ESR may allow more operations to execute concurrently than serializability.
To understand the practical meaning of the definitions, let us focus on a query q executing concurrently with an update transaction t. Suppose q reads x and this is followed by t's write to x. Assume that t's write does not violate Safe(t, x). Thus (q C_SR t) but (q C_ESR t) is not true. Assume that now q does another read of x. Let us consider two scenarios:
1. Assume that q's second read does not violate Safe(q, x), and so (t C_SR q) but not (t C_ESR q). So we have a cyclic C_SR relationship and yet the read is permitted by ESR. The reason for this is that, under ESR, the values of x read by q are considered acceptable, i.e., within the limits of inconsistency specified. More precisely, the value of x read by q when concurrently executed with t is within the inconsistency limits considering either of the serialization orderings: (q, t) or (t, q). That is why no orderings are imposed by ESR, since according to ESR, both orderings are acceptable.
2. Assume that q's second read violates Safe(q, x). So (t C_ESR q). This imposes an ordering requirement such that it is as though q read x serially after t. Thus (t, q) is the only serialization order acceptable, in order to conform to the inconsistency limits. This implies that we cannot have (q C_SR t), since that corresponds to the opposite serialization ordering. Hence it is required that there be no cycles consisting of C_SR and C_ESR edges.
Given the above characterization of ESR, one of the first tasks is to quantify the inconsistency experienced by a query so that we can check whether the Safe predicates hold. This is done in Section 3. Then, in Section 4, we examine how to ensure that only epsilon serializable histories are produced. One way is to allow no C_ESR relation to form, i.e., to disallow an operation if it violates Safe. The question of how the inconsistency in the data read by a query percolates to the results of the query is studied in Section 5. Different types of queries are identified with a view to determining the amount of data inconsistency they can tolerate in order to maintain specified limits on result inconsistency.
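To illustrate Definitions 4 and 5, the following sketch (ours, not an algorithm given in the paper; the edge lists are hypothetical inputs) represents the C_SR and C_ESR relations as directed edges and rejects a history exactly when some cycle contains a C_ESR edge.

# Sketch: checking Definition 5 on a small conflict graph. A history is
# epsilon serializable iff no C_ESR edge (u, v) lies on a path from v back to u.
def reachable(adj, src, dst):
    stack, seen = [src], set()
    while stack:
        n = stack.pop()
        if n == dst:
            return True
        if n not in seen:
            seen.add(n)
            stack.extend(adj.get(n, ()))
    return False

def is_esr(csr_edges, cesr_edges):
    adj = {}
    for u, v in list(csr_edges) + list(cesr_edges):
        adj.setdefault(u, []).append(v)
    return not any(reachable(adj, v, u) for u, v in cesr_edges)

# hypothetical example: q and t conflict in both directions (a C_SR cycle),
# but neither conflict violates Safe, so there is no C_ESR edge
print(is_esr(csr_edges=[('q', 't'), ('t', 'q')], cesr_edges=[]))     # True
# if t's write made q unsafe, (t, q) becomes a C_ESR edge on the cycle
print(is_esr(csr_edges=[('q', 't')], cesr_edges=[('t', 'q')]))       # False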
3 Inconsistency Imported by a Query ET
We focus on the inconsistency of a single data item x read by a query q. Informally, inconsistency
in x with respect to a query q is defined as the difference between the current value
of x and the value of x if no updates on x were allowed to execute concurrently with q.
Consider update transactions t_1, t_2, ..., where each of the t_i's updates x. We allow a query q to read x multiple times and each of the updating t_i's to write x multiple times. Let us define a transaction t_i's write interval with respect to x to be the interval of time between its first write and its last write. A read interval is defined similarly.
Every query q has a set of Concurrent Update Transactions (denoted by CUT(q)). Update transaction t_i is in CUT(q) iff its write interval intersects with q's read interval. Note that lock-based realizations of serializability ensure that CUT(q) = ∅.
The question we are attempting to answer here is the following: What can one say about
the value of x read by q given the CUT(q)? Our main objective is to bound the inconsistency
in the value of x read by q. But first we establish that the write intervals of transactions
in CUT(q) are totally ordered, since consistent update ETs are serializable.
Theorem 1 The serialization order of the transactions t_i ∈ CUT(q), w.r.t. x, is the same as the order in which each t_i enters its write interval, which in turn is the same as the order in which they commit.
Now we name the values of x at different points in time:
• x_current is the current value of x.
• x_{t_i}^{final} is the value of x committed by transaction t_i.
• x_{t_i}^{initial} is the value of x when transaction t_i in CUT(q) begins, i.e., x_{t_i}^{initial} = x_{t_{i−1}}^{final}.
• x_q^{initial} is defined to be the value of x before any of the transactions in CUT(q) begin execution. That is, if CUT(q) ≠ ∅, x_q^{initial} = x_{t_1}^{initial}.
From these values of x we can derive:
current_change_{t_i,x} ≡ distance(x_{t_i}^{initial}, x_current)
max_change_{t_i,x} ≡ max during t_i {current_change_{t_i,x}}
final_change_{t_i,x} ≡ distance(x_{t_i}^{initial}, x_{t_i}^{final})
Clearly, final_change_{t_i,x} ≤ max_change_{t_i,x} and current_change_{t_i,x} ≤ max_change_{t_i,x}.
We are in a position to define inconsistency formally:
inconsistency_{q,x} ≡ distance(x_q^{initial}, x_current).
That is, inconsistency_{q,x} denotes the distance between x_q^{initial} and x_current. So, the inconsistency in the value of x for a query q while t_i is in progress and update ETs t_1, ..., t_{i−1} have committed is given by
inconsistency_{q,x} = distance(x_q^{initial}, x_current)
  ≤ distance(x_{t_1}^{initial}, x_{t_1}^{final}) + ... + distance(x_{t_{i−1}}^{initial}, x_{t_{i−1}}^{final}) + distance(x_{t_i}^{initial}, x_current)
  = final_change_{t_1,x} + ... + final_change_{t_{i−1},x} + current_change_{t_i,x}.
Let committed_CUT(q) denote the subset of CUT(q) containing the ETs that have committed. Let t_current ∈ CUT(q) denote the update transaction whose write interval has begun but has not ended yet. If no such t_current exists, it has a "null" value and current_change_{null,x} is defined to be 0.
From these discussions we can state the following theorem, which expresses (bounds on) the inconsistency of a data item read by a query q when its read interval intersects with the write intervals of ETs in CUT(q).
Theorem 2
inconsistency_{q,x} = distance(x_q^{initial}, x_current)   (5)
  ≤ Σ_{t ∈ committed_CUT(q)} final_change_{t,x} + current_change_{t_current,x}   (6)
  ≤ Σ_{t ∈ committed_CUT(q)} final_change_{t,x} + max_change_{t_current,x}   (7)
  ≤ Σ_{t ∈ committed_CUT(q)} max_change_{t,x} + max_change_{t_current,x}   (8)
Whereas expression (5) is an exact expression of the inconsistency, expressions (6) through (8) can be viewed as different bounds on inconsistency_{q,x}.
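As a small numerical illustration of bounds (6) through (8) (our own example with hypothetical change values, not taken from the paper), suppose two members of CUT(q) have committed and a third is still writing.

# Sketch: evaluating bounds (6)-(8) for one data item
final_change = {'t1': 30, 't2': 10}            # committed members of CUT(q)
max_change   = {'t1': 50, 't2': 10, 't3': 25}  # worst-case change per ET
current_change_t3 = 20                         # t3 = t_current, still writing

bound6 = sum(final_change.values()) + current_change_t3    # 60
bound7 = sum(final_change.values()) + max_change['t3']     # 65
bound8 = sum(max_change.values())                          # 85 (committed max
                                                           #  changes plus t3's)
print(bound6, bound7, bound8)  # each successive bound is weaker but
                               # requires less information about the updates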
We are now in a position to relate the inconsistency bound to the conflict-based definition of ESR given in Section 2. Recall the definitions of C_SR and C_ESR: a pair of transactions have a C_SR relationship but not a C_ESR relationship iff one of them is a query and the other is an update and the import limits are not violated. Let us focus on C_SR relationships induced by operations on x. Given (8), each of the update transactions t_i that appears in the pairs that belong to C_SR but not to C_ESR contributes an inconsistency of at most max_change_{t_i,x} to the value of x read by q.
So far we have considered the case when all transactions commit. As stated by the following theorem, the abortion of update transactions has the effect of increasing the inconsistency imported by a query without changing the value of x.
Theorem 3 The maximum increase in imported inconsistency caused by aborted transactions is given by max_{t ∈ aborted CUT(q)} max_change_{t,x}.
Proof: Suppose transactions t_1 to t_{i−1} have committed and then t_i begins but subsequently aborts. In addition to the inconsistency due to t_1 to t_{i−1}, derived earlier, if q reads x any time during t_i's execution, it will experience an additional inconsistency of at most max_change_{t_i,x} before t_i aborts, whereby the changes made by t_i are obliterated and thus subsequent updates will increase the value of x only with respect to that resulting from t_1 to t_{i−1}.
Suppose all the transactions in CUT(q) that follow t_i commit. Then max_change_{t_i,x} is the only increase to the inconsistency due to aborted transactions, and hence the theorem holds.
Suppose instead that t_{i+1} to t_{j−1} commit and t_j aborts. When q reads x after t_j begins, x will only reflect the changes done by (1) transactions t_1 to t_{i−1}, (2) transactions t_{i+1} to t_{j−1}, and (3) t_j. (3) is bounded by max_change_{t_j,x}. If this is larger than max_change_{t_i,x}, then max_change_{t_j,x} is the increase in inconsistency due to the aborted transactions t_i and t_j, and hence the theorem follows for two transaction aborts. If this is smaller, max_change_{t_i,x} remains the upper bound on the increase. That is, the maximum of the two is the effective increase in inconsistency due to two transaction aborts. This proof extends easily if further transactions abort.
4 Ensuring Epsilon Serializability: Pre-Conditions for Operations
To make sure that all histories are ESR (as per Definition 4), we should ensure that no cycles are formed with C_ESR edges in them. But what if we do not even allow C_ESR relations to form?
Just as SR can be realized by preventing the formation of serialization orderings (i.e., C_SR relations), ESR can be realized by preventing the formation of C_ESR relations. Thus, if we ensure that a query is always safe, i.e., that (import_inconsistency_{q,x} ≤ import_limit_{q,x}) is an invariant, then ESR is guaranteed. Specifically, the inequality must hold (before and) after every read and write operation as well as every transaction management event.
We derive the preconditions for performing the operations. These are sufficient to ensure that the import limits of transactions are not exceeded. The preconditions will in turn be used to show how the transaction executions should be managed.
Let begin_write_{t,x} denote the attempt by ET t to begin its write interval with respect to x. begin_read_{t,x} is invoked by t to begin its read interval with respect to x. Let end_write_{t,x} denote that t has completed its writes on x. We will now consider the semantics of begin_write, begin_read, end_write, end_read, read and write. There are two situations to consider. The first is if a query ET q is already in progress (initially with committed_CUT(q) = ∅) when an update transaction's write interval begins. This may be followed by other update ETs before q commits. The second is if an update ET is in progress when the query begins. Recall that our attention is confined to a centralized database with a single transaction manager.
Let q be a query and t be an update ET. ← stands for assignment.
If query q is in progress,
begin_write_{t,x} ≡ (t_current ← t) ∧ (CUT(q) ← CUT(q) ∪ t)
end_write_{t,x} ≡ (t_current ← null) ∧ (committed_CUT(q) ← committed_CUT(q) ∪ t)
Otherwise, begin_write_{t,x} ≡ () and end_write_{t,x} ≡ ().
If an update transaction t is in progress, begin_read_{q,x} ≡ (t_current ← t) ∧ (CUT(q) ← t).
Otherwise, begin_read_{q,x} ≡ ().
Here are the semantics of the other operations:
read_{t,x} ≡ ()
read_{q,x} ≡ (import_inconsistency_{q,x} ← inconsistency_{q,x})
write_{t,x}(Δ) ≡ (x_current ← x_current + Δ)
Δ is a parameter to the write operation that denotes the amount by which x is modified when the write occurs.
It is important to note from the above semantics that a query imports inconsistency only
if it performs a read operation. That is, the inconsistency in the value of x due to updates
translates to imported inconsistency only when read operations occur.
We will now establish the preconditions necessary to maintain (3), i.e.,
import_inconsistency_{q,x} ≤ import_limit_{q,x}   (9)
Case 1: Preconditions only on read_{q,x} Operations.
Given that inconsistency is imported by q only when it performs a read, the following precondition is all we need to maintain (9):
inconsistency_{q,x} ≤ import_limit_{q,x}.
From (5), this implies the precondition
distance(x_q^{initial}, x_current) ≤ import_limit_{q,x}.
Every read operation must be intercepted by the transaction management mechanism to ensure that the above precondition holds. If the predicate does not hold, the read by the query will have to be aborted or delayed. If q is a long query, this has performance implications. This is the motivation for examining other possible ways to maintain (9).
Case 2: Preconditions on write t;x Operations and begin read q;x Operations
Suppose we satisfy the following invariant:
inconsistency_{q,x} ≤ import_limit_{q,x}, i.e., distance(x_q^{initial}, x_current) ≤ import_limit_{q,x}.
Note that this is a stronger invariant than (9), i.e., if this is maintained, then (9) will be maintained. (This has a negative side-effect: if the query does not read x at all, then the allowable inconsistency on x has been restricted unnecessarily.) Given the semantics of the various operations, and the expression (5) for inconsistency, the following precondition on write_{t,x}(Δ) results:
distance(x_q^{initial}, x_current + Δ) ≤ import_limit_{q,x}
and, given that x is in a metric space, this implies the precondition
distance(x_q^{initial}, x_current) + |Δ| ≤ import_limit_{q,x}
where |Δ| denotes the absolute value of Δ. (We also use |S| to denote the cardinality of a set S. The meaning should be obvious from the context.) This says that a write should be allowed only if the increase in inconsistency caused by the intended increment will not violate the limit imposed on the inconsistency imported by q.
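A sketch of how this write precondition might be enforced follows (our own illustration; the function and variable names are assumptions, not part of the paper): a write of increment Δ by an update ET is admitted only if the already-observed distance plus |Δ| stays within every active query's import limit.

# Sketch: admitting a write under the precondition
# distance(x_q_initial, x_current) + |delta| <= import_limit_{q,x}
def admit_write(x_current, delta, active_queries):
    # active_queries: list of (x_q_initial, import_limit) pairs for queries
    # whose read interval on x is open
    for x_q_initial, import_limit in active_queries:
        if abs(x_current - x_q_initial) + abs(delta) > import_limit:
            return False          # delay or abort the write instead
    return True

print(admit_write(x_current=120, delta=+70, active_queries=[(50, 150)]))  # True
print(admit_write(x_current=120, delta=+90, active_queries=[(50, 150)]))  # False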
Even though no precondition is necessary for a read, the following precondition is required for begin_read_{q,x} when it is invoked while an update transaction t is already in progress:
distance(x_t^{initial}, x_current) ≤ import_limit_{q,x}.
Note that x_q^{initial} = x_t^{initial} when q begins its read interval while t's writes are in progress.
This says that if the changes that have already been done by the update transaction exceed
the import limit imposed on q then the query must not be allowed to begin its read on x.
The above preconditions imply that for each query q, we should maintain x_q^{initial}. This can be avoided by maintaining an even stronger invariant, corresponding to the inconsistency bound (6), i.e., by maintaining
Σ_{t ∈ committed_CUT(q)} final_change_{t,x} + current_change_{t_current,x} ≤ import_limit_{q,x}.
This imposes the following precondition on write_{t,x}(Δ):
Σ_{t ∈ committed_CUT(q)} final_change_{t,x} + current_change_{t_current,x} + |Δ| ≤ import_limit_{q,x}
and the following precondition on begin_read_{q,x}:
current_change_{t_current,x} ≤ import_limit_{q,x}.
This implies that write operations by update ETs and requests by query ETs to begin
their reading have to be monitored to ensure that they are allowed only when the above
preconditions hold.
Both these invariants require maintenance of the most recent committed state of x. This
is available anyway. However, the need to check every write by an update ET implies
increased overheads and may also result in aborts or delays of update ETs in progress. Both
can be avoided as shown below if an even stronger invariant is maintained.
Case 3: Preconditions on begin read q;x and begin write t;x Operations
Consider the following invariant corresponding to inconsistency bound (7):
Σ_{t ∈ committed_CUT(q)} final_change_{t,x} + max_change_{t_current,x} ≤ import_limit_{q,x}.
This inequality turns out to be the precondition for begin_write_{t,x}. begin_read_{q,x} has the following precondition:
max_change_{t_current,x} ≤ import_limit_{q,x}   (10)
This implies that unlike the previous case, no preconditions are associated with individual
writes by update transactions. While this reduces transaction management overheads, it
does introduce some pessimism into the decision making since worst case changes to x by t
are assumed.
The precondition for begin_write_{t,x} requires knowledge about the final change of transactions. This can be avoided if the following invariant, corresponding to inconsistency bound (8), is maintained:
Σ_{t ∈ committed_CUT(q)} max_change_{t,x} + max_change_{t_current,x} ≤ import_limit_{q,x}   (11)
(11) is also the precondition for begin_write_{t,x}. (10) stays as the precondition for begin_read_{q,x}.
Suppose max_change_{t_i,x} is the same for all update ETs t_i. Then a given import_limit_{q,x} for a query q translates into a limit on the cardinality of CUT(q).
In terms of the impact of the above derivation on an implementation of ESR, note that
we progressed from preconditions on individual read and write operations to preconditions
for read and write intervals to begin. The latter introduce more pessimism because of the assumptions that have to be made about the amount of changes done by a given update
transaction.
Modeling query and transaction executions in terms of their read and write intervals
allows us to capture different types of concurrency control techniques. For instance, if the
begin events correspond to the acquisition of locks and the end events correspond to the
release of locks, we get lock based protocols. Assume we use the preconditions on these
events to ensure bounds. This is the basis for the lock-based implementation in [29] wherein
precondition (11) for begin write corresponds to LOK-2 and precondition (10) for begin read
corresponds to LOK-1.
However, the above derivation is not restricted to lock-based implementations. In optimistic
concurrency control, writes are done after the validation phase. In this case, precondition
checking for writes will be part of the validation phase of an update transaction.
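The lock-time variant (Case 3) can be sketched as follows (ours, loosely modeled on the LOK-1/LOK-2 style of checks cited above; all names and the bookkeeping structure are assumptions): preconditions are evaluated only when read or write intervals begin, using a declared worst-case change per update ET.

# Sketch: Case 3 precondition checks at the start of read/write intervals,
# using a declared worst-case change (max_change) per update ET on x.
class DataItemControl:
    def __init__(self):
        self.readers = {}   # query id -> [import_limit, accumulated bound]

    def begin_read(self, q, import_limit, writer_max_change):
        # precondition (10): the in-progress writer alone must fit the limit
        if writer_max_change > import_limit:
            return False
        self.readers[q] = [import_limit, writer_max_change]
        return True

    def begin_write(self, max_change):
        # precondition in the spirit of (11): admitting this writer must keep
        # every active reader within its import limit
        for limit, used in self.readers.values():
            if used + max_change > limit:
                return False
        for entry in self.readers.values():
            entry[1] += max_change
        return True

ctl = DataItemControl()
print(ctl.begin_read('q1', import_limit=10, writer_max_change=0))  # True
print(ctl.begin_write(max_change=4))                               # True
print(ctl.begin_write(max_change=4))                               # True
print(ctl.begin_write(max_change=4))                               # False, exceeds 10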
5 Inconsistency in the Results of a Query
Since a query, by definition, does not update data, it does not affect the permanent state of
the database. Furthermore, we have assumed that updates do not import inconsistency, i.e.,
they operate on consistent database states. Thus, assuming that each update ET maintains
database consistency, updates also do not affect the consistency of the database. The only
effect of the updates is on the inconsistency of the data read by queries. In Section 3
we derived expressions for the amount of inconsistency imported by a query. Given this
inconsistency, the only observable effect of a query ET is on the results produced by a query.
In other words, the inconsistency imported by a query can percolate to the results of a query,
in ways that are obviously dependent on the manner in which the query utilizes the values
read.
This section is devoted to determining the effect of the inconsistency of data read by a
query on its results. In general, a small input inconsistency can translate into an arbitrarily
large result inconsistency. Therefore, we study the properties of a query that make the result
inconsistency more predictable.
First we establish some terminology. Consider the situation where a query q reads data items x_1, x_2, ..., x_n and produces a result based on the values read. In general, the result of such a query can be stated as a function of the form
result_q = g(f_1(x_1), f_2(x_2), ..., f_n(x_n))
where g denotes a query ET and the f_i's are functions such that f_i : S_DB → R_f, with R_f the range of f_i. We assume that R_f is also a metric space. In practice, typically R_f is a subset of S_DB. For example, aggregate functions and queries on the database usually return a value in S_DB.
Focusing on monotonic queries, in Section 5.1 we derive the inconsistency in the result
of a query and show that even though the inconsistency can be bound, the bound may
not be tight. Suppose, similar to import limit and export limit, a limit is placed on the
inconsistency in the result of a query. In Section 5.2, we derive the preconditions on ET
operations imposed by such a limit. In Section 5.3 a class of queries called bounded queries
is considered. Section 5.4 examines steady queries and discusses how queries can be designed
to have tighter inconsistency bounds thereby requiring less restrictive preconditions.
5.1 Monotonic Queries
The first important class of queries consists of monotonic functions. A function f is monotonically increasing if x ≤ y implies f(x) ≤ f(y). A function g is monotonically decreasing if x ≤ y implies g(x) ≥ g(y). A function is called monotonic if it is either monotonically increasing or decreasing. Without loss of generality, in the rest of this section we describe only monotonically increasing functions.
The result returned by a monotonic ET q, assuming that the value of x_i read by q is given by x_{i,read}, is
result_q = g(f_1(x_{1,read}), ..., f_n(x_{n,read}))
where, if max_inconsistency_{x_i} is the maximum inconsistency in the value of x_i read by q (given by Theorem 2 of Section 3), x_{i,initial} is the value of x_i when the first update ET in CUT(q) begins, and x_{i,min} = x_{i,initial} − max_inconsistency_{x_i} and x_{i,max} = x_{i,initial} + max_inconsistency_{x_i}, then
x_{i,min} ≤ x_{i,read} ≤ x_{i,max}   (13)
Thus, since g and the f_i's are monotonic, the result of the query can lie between
min_result_q = g(f_1(x_{1,min}), ..., f_n(x_{n,min}))   (14)
and
max_result_q = g(f_1(x_{1,max}), ..., f_n(x_{n,max}))   (15)
Note that if f_i is not monotonic, the smallest (largest) value of f_i need not correspond to the smallest (largest) value of x_i.
Thus, by our definition of inconsistency,
result_inconsistency_q ≤ distance(min_result_q, max_result_q)   (16)
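The interval defined by (14) and (15) can be computed mechanically; the sketch below (ours, with a hypothetical instance) does so for monotonically increasing f_i and g, given per-item read-value ranges.

# Sketch: computing min_result and max_result per (14)-(15) for monotonically
# increasing f_i and g; 'ranges' holds (x_i_min, x_i_max) pairs.
def result_interval(g, fs, ranges):
    lo = g([f(r[0]) for f, r in zip(fs, ranges)])
    hi = g([f(r[1]) for f, r in zip(fs, ranges)])
    return lo, hi

# hypothetical instance: g is a sum, each f_i is the identity
fs = [lambda x: x] * 3
print(result_interval(sum, fs, [(48, 52), (99, 101), (10, 10)]))  # (157, 163)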
Let us look at some examples:
Example 1: n = 1; f_1 and g are the identity function. This corresponds to the single-data-element case, and hence the inconsistency in the result of q can be seen to be given by (13).
Example 2: n = 20; the f_i's are the identity function and g is the sum. In this case, as one would expect, the result of the query, according to (14) and (15), will lie between Σ_{i=1}^{20} x_{i,min} and Σ_{i=1}^{20} x_{i,max}; the width of this interval is 2 Σ_{i=1}^{20} max_inconsistency_{x_i}.
Example 3: n = 20; f_i(x_i) = x_i · P(x_i) for a predicate P, and g is the sum. (A predicate has a value 1 if it is true, otherwise 0.) In this case, the result of the query, according to (14) and (15), will lie between g(f_1(x_{1,min}), ..., f_20(x_{20,min})) and g(f_1(x_{1,max}), ..., f_20(x_{20,max})).
Example 4: This is a concrete case of Example 3. Consider a bank database with 20 accounts, numbered 1-20. Each account with an odd number happens to have $5,001, and even-numbered accounts have $4,999. The only update transaction in the system is Transfer(Acc_i, Acc_j), which transfers $2 from Acc_i into Acc_j. The query ET sums up all the deposits that are greater than $5,000. Suppose that the first set of transactions executed by the system are Transfer(Acc_{2i−1}, Acc_{2i}), 1 ≤ i ≤ 10; after these finish, the following are executed: Transfer(Acc_{2i}, Acc_{2i−1}), 1 ≤ i ≤ 10.
These update transactions maintain the total amount of money in the database, and it is easy to see that a serializable execution of the query ET should return $50,010, since at any given time exactly 10 accounts have more than $5,000.
This query will produce a result between $0 and $100,080, since it is exactly Example 3.
The range of the result does include the serializable result of $50,010. However, given that the range is not very "tight", it is too pessimistic. This occurs because the inconsistency caused by the updates percolates, in a rather drastic manner, to the results of the query. In Section 5.4, we identify a class of queries for which tight bounds on the results of a query exist.
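To see where the $0 and $100,080 figures can arise under bound (8) (this is our reconstruction of the arithmetic, since the intermediate values are not spelled out above): each account is written by two transfers of $2, so max_inconsistency_{x_i} = $2 + $2 = $4. Then x_{i,max} = $5,001 + $4 = $5,005 for odd-numbered accounts and $4,999 + $4 = $5,003 for even-numbered ones, giving max_result_q = 10 × $5,005 + 10 × $5,003 = $100,080; x_{i,min} = $5,001 − $4 = $4,997 and $4,999 − $4 = $4,995 both fall below the $5,000 threshold, giving min_result_q = $0.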
One other point to note here is that even this bound requires knowledge of x i;initial , the
value of x i when the first ET in CUT(q) begins. This has practical implications. Specifically,
before an update is begun, the data values may have to be logged in order to derive the
inconsistency for the queries that may subsequently begin. This is the case of systems that
require UNDO capability (using the STEAL buffering policy [12]).
Given that the lower bound on the result of the above query is 0, one may be tempted
to take the following solution: Assume that x i;initial is the smallest value x i can take, i.e., 0.
It is not too difficult to see why this will not produce the correct range for the above query's
result.
5.2 Pre-Conditions for Monotonic Queries
Suppose result_inconsistency_limit_q denotes the maximum inconsistency that an application can withstand in the result of a query q. Then
result_inconsistency_q ≤ result_inconsistency_limit_q
is an invariant. Just as we derived preconditions to maintain import_limit_{q,x} and export_limit_{q,x}, we can derive preconditions to maintain the above invariant.
For instance, consider the expression (8) for max_inconsistency_x. From this, given (16) and the semantics of ET operations (see Section 3), we have the following precondition for begin_write_{t,x_i}:
... Σ_{t ∈ committed_CUT(q)} ... ≤ result_inconsistency_limit_q
and the following precondition for begin_read_{q,x_i}:
... ≤ result_inconsistency_limit_q
In a similar manner, preconditions can be derived in case the other expressions for inconsistency
are used.
5.3 Bounded Queries
We say that a function f is bounded if there is a maximum bound on the result of f. It is easy
to see that we can calculate bounds on the inconsistency in the results of a query composed
from bounded functions.
Example 5: Consider the following variation of Example 4. The query ET sums up all the deposits that are not greater than $5,000. For this query, n = 20 and g is the sum, but the f_i's are not monotonic, because when x_i increases from $4,999 to $5,001, f_i(x_i) decreases from $4,999 to $0. So the expressions derived for result inconsistency in Section 5.2 do not apply.
It is easy to see that a serializable execution of the query ET should return $49,990, since at any given time exactly 10 accounts have balance ≤ $5,000. It is also not difficult to see that for the above ET query, the smallest possible result is $0 and the largest possible result is $99,980.
Even though the f_i's are not monotonic, we now show that it is possible to obtain bounds on the query results. Let min_f_i denote the smallest value of f_i for any value of x_i in [x_{i,min}, x_{i,max}], and max_f_i denote the largest value of f_i for any value of x_i in [x_{i,min}, x_{i,max}]. Then, as long as g is monotonic, the result of the query can lie between g(min_f_1, ..., min_f_n) and g(max_f_1, ..., max_f_n).
Let us return to Example 5. In this case, min_f_i = $0 and max_f_i = $5,000; hence, the result of the query can lie between $0 and $100,000. Since the actual result of the query lies between $0 and $99,980, using the maximum and minimum possible f_i values leads to an overestimate of the inconsistency in the query results.
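A small sketch (ours) of the bounded-function estimate used above: scan each x_i's possible interval for the extreme values of f_i, then combine the extremes with a monotonic g.

# Sketch: interval bounds on a query result when the f_i need not be
# monotonic but each x_i is known to lie in [x_min, x_max] (integer dollars).
def f_bounds(f, x_min, x_max):
    vals = [f(x) for x in range(x_min, x_max + 1)]
    return min(vals), max(vals)

# Example 5's per-account function: keep balances that are <= $5,000
f = lambda x: x if x <= 5000 else 0

lo_odd,  hi_odd  = f_bounds(f, 4997, 5005)   # odd accounts, inconsistency $4
lo_even, hi_even = f_bounds(f, 4995, 5003)   # even accounts
print(10 * lo_odd + 10 * lo_even, 10 * hi_odd + 10 * hi_even)   # 0 100000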
A generalization of bounded functions and monotonic functions is the class of functions
of bounded variation. To avoid confusion for readers familiar with mathematical analysis,
we follow closely the usual definition of these functions in compact metric spaces.
Definition 6 If [a, b] is a finite interval in a metric space, then a set of points P = {x_0, x_1, ..., x_n} satisfying the inequalities a = x_0 < x_1 < ... < x_n = b is called a partition of [a, b]. The interval [x_{k−1}, x_k] is called the k-th subinterval of P, and we write Δx_k = x_k − x_{k−1}, so that Σ_{k=1}^{n} Δx_k = b − a.
Definition 7 Let f be defined on [a, b]. If P = {x_0, x_1, ..., x_n} is a partition of [a, b], write Δf_k = f(x_k) − f(x_{k−1}), for k = 1, 2, ..., n. If there exists a positive number M such that Σ_{k=1}^{n} |Δf_k| ≤ M for all partitions of [a, b], then f is said to be of bounded variation on [a, b].
It is clear that all bounded functions are of bounded variation; this is the case in Example 5. Furthermore, all monotonic functions are also of bounded variation. This happens because for a monotonically increasing function f we have Δf_k ≥ 0 and therefore Σ_{k=1}^{n} |Δf_k| = Σ_{k=1}^{n} Δf_k = f(b) − f(a).
In general, for a function of bounded variation, the M bound can be used as an (over)estimate of result inconsistency given the interval [a, b] caused by input inconsistency. However, the examples above show that what we need is to restrict the forms of ET queries such that tighter bounds on result inconsistency can be found without overly restricting the types of queries allowed.
5.4 Steady Queries
Let D_S denote the set of distances defined by S_DB and D_R the set of distances defined by R_f. We say that f is steady if for every ε ∈ D_R with ε > ε_0, there exists a δ ∈ D_S such that distance(u, v) ≤ δ implies distance(f(u), f(v)) ≤ ε. Steady functions on discrete metric spaces are analogous to continuous functions on compact sets. The definition is similar, except that we exclude a fixed number of small ε due to the discrete nature of S_DB. Informally, if ε ≤ ε_0, δ is taken to be zero.
The importance of steady functions is that the application designer may specify a limit on the result inconsistency, result_inconsistency_limit (ε), and the TP system can calculate the limit on the imported inconsistency, max_inconsistency (δ), that guarantees the specified limit on the result inconsistency. Section 5.2 shows how this calculation can be done for monotonic functions. Note that every monotonic function can be steady with a convenient choice of ε_0. However, the smaller ε_0 is, the tighter the bound on δ is. In the following example, the bound is tight because ε_0 = 0.
Example 6: Consider a query ET that returns the balance of a bank account. If an update is executing, say transferring some money into the account, then the query result inconsistency is equal to the imported inconsistency, and ε = δ.
For an example where ε_0 is large, consider Example 4. When an account balance is actually $5,000, an input inconsistency of $1 may change the result by $5,000. Therefore we have ε_0 = $5,000, since a smaller ε requires δ = 0.
One way to handle such a situation is to reduce or eliminate the imported inconsistency in the data item that causes a large ε_0. For instance, suppose that result_q = g(f_1(x_1), f_2(x_2)) and that a large ε_0 is due to x_1. We should then tighten the import limit for x_1 and allow inconsistency only for x_2. Consider the following example, which is a simple variation of Example 4.
Example 7: The query ET returns the checking account balance of customers that have savings accounts with balance greater than $5,000. Note that in this example, x_1 refers to the savings account and x_2 to the checking account. In this case, we may specify import_limit = 0 for the savings account balance and import_limit = $100 for the checking account balance. This way, we avoid the large ε_0 with respect to x_1 but maintain tight control over result_inconsistency, since the function that returns the checking account balance is a steady function with ε = δ (from Example 6).
Being able to calculate ffl from ffi and vice-versa are properties of ET queries that allow
the system to maintain tight bounds on result inconsistency. Functions of bounded variation
and steady functions are abstract classes of functions that have these properties. Clearly,
more elaborate characterization of these functions defined on discrete metric spaces will be
useful.
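As a sketch of how a steady function lets the system translate a result limit into an import limit (our own illustration; the ε_0 values and the ε-to-δ maps are taken from the examples above, everything else is an assumption):

# Sketch: deriving an import limit (delta) from a result limit (epsilon)
# for a steady function, given its epsilon_0 and its epsilon -> delta map.
def import_limit_for(result_limit, epsilon_0, delta_of):
    # below epsilon_0 no positive delta can be guaranteed, so allow nothing
    return 0 if result_limit <= epsilon_0 else delta_of(result_limit)

# Example 7's checking-balance function: epsilon = delta, epsilon_0 = 0
print(import_limit_for(100, epsilon_0=0, delta_of=lambda e: e))      # 100
# Example 4's threshold-sum term on one account: epsilon_0 = 5000
print(import_limit_for(100, epsilon_0=5000, delta_of=lambda e: e))   # 0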
6 Related Work
6.1 General Weak Consistency Criteria
Several notions of correctness weaker than SR have been proposed previously. A taxonomy
of these correctness criteria is given in [23]. Here we contrast those that are closely related
to ESR with ESR.
Gray's different degrees of consistency [11] is an example of a coarse spectrum of consis-
tency. Of specific interest to us is degree 2 consistency which trades off reduced consistency
for higher concurrency for queries. Since degree 2 allows unbounded inconsistency, degree 2
queries become less accurate as a system grows larger and faster. In general, ESR offers a
much finer granularity control than the degrees of consistency.
Garcia-Molina and Wiederhold [10] have introduced the weak consistency class of read-only
transactions. In contrast to their WLCA algorithm, ESR is supported by many divergence
control methods [29]. Similarly, Du and Elmagarmid [7] proposed quasi-serializability
(QSR). QSR has limited applicability because of the local SR requirements despite unbounded
inconsistency. Korth and Speegle [16] introduced a formal model that includes
transaction pre-conditions and post-conditions. In contrast, ESR refers specifically to the
amount of inconsistency in state space.
Sheth and Rusinkiewicz [26] have proposed eventual consistency, similar to identity
connections introduced by Wiederhold and Qian [28], and lagging consistency, similar to
asynchronously updated copies like quasi-copies [1]. They discuss implementation issues
in [24, 25]. In comparison, ESR achieves similar goals but has a general approach based on
state space properties and functional properties. Barbara and Garcia-Molina [2] proposed
controlled inconsistency, which extends their work on quasi-copies [1]. Their demarcation
protocol [3] can be used for implementing ESR in distributed TP systems. ESR is applicable
to arithmetic and other kinds of consistency constraints.
6.2 Asynchronous Transaction Processing
Garcia-Molina et al. [9] proposed sagas that use semantic atomicity [8] defined on transaction
semantics. Sagas differ from ESR because an unlimited amount of inconsistency (revealed
before a compensation) may propagate and persist in the database. Levy et al [19] defined
relaxed atomicity and its implementation by the Polarized Protocol. ESR is defined over
state space properties and less dependent on application semantics.
An important problem in asynchronous TP is to guarantee uniform outcome of distributed
transactions in the absence of a commit protocol. Unilateral Commit [13] is a
protocol that uses reliable message transmission to ensure that a uniform decision is carried
out asynchronously. Optimistic Commit [18] is a protocol that uses Compensating Transactions
[15] to compensate for the effects of inconsistent partial results, ensuring a uniform
decision. Unilateral Commit and Optimistic Commit can be seen as implementation techniques
for ESR-based systems.
Another way to increase TP concurrency is Escrow Method [20]. Like the escrow method,
ESR also uses properties of data state space, but ESR does not rely on operation semantics to
preserve consistency. Similarly, data-value partitioning [27] increases distributed TP system
availability and autonomy. ESR can be used in the modeling and management of escrow
and partitioned data-values.
7 Conclusions
Previous ESR papers discussed ESR in informal terms by motivating it via specific applications
[21, 22] and by presenting implementation-oriented considerations [29]. An evaluation
of the performance improvement due to ESR is reported in [14].
In this paper, we have examined epsilon serializability (ESR) from first principles. We
showed precisely how ESR is related to SR, for example, which conflicts considered by SR
are ignored by ESR. A conflict based specification of ESR using the ACTA formalism was
employed to bring out the differences between SR and ESR.
We began our formalization of query behavior by deriving the formulae that express
the inconsistency in the data values read by a query. From these expressions we derived
the preconditions, which depend on the data values and the import limits, for the read and write operations invoked by transactions and for transaction management events. In other
words, from a precise definition of ETs and ESR, we have been able to derive the behavioral
specifications for the necessary transaction management mechanisms. These form the second
contribution of this paper. Results showed that more flexible transaction management
techniques, than the ones discussed previously, are possible.
Another important aspect of this paper is the derivation of expressions for the inconsistency
of the results of queries. We showed that since arbitrary queries may produce results
with large inconsistency, it is important to restrict ET queries to have certain properties
that permit tight inconsistency bounds. Towards this end, we came up with different types
of queries that allow us to bound the result inconsistency, and in some cases, to find tight
bounds as well. Clearly, more work is needed in this area since generality of the queries has
to be traded off against the tightness of the result inconsistency.
Among the other active topics of research is the formal treatment of general ETs that
both import and export inconsistency. Also, the effect of relaxing some of the assumptions,
for instance, that read set of a query is unaffected by the inconsistency, needs to be studied.
Acknowledgements
The authors thank P. Chrysanthis, H.V. Jagadish, V. Wolfe, and the referees for their comments
on previous versions of this paper.
--R
Data caching issues in an information retrieval systems.
The case for controlled inconsistency in replicated data.
The Demarcation Protocol: A Technique for Maintaining Arithmetic Constraints in Distributed Database Systems
A formalism for extended transaction models.
ACTA: A framework for specifying and reasoning about transaction structure and behavior.
ACTA: The Saga continues.
Quasi serializability: a correctness criterion for global concurrency control in InterBase.
Using semantic knowledge for transaction processing in a distributed database.
Granularity of locks and degrees of consistency in a shared data base.
Principles of transaction-oriented database recovery
Unilateral commit: A new paradigm for reliable distributed transaction processing.
"Performance Characteristics of Epsilon Serializability with Hierarchical Inconsistency Bounds"
A formal approach to recovery by compensating transactions.
Formal model of correctness without serializability.
Bounded Ignorance in Replicated Systems.
An optimistic commit protocol for distributed trans-action management
A theory of relaxed atomicity.
The escrow transactional method.
Replica control in distributed systems: An asynchronous approach.
Autonomous transaction execution with epsilon-serializability
"In Search of Acceptability Criteria: Database Consistency Requirements and Transaction Correctness Properties"
Redundant data management in Bellcore and BCC databases.
Maintaining consistency of interdependent data in multidatabase systems.
Management of interdependent data: Specifying dependency and consistency requirements.
Modeling asynchrony in distributed databases.
Divergence control for epsilon-serializability
--TR
| concurrency control;epsilon serializability;formal techniques;ACTA;transaction processing |
627724 | A Temporal Access Control Mechanism for Database Systems. | AbstractThis paper presents a discretionary access control model in which authorizations contain temporal intervals of validity. An authorization is automatically revoked when the associated temporal interval expires. The proposed model provides rules for the automatic derivation of new authorizations from those explicitly specified. Both positive and negative authorizations are supported. A formal definition of those concepts is presented in the paper, together with the semantic interpretation of authorizations and derivation rules as clauses of a general logic program. Issues deriving from the presence of negative authorizations are discussed. We also allow negation in rules: it is possible to derive new authorizations on the basis of the absence of other authorizations. The presence of this type of rules may lead to the generation of different sets of authorizations, depending on the evaluation order. An approach is presented, based on establishing an ordering among authorizations and derivation rules, which guarantees a unique set of valid authorizations. Moreover, we give an algorithm detecting whether such an ordering can be established for a given set of authorizations and rules. Administrative operations for adding, removing, or modifying authorizations and derivation rules are presented and efficiency issues related to these operations are also tackled in the paper. A materialization approach is proposed, allowing to efficiently perform access control. | Introduction
In many real-world situations, permissions have a temporal dimension, in that they are usually limited in time
or may hold only for specific periods of time. In general, however, access control mechanisms provided as part
of commercial data management systems do not have temporal capabilities. In a typical commercial Relational
DBMS (RDBMS), for example, it is not possible to specify, by using the authorization command language, that
a user may access a relation only for a day or a week. If such a need arises, authorization management and access
control must be implemented at application program level. This approach makes authorization management very
difficult, if possible at all. Thus the need for adding temporal capabilities to access control models appears very
strong, as also pointed out by Thomas and Sandhu in [11].
In this paper, we present an authorization model that extends conventional authorization models, like those
provided by commercial RDBMSs, with temporal capabilities. Our temporal authorization model is based on
two main concepts. The first concept is the temporal interval for authorizations. Each authorization has a
time interval associated with it, representing the set of time instants for which the authorization is granted.
An authorization expires after the associated time interval has elapsed. The second concept is the temporal
dependency among authorizations. A temporal dependency can be seen as a rule allowing an authorization to be
derived from the presence (or absence) of another authorization. A temporal dependency can be used, for example,
to specify that a user has an authorization as long as another user has the same or a different authorization. Four
different temporal dependency operators are provided in our model. Temporal dependencies are expressed in
form of derivation rules. Such rules may be parametric, in that a single rule may denote a set of dependencies.
For example, a single derivation rule may specify that a user can read all the files that another user can read,
relatively to an interval of time.
Besides these temporal capabilities, the model supports both positive and negative authorizations. The
capability of supporting explicit denials, provided by negative authorizations, can be used for specifying exceptions
and for supporting a stricter control in the case of decentralized authorization administration [5]. The combination
of positive/negative authorizations with temporal authorizations results in a powerful yet flexible authorization
model.
A critical issue in our model is represented by the presence of derivation rules that allow to derive new
authorizations on the basis of the absence of other authorizations. From one point of view these rules provide
more expressiveness for the representation of temporal dependencies. From another point of view they introduce
the problem of generating a unique set of authorizations. Indeed, a given set of authorizations and derivation
rules may generate different sets of authorizations, depending on the evaluation order. To avoid this problem
we impose a syntactical restriction on the set of derivation rules and we show how this condition guarantees the
uniqueness of the set of derived authorizations. In the paper, we show also how this problem is related to the
problem of negation in logic programming.
Another issue discussed in the paper is the efficiency of the access control. Whenever an access must be
enforced, the system must check whether the appropriate authorization is present in the authorization catalogs
or whether it can be inferred from the authorizations in the catalogs through the derivation rules. The activity
of inferring an authorization can be rather expensive, like performing a query on a deductive database. Thus, a
materialization approach has been adopted. This approach is very similar to the view materialization approach
used in deductive and relational databases [6, 8]. Under such an approach, the results of a view are calculated and
stored when the view is defined, rather than being recomputed each time the view is queried. We use a similar
approach: each time a new authorization is added, all authorizations that can be inferred from it are calculated
and stored into the authorization catalogs. Thus, access control is very efficient, since there is no difference in
costs between explicit authorizations and derived authorizations. Note that administrative operations become
more expensive, but they are much less frequent than access control. Moreover, we use proper maintenance
algorithms to update the materialized authorizations without need of recomputing them all upon execution of
administrative requests.
Time issues in access control and derivation rules for authorizations have come to the attention of the
researchers only recently. The Kerberos system [10], based on the client-server architecture, provides the notion
of ticket, needed for requiring a service to the server, with an associated validity time. The validity time is used
to save the client from the need to acquire a ticket for each interaction with the server. The ticket mechanism
is not used to grant accesses to the resources managed by the system. Rather, it is only used to denote that a
client has been authenticated by the authentication server. Thus, the scope of the temporal ticket mechanism is
very different from our access control model.
Woo and Lam in [13] have proposed a very general formalism for expressing authorization rules. Their
language to specify rules has almost the same expressive power of first order logic. A major issue in their
formalism is the tradeoff between expressiveness and efficiency which seems to be strongly unbalanced in their
approach. We think that it is important to devise more restricted languages focusing only on relevant properties.
The temporal authorization model we propose in this paper is a step in this direction.
A logic language for stating security specifications, based on modal logic, has been proposed by Abadi et al.
in [1]. However, their logic is mainly used to model concepts such as roles and delegation of authorities and their
framework does not provide any mechanism to express temporal operators for authorization derivation.
A preliminary version of the authorization model presented in this paper was presented by Bertino, Bettini
and Samarati in [4]. The model presented in this paper has a number of major differences with respect to
the previous model. The current model supports both positive and negative authorizations, and it provides
substantial extensions to derivation rules. In particular, in the current model, derivation rules also have temporal
interval of validity. This extension coupled with negative and positive authorizations leads to several interesting
questions concerning both theory and implementation, that we investigate in the current paper. We investigate
also efficiency issues, by proposing a materialization strategy for computing the set of valid authorizations and
by giving algorithms for the maintenance of such materialization.
In this paper, we only deal with discretionary access control and not with mandatory access control. Note,
however, that the majority of DBMS only provide discretionary access control. Therefore, since the focus of
our research is how to extend the authorization facilities provided by a conventional DBMS, we only address
discretionary access control. Recent multilevel DBMS (like Trusted Oracle [9]) provide mandatory access control
coupled with discretionary access control. The new features provided by our model could be orthogonally
incorporated into such systems as well.
The remainder of this paper is organized as follows: Section 2 describes the authorization model giving the
basic definitions and examples. In Section 3 we present the formal semantics for authorizations and derivation
rules and explain the problems due to the presence of negations in rules. A sufficient condition to guarantee the
presence of a unique set of derived authorizations, and an algorithm for checking this condition are given. In
Section 4 we show how all the valid authorizations can be computed. Administrative operations that allow the
users to add, remove, or modify temporal authorizations and rules are described in Section 5. Efficiency issues
concerning the need of updating the set of valid authorizations upon administrative operations are considered in
Section 6. For lack of space we refer the reader interested in proofs to [3].
2 The authorization model
In this section we illustrate our authorization model. To keep our authorization model general and thus applicable
to the protection of information in different data models, we do not make any assumptions on the underlying
data model against which accesses must be controlled and on the access modes users can exercise in the system.
The choice of the data model and of the access modes executable on the objects of the model is to be made when
the system is initialized.
In the following U denotes the set of users, O the set of objects, and M the set of access modes executable
on the objects.
Our model allows the specification of explicit authorizations , stating the permission or denial for users to
exercise access modes on objects, and of derivation rules stating the permission or denial for users to exercise access
modes on objects conditioned on the presence or the absence of other permissions or denials. Each authorization
and derivation rule has a time interval associated with it indicating the time at which the authorization/rule is
applicable.
We assume time to be discrete. In particular, we take as our model of time the natural numbers IN with the
total order relation <.
We are now ready to introduce temporal authorizations and derivation rules.
2.1 Temporal authorizations
In our model both positive and negative authorizations can be specified. Positive authorizations indicate permissions
whereas negative authorizations indicate denials for access.
Authorizations are formally defined as follows.
Definition 2.1 (Authorization) An authorization is a 5-tuple (s,o,m,pn,g) where:
s ∈ U is the user to whom the authorization is granted;
o ∈ O is the object to which the authorization refers;
m ∈ M is the access mode, or privilege, for which the authorization is granted;
pn ∈ {+,-} indicates whether the authorization is positive (+) or negative (-);
g ∈ U is the user who granted the authorization.
Tuple (s,o,m,+,g) states that user s has been granted access mode m on object o by user g. Tuple (s,o,m,-,g)
states that user s has been forbidden access mode m on object o by user g.
We consider a temporal constraint to be associated with each authorization. We refer to an authorization
together with a temporal constraint as a temporal authorization. Temporal authorizations are defined as follows.
Definition 2.2 (Temporal authorization) A temporal authorization is a pair (time, auth), where time is a
time interval [t_b,t_e], with t_b ∈ IN and t_e ∈ IN ∪ {∞}, and auth is an authorization.
Temporal authorization ([t_1,t_2],(s,o,m,pn,g)) states that user g has granted user s an authorization
(positive if pn = '+', negative if pn = '-') for access mode m on object o that holds between times t_1 and t_2.
For example, authorization ([10,50],(John,o_1,read,+,Bob)) states that John can read object o_1 between
instants 10 and 50 and that this authorization was granted by Bob.
Note that an authorization without any temporal constraint can be represented as a temporal authorization
whose validity spans from the time at which the authorization is granted to infinity.
In the following, given a temporal authorization A = ([t_b,t_e],(s,o,m,pn,g)), we denote with s(A), o(A), m(A),
pn(A), g(A), t_b(A), and t_e(A) respectively the subject, the object, the privilege, the sign of the authorization
(positive or negative), the grantor in A, and the starting and ending time of A.
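As a concrete reading of Definitions 2.1 and 2.2, the following Python sketch encodes authorizations and temporal authorizations as immutable records; the class names and the use of math.inf for the ∞ upper bound are our own choices, not part of the model.

from dataclasses import dataclass
from math import inf

@dataclass(frozen=True)
class Authorization:
    """The 5-tuple (s, o, m, pn, g) of Definition 2.1."""
    s: str    # subject to whom the authorization is granted
    o: str    # object to which the authorization refers
    m: str    # access mode (privilege)
    pn: str   # '+' for permission, '-' for denial
    g: str    # grantor

@dataclass(frozen=True)
class TemporalAuthorization:
    """The pair (time, auth) of Definition 2.2: auth holds at every instant in [tb, te]."""
    tb: int
    te: float   # a natural number, or inf for an unbounded authorization
    auth: Authorization

# Example: ([10,50], (John, o1, read, +, Bob))
a = TemporalAuthorization(10, 50, Authorization("John", "o1", "read", "+", "Bob"))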
2.2 Derivation rules
Additional authorizations can be derived from the authorizations explicitly specified. The derivation is based on
temporal propositions, used as rules, which allow new temporal authorizations to be derived on the basis of the
presence or the absence of other temporal authorizations. Derivation rules can be applied to both positive as well
as negative authorizations. Like authorizations, derivation rules have a time interval associated with them. The
time interval associated with a derivation rule indicates the set of instants in which the rule is applied.
Derivation rules can also contain variables in their specification. We refer to derivation rules where all the
terms in the authorizations are explicitly specified as ground derivation rules and to derivation rules containing
variables as parametric derivation rules .
2.2.1 Ground derivation rules
Ground derivation rules are defined as follows.
Definition 2.3 (Ground derivation rule) A ground derivation rule is defined as ([t_b,t_e], A_1 ⟨op⟩ A_2),
where [t_b,t_e] is the time interval associated with the rule, with t_b ∈ IN and t_e ∈ IN ∪ {∞}, A_1 and A_2 are
authorizations, g(A_1) is the user who specified the rule, and ⟨op⟩ is one of the following operators: whenever,
aslongas, whenevernot, unless.
Rule ([t_b,t_e], (s_1,o_1,m_1,pn_1,g_1) ⟨op⟩ (s_2,o_2,m_2,pn_2,g_2)) states that user s_1 is authorized (if pn_1 =
'+') or denied (if pn_1 = '-') access mode m_1 on object o_1 according to the presence or absence (depending on
the operator) of the authorization (s_2,o_2,m_2,pn_2,g_2).
The formal semantics of the temporal operators used in the derivation rules will be given in Section 3. Their
intuitive semantics is as follows:
whenever: We can derive A_1 for each instant in [t_b,t_e] for which A_2 is given or derived. For example, rule R_1
in Figure 1, specified by Sam, states that every time in [7,35] that Ann can read object o_1 thanks to an
authorization granted by Sam, also Chris can read object o_1.
aslongas: We can derive A_1 for each instant t in [t_b,t_e] such that A_2 is either given or derived for each instant from
t_b to t. Note that, unlike the whenever operator, the aslongas operator does not allow to derive A_1 at
an instant t in [t_b,t_e] if there exists an instant t', with t_b ≤ t' ≤ t, such that A_2 is not given and cannot
be derived at t'.
whenevernot: We can derive A_1 for each instant in [t_b,t_e] for which A_2 is neither given nor derived.
unless: We can derive A_1 for each instant t in [t_b,t_e] such that A_2 is neither given nor can be derived for each
instant from t_b to t. Note that, unlike the whenevernot, the unless operator does not allow to derive
A_1 at an instant t in [t_b,t_e] if there exists an instant t', with t_b ≤ t' ≤ t, such that A_2 is given or derived
at t'.
Example 2.1 Consider the authorizations and derivation rules illustrated in Figure 1. The following temporal
authorizations can be derived:
• from authorizations A_1 and A_2 and rule R_1;
• from authorization A_1 and rule R_2;
• ([41,∞], (John,o_1,read,+,Sam)) from rule R_3;
• ([5,9], (Bob,o_1,read,+,Sam)) from rule R_4;
• ([5,9], (Jim,o_1,read,+,Sam)) from rules R_4 and R_5.
(R1) ([7,35], (Chris,o_1,read,+,Sam) whenever (Ann,o_1,read,+,Sam))
(R2) ([10,35], (Matt,o_1,read,+,Sam) aslongas (Ann,o_1,read,+,Sam))
Figure 1: An example of authorizations and derivation rules
2.2.2 Authorizations and derivation rules specification
Before proceeding to illustrate the semantics of derivation rules and authorizations we need to make a remark on
authorizations and rules. In our model, only users explicitly authorized can specify authorizations and derivation
rules. Administrative privileges give users the authority of granting accesses on objects to users either directly
(explicit authorizations) or indirectly (through derivation rules). Three different administrative privileges are
considered: refer, administer, and own. The semantics of these privileges is as follows.
• refer: If a user has the refer privilege on an object, he can refer to the object in a derivation rule, i.e.,
the object can appear at the right of the temporal operator in a derivation rule specified by the user.
• administer: If a user has the administer privilege on an object, he can grant to and revoke from other
users authorizations to access the object (either explicitly or through rules).
• own: It indicates possession of an object. When a user creates an object he receives the own privilege on it.
The own privilege allows the user to grant and revoke access authorizations as well as to grant and revoke
administrative privileges (but own) on his object.
Administrative authorizations, i.e., authorizations for administrative privileges are not constrained to a
specific time interval but hold from the time at which they are specified until the time they are revoked by
the object's owner. However, for sake of simplicity and uniformity with respect to other authorizations, we
associate time intervals also to administrative authorizations. The time interval associated with an administrative
authorization spans from the time at which the authorization is specified to 1. Administrative authorizations
are formally defined as follows.
Definition 2.4 (Administrative authorization) An administrative authorization is defined as ([t_b,∞], (s,o,p)),
where [t_b,∞] is the time interval associated with the authorization, s ∈ U is the user
to whom the authorization is granted, o ∈ O is the object on which the authorization is specified, and p is the
administrative privilege granted to s.
For instance, administrative authorization ([20,∞], (John,o_1,administer)) states that John has the administer
privilege on object o_1 and therefore can grant other users access authorizations on o_1 starting from time 20.
For each authorization ([t_b,t_e], A_1), we require g(A_1) to have the own or administer privilege on o(A_1).
Moreover, for each derivation rule ([t_b,t_e], A_1 ⟨op⟩ A_2) both the following conditions must be satisfied:
• g(A_1) has either the own or administer privilege on o(A_1);
• g(A_1) has either the own, administer, or refer privilege on o(A_2).
These conditions are checked at the time an authorization/rule is specified and the insertion of the
authorization/rule is accepted only if the conditions are satisfied (we will elaborate on this in Section 5).
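The following sketch shows the checks performed when an authorization or a rule is submitted, under the administrative privileges just listed; the catalog layout and the helper names (has_admin_priv, can_insert_authorization, can_insert_rule) are our own simplification.

def has_admin_priv(admin_auths, user, obj, privileges):
    """True if `user` holds one of `privileges` on `obj`.

    `admin_auths` is a collection of (tb, te, subject, object, privilege) tuples.
    """
    return any(s == user and o == obj and p in privileges
               for (_tb, _te, s, o, p) in admin_auths)

def can_insert_authorization(admin_auths, grantor, obj):
    """Condition for an explicit authorization: own or administer on the object."""
    return has_admin_priv(admin_auths, grantor, obj, {"own", "administer"})

def can_insert_rule(admin_auths, grantor, left_obj, right_obj):
    """Conditions for a rule A1 <op> A2: own/administer on o(A1), own/administer/refer on o(A2)."""
    return (has_admin_priv(admin_auths, grantor, left_obj, {"own", "administer"}) and
            has_admin_priv(admin_auths, grantor, right_obj, {"own", "administer", "refer"}))

admin = [(20, float("inf"), "John", "o1", "administer")]
print(can_insert_authorization(admin, "John", "o1"))   # True
print(can_insert_rule(admin, "John", "o1", "o2"))      # False: no privilege on o2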
2.2.3 Parametric derivation rules
Derivation rules can also use variables in their specification. We refer to these rules as parametric derivation
rules. To introduce parametric derivation rules, we first give the definition of authorization pattern.
Definition 2.5 (Authorization pattern) An authorization pattern AP is a tuple (s,o,m,pn,g) where s,g ∈
U ∪ {*}, o ∈ O ∪ {*}, m ∈ M ∪ {*}, and pn ∈ {+,-}.
Symbol '*' is a special character denoting any user, object, or access mode, depending on its position in the
authorization pattern.
Definition 2.6 (Matching authorization) An authorization A matches a pattern AP if each element of A is
equal to the corresponding element of AP, if different from '*'.
Parametric derivation rules are defined as follows.
Definition 2.7 (Parametric derivation rule) A parametric derivation rule is defined as ([t_b,t_e], AP_1 ⟨op⟩ AP_2),
where AP_1 and AP_2 are authorization patterns, and all the other elements are as in
Definition 2.3. Authorization patterns in the rule must verify the following conditions:
• g(AP_1) ≠ '*' and at least one element among s(AP_1), o(AP_1), and m(AP_1) is different from '*';
• if symbol '*' is used for s(AP_1), o(AP_1), or m(AP_1), it must also be used for the corresponding element s(AP_2),
o(AP_2), or m(AP_2).
(R1) ([1,∞], (Chris,*,*,+,Sam) whenever (sam-friends,*,*,+,Sam))
Figure 2: An example of parametric derivation rules
A parametric derivation rule can be seen as a shorthand for specifying several ground derivation rules operating
on different subjects, objects, or access modes. Given a parametric derivation rule, we refer to the ground
rules to which it corresponds as instances of the parametric rule. This is expressed by the following definition.
Definition 2.8 (Parametric rule instances) Let R = ([t_b,t_e], AP_1 ⟨op⟩ AP_2) be a parametric derivation
rule. The set of instances of R is the set composed of all possible ground derivation rules
([t_b,t_e], A_m ⟨op⟩ A_n) such that A_m matches AP_1, A_n matches AP_2, and such that the following condition is
satisfied: whenever symbol '*' appears in corresponding elements of AP_1 and AP_2, the corresponding elements
of A_m and A_n are equal.
Note that instances derived from parametric rules must also satisfy the constraints on administrative privileges
illustrated in the previous section for rules.
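The sketch below illustrates pattern matching (Definition 2.6) and one way to enumerate the instances of a parametric rule over known domains of subjects, objects, and access modes; the instantiation reflects our simplified reading of Definition 2.8 (wildcarded positions are filled consistently on both sides of the operator), and all names are ours.

from itertools import product

WILDCARD = "*"

def matches(auth, pattern):
    """Definition 2.6: each element of auth equals the corresponding pattern element, unless it is '*'."""
    return all(p == WILDCARD or a == p for a, p in zip(auth, pattern))

def instances(left_pat, right_pat, subjects, objects, modes):
    """Ground instances of a parametric rule left_pat <op> right_pat.

    Simplified reading of Definition 2.8: a wildcarded subject, object, or mode position is
    filled with the same value on both sides; pn and grantor are taken as given.
    """
    domains = [subjects, objects, modes]            # candidate values for positions s, o, m
    wild = [i for i in range(3) if left_pat[i] == WILDCARD or right_pat[i] == WILDCARD]
    result = []
    for choice in product(*(domains[i] for i in wild)):
        filler = dict(zip(wild, choice))
        def ground(pat):
            return tuple(filler[i] if i in filler and v == WILDCARD else v
                         for i, v in enumerate(pat))
        result.append((ground(left_pat), ground(right_pat)))
    return result

# Rule R1 of Figure 2: (Chris,*,*,+,Sam) whenever (sam-friends,*,*,+,Sam)
pairs = instances(("Chris", "*", "*", "+", "Sam"),
                  ("sam-friends", "*", "*", "+", "Sam"),
                  subjects=["Chris", "sam-friends"],
                  objects=["o1", "o2"], modes=["read", "write"])
print(len(pairs))   # 4 object/mode combinations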
The following example illustrates the use of parametric derivation rules.
Example 2.2 Sam wishes to grant the authorization to exercise a certain number of access modes on certain
objects to a group of friends, Chris, Matt, and Jim. Instead of specifying one authorization for every access
mode and every object for each of his friends, Sam can proceed as follows. A new user sam-friends, playing the
role of the group is defined. For each user that Sam wishes to include in the group, a whenever rule parametric
over the object and the access mode is defined where the authorization at the left of the operator has as subject
the user identifier and the authorization at the right has as subject sam-friends (see Figure 2). The time
interval associated with the rule can be interpreted as the time interval at which the user appearing on the left
is considered as a member of group sam-friends. For example, rules R 1 ,R 2 , and R 3 in Figure 2 allow given a
positive authorization specified for sam-friends, to derive the same authorization for for Chris, Matt, and Jim
respectively. Rule R 2 expires at time 100 (intuitively, after that time Matt will not be considered anymore a
member of the group); hence, the time interval associated with the authorizations derived for Matt will have
ending time equal to 100. 4
In the example above, Sam appears as grantor of the authorization on the right of the operators in rules R 1 -R 3 .
Hence, authorizations for Chris, Matt, and Jim will be derived only from authorizations granted to sam-friends
by Sam. Sam can require the rules to fire regardless of the grantor of the authorizations to sam-friends by putting
' ' as grantor in the right side of rules R 1 -R 3 .
3 Formal semantics
In this section we formalize the semantics of temporal authorizations and derivation rules. First of all it is
necessary to point out that the possibility to express negative authorizations introduces potential conflicts among
authorizations. Suppose that a negative authorization for a privilege on an object is granted to a user who has
previously obtained the same privilege on that object. We then have, for a given time interval, the presence
of both negative and positive authorizations. This is not to be intended as an inconsistency, since we consider
negative authorizations as prevailing with respect to positive authorizations.
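Read operationally, this means that a subject/object/mode triple is permitted at exactly those instants that are covered by some positive authorization and by no negative one. A minimal sketch, keeping instants as explicit sets for brevity (names and tuple layout are ours):

def permitted_instants(temporal_auths):
    """temporal_auths: iterable of (tb, te, (s, o, m, pn, g)) with finite bounds.

    Returns a dict mapping (s, o, m) to the set of instants at which access is allowed,
    applying the denials-take-precedence principle.
    """
    positive, negative = {}, {}
    for tb, te, (s, o, m, pn, _g) in temporal_auths:
        target = positive if pn == "+" else negative
        target.setdefault((s, o, m), set()).update(range(tb, te + 1))
    return {key: insts - negative.get(key, set()) for key, insts in positive.items()}

auths = [(10, 100, ("Jim", "o2", "write", "+", "Ann")),
         (50, 120, ("Jim", "o2", "write", "-", "John"))]
# Jim may write o2 only during [10,49]
print(sorted(permitted_instants(auths)[("Jim", "o2", "write")]) == list(range(10, 50)))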
Considering the set of authorizations and rules in Figure 2, from rule R_3 and authorization A_4 we can derive
a positive write authorization for Jim on object o_2 starting at time 10, while from authorization A_5 we have ([50,∞],(Jim,o_2,write,-,John)). This
is not an inconsistency, since we apply the denials-take-precedence principle. Hence, the negative authorization
prevails, and Jim will have the authorization to write object o_2 only in the interval [10,49]. The formal semantics
obeys the denials-take-precedence principle. We start the description of the formal semantics by introducing
the concept of a TAB.
Definition 3.1 (Temporal Authorization Base) A Temporal Authorization Base (TAB) is a set of temporal
authorizations and derivation rules.
In the rest of the paper, we denote with INST-TAB a TAB where each parametric rule has been substituted
by its set of instances according to Definition 2.8. Obviously, TAB and INST-TAB are equivalent.
The semantics of a TAB is given as a set of clauses in a general logic program corresponding to INST-TAB.
We use a logic with two sorts, the natural numbers (IN) as a temporal sort and a generic domain (D) as the other
sort. The language includes constant symbols for the natural numbers, a finite set of constant symbols (e.g., for
users, objects, and access modes) for elements in D, and temporal variable symbols t, t', t'', ... . The predicate symbols include the
temporal predicate symbols ≤ and < with the fixed interpretation of the corresponding order relations on natural
numbers, the predicate symbol F() with temporal arity 1 and domain arity 5, the predicate symbols FN() and
FP() with temporal arity 2 and domain arity 5, and the predicate symbol G() with temporal arity 1 and domain
arity 3. The resulting language is very similar to the temporal deductive language proposed in [2] with the main
difference being the presence of negation in our rules.
For each type of authorization/rule in INST-TAB, Table 1 reports its corresponding clause/set of clauses.
Intuitively, the predicate F() is used to represent the authorizations at specific instants. The fact that F(t, A)
is true in an interpretation corresponds to the validity of A at instant t according to that interpretation. The
predicates G(), FN(), and FP() are auxiliary predicates, used to avoid quantification. Intuitively, G(t, s, o, m) is true
in an interpretation if there is at least one negative authorization, with the same s, o, m, valid at instant t according
to that interpretation. FN(t'', t, A) is true in an interpretation if there is at least an instant t' with t'' ≤ t' ≤ t at
which authorization A is false according to that interpretation. FP(t'', t, A) is true in an interpretation if there is
at least an instant t' with t'' ≤ t' ≤ t at which authorization A is true according to that interpretation.
We denote the logic program corresponding to a TAB with PTAB . We consider stable model semantics of
logic programs with negation [7] to identify the models 2 of PTAB .
Definition 3.2 (Valid Authorization) Given a model M of PTAB , an authorization A is said to be valid at
time t with respect to M if F (t; A) is contained in M . If PTAB has a unique model M and M contains F (t; A),
we simply say that A is valid at time t.
3.1 Restrictions on rules
An important property that we require for our set of temporal authorizations and rules is that we must always
be able to derive a unique set of valid authorizations. This means, for example, that each set of rules together
with a fixed set of explicit authorizations should not derive different authorizations depending on the evaluation
order. We give an example illustrating how different authorizations can be derived depending on the evaluation
order.
Example 3.1 Consider the following rules:
(R
(R
Suppose that there are no explicit authorizations for A 1 or A 2 in the TAB and these are the only rules. If we
consider first R 1 we derive authorization A 1 and we cannot derive A 2 . If we consider first R 2 , we derive A 2 and not
A 1 . Hence, we have two different sets of derived authorizations. In this case there is no reason to give preference
to one set or the other. 4
From the point of view of the semantics that we have given, the property of always having a unique set
of valid authorizations is guaranteed only if there exists a unique model of the program corresponding to the
TAB. Hence, we limit derivation rules so that a unique model can be computed. In the rest of this section we
formally define sets of rules that should be avoided in order to guarantee a unique model for PTAB , and we give
an algorithm for their detection.
In the following, we use the term negative operator (negop) to refer to whenevernot or unless, and
negative rule to refer to a rule using a negative operator. Similarly, positive operator (posop) is used to refer to
whenever or aslongas, present operator (presentop) is used to refer to whenever or whenevernot, and
past operator (pastop) is used to refer to unless or aslongas. Moreover, we use symbols A_i as a shortcut for
the 5-tuple (s_i,o_i,m_i,pn_i,g_i).
2 Due to the properties of the resulting program, in this case stable models are identical to well-founded models [12].
G(t, s, o, m) ← F(t, s, o, m, -, g)
Table 1: Semantics of temporal authorizations and rules
A binary relation ↪ among the temporal authorizations appearing in INST-TAB is defined as follows:
• if there is a rule ([t_b,t_e], A_m ⟨op⟩ A_n) in INST-TAB, where ⟨op⟩ is an arbitrary operator, then A_n[t] ↪ A_m[t]
for each t with t_b ≤ t ≤ t_e. The ↪ relation represents a dependency of A_m at instant t from A_n at the same
instant. When ⟨op⟩ is a negative operator we say that ↪ represents a strict dependency.
• if there is a rule ([t_b,t_e], A_m ⟨pastop⟩ A_n) in INST-TAB, then A_n[t'] ↪ A_m[t] for each t', t with t_b ≤ t' < t ≤ t_e;
in this case ↪ represents a strict dependency.
Using this relation we can define the more complex notion of priority among temporal authorizations.
Definition 3.3 (Priority) An authorization A_n at time t has higher priority than an authorization A_m at time
t', written A_n[t] > A_m[t'], if and only if one of the following conditions holds:
• a sequence A_n[t] = A_1[t_1] ↪ A_2[t_2] ↪ ... ↪ A_k[t_k] = A_m[t'] exists such that at least one of the ↪ relationships
is a strict dependency;
• two sequences A_n[t] = A_1[t_1] ↪ ... ↪ A_k[t_k] and A_m[t'] = A'_1[t'_1] ↪ ... ↪ A'_l[t'_l] exist such that s(A_k) = s(A'_l),
o(A_k) = o(A'_l), m(A_k) = m(A'_l), pn(A_k) = '-', pn(A'_l) = '+', and t_k = t'_l;
• an authorization A_l and an instant t'' exist such that A_n[t] > A_l[t''] and A_l[t''] > A_m[t'].
Note that the second condition in the above definition implies that each negative authorization has higher
priority than its positive counterpart at the same instant.
We are now ready to identify critical sets of derivation rules.
Definition 3.4 (Critical set) A TAB contains a critical set of rules if and only if an authorization Am in
INST-TAB and an instant t exist such that A_m at instant t has priority over itself (A_m[t] > A_m[t]).
Example 3.2 Consider the set of rules:
(R 1
(R
(R 3
These three rules form a critical set. It is easily checked that definition 3.4 applies to this set of rules. In-
deed, by the first condition in Definition 3.3 and rules R 1 and R 2 we have that (Bob,o 1 ,write,+,Jim)[41]
the second condition, and, again by the first condition and rule R 3 , we obtain (John,o
Applying twice the third condition (transitivity) we have (Bob,o_1,write,+,Jim)[41] > (Bob,o_1,write,+,Jim)[41].
Hence, this set of rules is critical. 4
The CSD (Critical Set Detection) algorithm, described in the next subsection, can be used to recognize and
reject a TAB containing a critical set.
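Abstracting away the temporal dimension, a critical set corresponds to a cycle that traverses at least one strict dependency in the graph induced by ↪, together with the edges that give each denial priority over its positive counterpart. The sketch below checks this simplified, time-abstracted condition with a reachability search; it is only an intuition-level approximation of the level-based CSD algorithm of Section 3.2, and all names are ours.

from collections import defaultdict

def has_critical_set(rules, authorizations):
    """Time-abstracted critical-set test.

    rules: iterable of (op, left_auth, right_auth) read as `left <op> right`;
    authorizations: iterable of (s, o, m, pn, g) tuples appearing in INST-TAB.
    An edge right -> left is added for every rule (strict for whenevernot/unless),
    plus a strict edge from each negative authorization to its positive counterpart.
    Returns True if some cycle contains a strict edge.
    """
    edges = defaultdict(set)
    strict_edges = set()
    for op, left, right in rules:
        edges[right].add(left)
        if op in ("whenevernot", "unless"):
            strict_edges.add((right, left))
    for a in authorizations:
        if a[3] == "-":
            counterpart = (a[0], a[1], a[2], "+", a[4])
            edges[a].add(counterpart)
            strict_edges.add((a, counterpart))

    def reaches(src, dst):
        stack, seen = [src], set()
        while stack:
            node = stack.pop()
            if node == dst:
                return True
            if node in seen:
                continue
            seen.add(node)
            stack.extend(edges[node])
        return False

    return any(reaches(v, u) for (u, v) in strict_edges)

a_pos = ("Bob", "o1", "write", "+", "Jim")
a_neg = ("Bob", "o1", "write", "-", "Ann")
print(has_critical_set([("whenevernot", a_pos, a_neg), ("whenever", a_neg, a_pos)],
                       [a_pos, a_neg]))   # True: mutual dependency through a negative operator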
3.2 An algorithm for critical set detection
We use a set of disjoint³ intervals T = {[t_1,t_2], [t_3,t_4], ...} as a compact notation for the set of natural numbers
included in these intervals. Hence, the operations of union (T_1 ∪ T_2), intersection (T_1 ∩ T_2), and difference (T_1 \ T_2)
have the usual semantics of set operations. However, we implement these operations so that they can be performed
using intervals and giving the result as a set of disjoint intervals. We use two kinds of set membership: t ∈ T is
true if t is one of the natural numbers represented by T, and [t_b,t_e] ∈ T is true if [t_b,t_e] is exactly one
of the disjoint intervals of T.
Given an INST-TAB, the algorithm for critical set detection returns FALSE if a critical set exists in TAB;
otherwise it returns a sequence of sets (levels) ⟨L_1, ..., L_k⟩ representing a partition of the set of pairs ⟨A, t⟩ for
each authorization A appearing (either explicitly or in a rule) in INST-TAB and for each instant t between 1
and t_max. We define t_max to be the first instant greater than the maximum temporal constant appearing in
INST-TAB. In the following, we refer to each set L_i as level i. If pair ⟨A, T⟩ is in level i, we say that A is in
level i for each t ∈ T. Intuitively, authorizations appearing at lower levels for a certain set of instants have higher
priority for evaluation than authorizations appearing at higher levels (for the same or different sets of instants).
In this and other algorithms in the paper, we use the functions 'Add()' and 'Delete()' to add/delete or modify
the pairs ⟨A, T⟩. The result of the statement 'Add ⟨A, T⟩ to L' is the addition of that pair to L if there is no
pair ⟨A, T'⟩ in L for any T'; otherwise it is the replacement of ⟨A, T'⟩ with ⟨A, T ∪ T'⟩. Analogously, the result
of 'Delete ⟨A, T⟩ from L' is the deletion of the pair ⟨A, T'⟩ from L if T' ⊆ T; otherwise
it is the replacement of ⟨A, T'⟩ with ⟨A, T' \ T⟩.
The algorithm is reported in Figures 3,4, and 5, and it works as follows. In step 1, tmax is substituted for each
occurrence of symbol '1' in time intervals associated with authorizations and rules in INST-TAB. There is no
need to consider all time instants up to 1. For instants greater than tmax the authorizations that are valid remain
unchanged. If a critical set exists, it will be found at a time lower than or equal to tmax . In step 2, max-level is
determined as the number of authorizations appearing in INST-TAB multiplied by tmax . max-level corresponds
to the number of pairs ⟨A, t⟩ to be partitioned. Then, the number of levels (top-level) is initialized to 1. Level 1
initially contains all authorizations in INST-TAB for each instant between 1 and t_max. Step 3 repeatedly
calls function 'check-levels()' which examines the authorizations at different levels and the dependencies among
authorizations. It possibly changes level to pairs hA; T i on the basis of the dependency. The loop at step 3 ends
when the last call of 'check-levels()' does not change any level or the level number is greater than max-level. In
the first case, the levels constructed by the algorithm are returned. In the second case, FALSE is returned.
Function 'check-levels()' is composed of three steps. In step 1, all levels from top-level to 1 are examined. If a
negative authorization An is found at a given level l for a certain set of time intervals T n;l , the level of all positive
authorizations Am having same subject, object, and access mode as An and appearing at a level lower than l is
3 Two intervals are considered disjoint if they cannot be collapsed into a single one (note that [1; 2] and [3; 4] are not disjoint).
Algorithm 3.1 Critical Set Detection (CSD) Algorithm
INPUT: INST-TAB.
OUTPUT: FALSE if a critical set is detected;
otherwise a sequence of sets representing a partition of
the set of pairs hA; ti such that A appears in INST-TAB and 1 - t - t max .
Each set L i is called level i and L
T j;i is a set of time intervals associated with A j at level i.
1. For each temporal authorization or rule having the time limit
2. max-level := n-auth*t max , where n-auth is the number of authorizations appearing in INST-TAB
top-level
For each authorization A appearing in INST-TAB Do
endfor
3. Repeat check-levels
Until there are no changes to any level or top-level > max-level.
4. Return FALSE if top-level > max-level, otherwise the sequence ⟨L_1, ..., L_top-level⟩.
Figure 3: An algorithm for critical set detection
increased to l + 1 for all time instants in T_n,l. In step 2, all the rules ([t_b,t_e], A_m ⟨presentop⟩ A_n) are
evaluated. Levels are examined in decreasing order starting from top-level. Every time authorization A_n is found
at level l for a time interval T_n,l not disjoint from [t_b,t_e], function 'update()' is called to increase the level of
A_m for the time instants appearing in both T_n,l and [t_b,t_e]. The new level is l, if the operator in the rule is
whenever, and l + 1, if it is whenevernot. In step 3, all the rules ([t_b,t_e], A_m ⟨pastop⟩ A_n) are evaluated.
Again, levels are examined in decreasing order starting from top-level. Every time authorization A_n is found at
level l for a time interval T_n,l not disjoint from [t_b,t_e], function 'update()' is called to increase the level of A_m for
the time instants greater than or equal to the minimum instant t_l in both T_n,l and [t_b,t_e]. The new level
is l, for instant t_l, and l + 1, for instants in [t_b,t_e] greater than it.
Function 'update()', given a level lev, an authorization Am , and a set of time intervals T , brings authorization Am
at level lev for each time instant for which Am appears at levels lower than lev.
Example 3.3 Consider a TAB containing the following authorizations and rules:
([40,60],A
where A
2jJohn indicates a negative authorization with same subject, object, and access mode as A 2 but with John
as grantor. The algorithm for critical set detection returns the following levels:
Correctness of the CSD algorithm and model uniqueness
The following two theorems state some properties of the levels returned by the CSD algorithm with respect to
the dependencies among authorizations.
Theorem 3.1 Let A n and A m be two authorizations appearing in INST-TAB and t; t 0 be two time instants lower
than or equal to t max such that A n [t] ,! A m [t 0 ]. Then, either the algorithm returns FALSE or, at the end of the
execution, authorization A m for instant t 0 appears at a level higher than or equal to that of authorization A n for
instant t. If ,! is a strict dependency then A m for instant t 0 appears at a level higher than that of A n for instant t.
Theorem 3.2 Let A n and A m be two authorizations appearing in INST-TAB with same subject, access mode,
and object but with different sign. Then, either the algorithm returns FALSE or, at the end of the execution,
the positive authorization appears at a level higher than that of the negative authorization for each time instant
between 1 and t max .
The correctness of the CSD algorithm is stated by the following theorem.
Theorem 3.3 Given a TAB, i) the CSD algorithm terminates and ii) it returns a FALSE value if and only if
the TAB contains a critical set.
As we have observed, for the purpose of determining the authorization state of the system at a certain instant,
the uniqueness of the PTAB model at that instant is required. The uniqueness of the model in absence of critical
sets is guaranteed by the following theorem.
Theorem 3.4 Given a TAB with no critical sets, the corresponding logic program PTAB has a unique model.
4 Materialization of authorizations
In our model, the control of whether a request to access an object for a given access mode can be authorized may
require the evaluation of several rules. Two different strategies can be used to enforce access control:
Function check-levels(hL
1. For l
partition the set of pairs hA l such that
ig such that
For each S i Do
For h :=
For each
such that s(Am)=s i , o(Am)=o i , m(Am)=m i ,
Delete
endfor
endfor
endfor
endfor
2. For each rule
TR
l := top-level
While TR 6= f[ ]g and l - 1 Do
Case hpresentopi of
endcase
TR := TR n T n;l
endif
l
endwhile
endfor
3. For each rule
l := top-level
While
case hpastopi of
aslongas: If t l ! t f then update (l
update
unless: update (l
endcase
endif
l
endwhile
endfor
Figure 4: Function check-levels
Function
If lev ? top-level then
top-level := lev
endif
While T 6= f[ ]g and h - 1 Do
Delete
endif
endwhile
Figure 5: Function update
Run-time derivation: Every time a user requires an access, the system verifies whether the access request can
be authorized on the basis of the authorizations and the derivations rules in TAB and by computing, if
necessary, the derived authorizations.
Materialization: The system permanently maintains all the valid authorizations, both explicit and derived.
Upon an access request, the system can immediately check whether a valid corresponding positive authorization
exists.
Both these approaches have some pros and cons. The first approach has the advantage that no actions are
required upon modification of the TAB; however access control becomes cumbersome since each access request
may require the computation of derived authorizations. In the second approach, this run-time computation is
avoided at the price of explicitly maintaining the derived authorizations that will have to be updated every time
the TAB is modified.
Since, generally, access requests are considerably more frequent than administrative requests modifying authorizations
and/or rules, we argue that the second approach is preferable. Moreover, the drawback represented
by the need of recalculating the explicit authorizations upon modifications to the TAB can be overcome by the
application of efficient algorithms that update the materialized authorizations upon modifications without need
of reconsidering all rules and recomputing all the materialized authorizations.
For the reasons above, we adopt the materialization approach. In the following we illustrate how to compute,
given a TAB, the corresponding valid authorizations. In Section 6 we will provide algorithms for reflecting
changes to the TAB in the materialized authorizations without the need of recomputing all authorizations from
the beginning.
Definition 4.1 (Temporal Authorization Base Extent) The Temporal Authorization Base Extent (TABEXT )
of TAB is the set of valid authorizations derived from TAB.
TABEXT contains all the valid authorizations of TAB computed according to the semantics of explicit
authorizations and derivation rules.
Authorizations are maintained in TABEXT using a compact representation: each A k is associated with a set
T k of disjoint intervals, representing the instants at which A k is valid.
At time t=0, TABEXT does not contain any explicit or derived authorizations. Upon the execution of each
administrative operation (such as grant/revoke of authorizations or rules) TABEXT is updated to reflect the
effects of the operation execution.
If the strategy of maintaining both explicit and derived authorizations is not adopted from the beginning,
it is necessary to populate TABEXT from the explicit authorizations and derivation rules already present in
TAB. If there is no critical set, the CSD algorithm returns a sequence of levels ⟨L_1, ..., L_k⟩ such that, for each
authorization, the corresponding set of instants {1, ..., t_max} is partitioned among the k levels. This sequence is
essential to establish an evaluation order that guarantees that the computed TABEXT contains all and only valid
authorizations.
Algorithm 4.1, reported in Figure 6, computes the TABEXT of a TAB. The algorithm receives as input the
TAB's instantiated version INST-TAB and the sequence ⟨L_1, ..., L_k⟩ given by the CSD algorithm. The algorithm
is based on the technique used to compute the model of (locally) stratified logic programs. Intuitively, rule
instances and authorizations are partitioned among a finite number of levels according to a priority relation and
inferences at a certain level are performed only when all possible inferences at lower levels have been performed.
The main step of the algorithm (step 2) is an iteration on the k levels returned by the CSD algorithm. For
each level i, starting from i = 1, the algorithm:
a) Constructs the set X_i of authorizations and rules available at level i. More precisely, X_i contains pairs
⟨x, T'⟩, where x is an element of INST-TAB. x can be an explicit authorization ([t_b,t_e],A_m) or a rule ([t_b,t_e],
A_m ⟨op⟩ A_n). T' is the set of intervals representing all instants t such that A_m is in level i for instant
t.
b) Derives new authorizations drawing all possible inferences at level i by using the elements in X i and the
authorizations previously derived.
The last step of the algorithm (step 3) extends the intervals of derived authorizations on the basis of the
following observation:
If we have derived an authorization for the instant t max , we are guaranteed that the authorization can
be derived for any instant greater than t max
This fact is due to the particular form of our rules and it is formally proved as part of the proof of Theorem 4.1.
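The level-by-level evaluation can be sketched as follows for ground whenever/whenevernot rules only; the code assumes the level assignment produced by the CSD algorithm (the body of a whenevernot rule and the negative counterpart of a positive authorization sit at strictly lower levels) and is a simplification of Algorithm 4.1, not a transcription of it.

def compute_tabext(levels, t_max):
    """Simplified level-by-level materialization.

    levels[i] lists the elements assigned to level i+1, each either ("auth", tb, te, A) or
    ("rule", tb, te, A_head, op, A_body), with A a (s, o, m, pn, g) tuple and op one of
    "whenever" / "whenevernot".  Returns a dict mapping A to its set of valid instants.
    """
    valid = {}

    def blocked(a, t):
        # denials take precedence: a positive authorization is blocked by any valid denial
        return a[3] == "+" and any(
            t in insts for b, insts in valid.items() if b[3] == "-" and b[:3] == a[:3])

    def holds(a, t):
        return t in valid.get(a, set()) and not blocked(a, t)

    for level in levels:                    # lower levels are evaluated first
        changed = True
        while changed:                      # fixpoint among the elements of one level
            changed = False
            for elem in level:
                if elem[0] == "auth":
                    _, tb, te, head = elem
                    new = set(range(tb, min(te, t_max) + 1))
                else:
                    _, tb, te, head, op, body = elem
                    span = range(tb, min(te, t_max) + 1)
                    if op == "whenever":
                        new = {t for t in span if holds(body, t)}
                    else:                   # whenevernot
                        new = {t for t in span if not holds(body, t)}
                old = valid.setdefault(head, set())
                if not new <= old:
                    old |= new
                    changed = True
    return valid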
The following example illustrates an application of the algorithm for TABEXT generation.
Algorithm 4.1
INPUT: The output of the CSD Algorithm and INST-TAB.
is a valid authorization for each interval in T i g
sequence hX
with x=([t b ,te ],Am ) or x=([t b ,te ],Am hopi An ), and T 0
;g.
and each X i are initialized to be empty
For i:=1 to k Do
a) #Construction of X i #
For each element x in INST-TAB, where x=([t b ,te ],Am ) or x=([t b ,te ],Am hopiAn ) Do
endfor
b) #Construction of TABEXT#
Repeat
For each element hx; T
If
endfor
Until no new authorization can be derived
endfor
For each hA; T i in TABEXT with t with 1.
Function
Case op of
aslongas: If t b 2 I for some I 2Tn then T :=
else
unless: If
else
endif
endcase
If pn(Am
return T
Figure 6: An algorithm for TABEXT generation.
Example 4.1 Consider the TAB illustrated in Example 3.3. The levels computed by the CSD algorithm are
illustrated in the same example. We now apply the algorithm for TABEXT generation. Let TAB (i)
EXT be the
TABEXT resulting from the i-th iteration.
ffl For
2.b TAB (1)
since h([10; 200]; A 1 ); f[10; 200]g)i is the only element of X 1 and there
are no authorizations in TAB (1)
blocking A 1 .
ffl For
2.a
80]gig.
2.b From h([40; 60]; A
2jJohn whenevernot A 3 ); f[40; 60]gi and authorizations in TAB (1)
EXT we obtain:
From h([5; 100]; A 2 whenever A 1 ); f[5; 39]; [61; 100]gi we obtain:
Hence, TAB (2)
80]gig.
ffl For
2.a
2.b Function Derive-auth(h([5; 100]; A 2 whenever A 1 ); f[40; 60]gi, TAB (2)
returns T=;, since authorization
for the time interval [40,60].
80]gig.
ffl There are no more levels, t 201 in this example and it does not appear in TAB (3)
EXT . Hence, the
algorithm terminates returning TAB (3)
EXT .The correctness of the algorithm is stated by the following theorem:
Theorem 4.1 Given TABEXT as returned by Algorithm 4.1, an authorization A is valid at time t if and only if
there exists hA; T i in TABEXT with t 2 T .
Once we have an updated TABEXT , each access request can be checked against TABEXT . An access request
from user s_1 to exercise access mode m_1 on object o_1 at time t will be allowed only if a pair ⟨A, T⟩ exists in
TABEXT such that s(A) = s_1, o(A) = o_1, m(A) = m_1, pn(A) = '+', and t ∈ T.
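With the materialization in place, access control is thus a single lookup. A minimal sketch, with TABEXT represented as a dictionary from authorization tuples to lists of validity intervals (our own encoding):

def access_allowed(tabext, subject, obj, mode, t):
    """tabext maps (s, o, m, pn, g) to a list of (lo, hi) validity intervals."""
    for (s, o, m, pn, _g), intervals in tabext.items():
        if (s, o, m, pn) == (subject, obj, mode, "+") and any(lo <= t <= hi for lo, hi in intervals):
            return True
    return False

tabext = {("John", "o1", "read", "+", "Sam"): [(41, 200)]}
print(access_allowed(tabext, "John", "o1", "read", 100))   # True
print(access_allowed(tabext, "John", "o1", "read", 20))    # False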
5 TAB administration
Administrative operations allow the users to add, remove, or modify temporal authorizations and derivation rules
and to give or revoke other users the right to administer their objects or to refer to them in derivation rules.
Each temporal authorization, and each derivation rule in the TAB is identified by a unique label assigned by
the system at the time of its insertion. The label allows the user to refer to a specific temporal authorization or
derivation rule upon execution of administrative operations.
In the following we discuss the administrative operations considered in our model. The syntax of the
operations in BNF form is given in Table 2. With reference to the table, non-terminal symbols ⟨subject⟩,
⟨object⟩, ⟨access-mode⟩, ⟨auth-t⟩, and ⟨nat-number⟩ represent elements of the domains S, O, M, {+,-}, and
IN, respectively. Non-terminal symbols ⟨aid⟩ and ⟨rid⟩ represent system labels. Symbol # can be used in the
specification of the starting time for an authorization/rule to indicate the time at which the administrative request
is submitted to the system.
Administrative requests can affect access authorizations, derivation rules, or administrative authorizations,
as follows.
Requests affecting the authorizations on an object
These are requests for granting or revoking authorizations on an object. The user requesting them must
have either the own or the administer privilege on the object.
Grant To grant an access mode on an object to a subject for a specified time interval. The grant operation
results in the addition of a new temporal authorization. The starting time of the authorization must
be greater than or equal to the time at which the authorization is inserted (it is not possible to specify
retroactive authorizations).
Deny To deny an access mode on an object to a subject for a given time interval. The deny operation
results in the addition of a new temporal negative authorization.
Revoke To revoke an access mode on an object from a subject. The revoke operation can be required with
reference to a single authorization by specifying its label (i.e., the deletion of a specific authorization is
requested) or with reference to an access mode on an object with respect to a given time interval. The
revoke operation results in the deletion or modification of all the temporal authorizations of the revokee
for the access mode on the object granted by the user who revokes the privilege. If the time interval
for which the revocation is requested spans from the time of the request to ∞, all authorizations for
the access mode on the object granted to the revokee by the revoker will be deleted. If the revocation
is required for a specific time interval, all the authorizations for the access mode on the object granted
to the revokee by the revoker will be deleted or modified to exclude the interval (and possibly split in
more authorizations). Note that a user can revoke only the authorizations he granted and then the
revoke request by a user affects only the authorizations granted by that specific user.
Revoke negation To revoke the negation for an access mode on an object from a subject. It is analogous
to the Revoke operation with the only exception that it applies to negative authorizations.
• Requests affecting rules
These are requests for specifying or deleting rules. The user requesting them must have either the own or the
administer privilege on the object appearing at the left of the operator and either the own, administer,
or refer privilege on the object appearing at the right of the operator.
Addrule To add a new derivation rule. The grantor of the authorization appearing at the left of the
temporal operator identifies the user inserting the rule. Like for authorizations, the starting time of
the interval associated with the rule must be greater than the time at which the request is specified.
Droprule To drop a derivation rule previously specified. The operation requires, as argument, the label
of the rule to be deleted. Like for the revocation of authorizations, a user can drop only the rules that
he has specified.
• Requests affecting administrative authorizations
These are requests for granting or revoking administrative privileges on an object. They can be executed
only by the owner of the object.
Grantadm To grant the administer privilege on an object to a subject. It results in a new administrative
authorization spanning from the time of the request to ∞.
Revokeadm To revoke the administer privilege on an object to a subject. It results in: i) the deletion
of the authorization for the administer privilege on the object previously granted to the revokee,
and ii) the deletion of the authorizations on the object and of the derivation rules where the object
appears in the authorization at the left of the operator specified by the revokee. If the revokee does
not have the reference privilege on the object, also the derivation rules where the object appears in
the authorization at the right of the operator are deleted.
Grantref To grant the refer privilege on an object to a subject.
Revokeref To revoke the refer privilege on an object to a subject. It results in the deletion of the
authorization for the refer privilege on the object previously granted to the subject and in the deletion
of all the rules granted by the revokee where the object appears in the authorization at the right of
the operator.
Grant ⟨access-mode⟩ on ⟨object⟩ to ⟨subject⟩
⟨deny⟩ ::= Deny ⟨access-mode⟩ on ⟨object⟩ to ⟨subject⟩
Revoke ⟨access-mode⟩ on ⟨object⟩ from ⟨subject⟩
Revoke negation ⟨access-mode⟩ on ⟨object⟩ from ⟨subject⟩
⟨grant-adm⟩ ::= Grantadm on ⟨object⟩ to ⟨subject⟩
⟨revoke-adm⟩ ::= Revokeadm on ⟨object⟩ from ⟨subject⟩
⟨grant-ref⟩ ::= Grantref on ⟨object⟩ to ⟨subject⟩
⟨revoke-ref⟩ ::= Revokeref on ⟨object⟩ from ⟨subject⟩
Table 2: Syntax of administrative operations
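The grant, deny, and revoke operations on access authorizations can be sketched as follows, for explicit authorizations only (no derivation rules and no administrative checks); the TAB class, its label scheme, and the tuple layout are our own, and the interval splitting performed by revoke mirrors the behaviour described above.

from math import inf

class TAB:
    """A toy authorization base holding labelled temporal authorizations only."""

    def __init__(self):
        self.auths = {}           # label -> (tb, te, (s, o, m, pn, g))
        self.next_label = 0

    def grant(self, grantor, subject, obj, mode, tb, te=inf, pn="+"):
        """Implements Grant (pn='+') and Deny (pn='-')."""
        label = f"A{self.next_label}"
        self.next_label += 1
        self.auths[label] = (tb, te, (subject, obj, mode, pn, grantor))
        return label

    def revoke(self, grantor, subject, obj, mode, tb, te=inf, pn="+"):
        """Remove or shrink the grantor's own authorizations overlapping [tb, te]."""
        for label, (atb, ate, a) in list(self.auths.items()):
            if a != (subject, obj, mode, pn, grantor):
                continue
            if te < atb or tb > ate:          # no overlap with the revoked interval
                continue
            del self.auths[label]
            if atb < tb:                      # keep the part before the revoked interval
                self.grant(grantor, subject, obj, mode, atb, tb - 1, pn)
            if te < ate:                      # keep the part after it
                self.grant(grantor, subject, obj, mode, te + 1, ate, pn)

tab = TAB()
tab.grant("Sam", "Chris", "o1", "read", 10, 100)
tab.revoke("Sam", "Chris", "o1", "read", 40, 60)   # leaves [10,39] and [61,100]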
6 TABEXT Maintenance
Execution of administrative operations illustrated in the previous section can change the set of valid authoriza-
tions. The TABEXT has to be modified accordingly. For instance, the insertion of an explicit authorization can
cause the deletion of authorizations from TABEXT . This happens if the authorization appears in the right side
of a negative rule, or if it is a negative authorization. A similar problem arises for authorization deletion.
We have devised a set of algorithms that update TABEXT upon each administrative request, without the
need of recomputing all the materialized authorizations. These algorithms use methods similar to those employed
for the maintenance of materialized recursive views with negation [8].
The maintenance algorithms make use of the sequences ⟨L_1, ..., L_k⟩ and ⟨X_1, ..., X_k⟩ defined in Section 4, that
are permanently stored and updated by them to reflect the changes in TAB. The approach exploits the fact that
authorizations in TAB^(i)_EXT are derived using only authorizations in TAB^(i-1)_EXT and rules in X_i. Thus, a change
for an authorization/rule of level i does not affect authorizations in TAB^(j)_EXT with j < i. Only authorizations in
TAB^(j)_EXT with j ≥ i need to be reconsidered.
In the following, we illustrate an algorithm for updating TABEXT upon insertion of new positive autho-
rizations, based on the Dred algorithm [8]. The methods to maintain TABEXT after the insertion/deletion of
a negative authorization and the deletion of a positive one are very similar to that for positive authorizations
insertion. We refer the reader to [3] for the description of these algorithms and for the ones for insertion/deletion
of derivation rules.
6.1 Insertion of explicit positive authorizations
The algorithm in Figure 7 implements the maintenance of TABEXT for the insertion of an explicit positive au-
thorization. It receives as input TABEXT, INST-TAB, its corresponding sequences ⟨L_1, ..., L_k⟩ and ⟨X_1, ..., X_k⟩,
and a positive authorization, and returns TAB^u_EXT, the set of valid authorizations resulting from the insertion of
the positive authorization, and the updated sequences ⟨L'_1, ..., L'_k'⟩ and ⟨X'_1, ..., X'_k'⟩. The algorithm works as
follows: suppose that a positive authorization ([t b ,t e ],A k ) has been inserted. If the inserted authorization does not
appear in INST-TAB or its time interval exceeds t_max, it is necessary to recompute the sequence ⟨L_1, ..., L_k⟩ in
which authorizations have been partitioned by the CSD algorithm, because the partition of the authorizations
among the levels changes and the number of levels could increase (step 1). In step 2 the positive authorization is
inserted in INST-TAB. Step 3 iteratively considers all the elements ⟨A, T⟩ in TABEXT and replaces each symbol
'∞' in T with t_max. Step 4 initializes S_INS, S_DEL, and TAB^u_EXT. S_INS and S_DEL are two data structures
containing the authorizations inserted and deleted from TAB^u_EXT till the current point of the computation. The
authorizations are kept in S INS and SDEL using the same representation as for TABEXT . Then (step 5), the
algorithm computes l_min, the least level in which authorization A_k appears in an instant t of the time interval
[t_b,t_e]. All the operations for TABEXT maintenance will be executed starting from level l_min. Step 6 computes
the sets X'_1, ..., X'_{l_min-1}; in this case X'_i = X_i, since the insertion of the new authorization does not change the
levels of the authorizations in INST-TAB. Step 7 is an iteration on the levels returned by the CSD algorithm,
starting from level l_min. For each level i, the algorithm performs the following operations:
• compute the set X'_i by adding to X_i the element ⟨A_k, T_i⟩, where T_i is the set of time intervals representing all the time instants in which the inserted authorization is in level i;
• compute T, the set of time intervals representing all the time instants in which the inserted authorization is in level i and it is not blocked by a negative authorization (a minimal sketch of this interval computation is given below);
• insert the element ⟨A_k, T⟩ in S_INS and in TAB^u_EXT;
• call function 'Dred-Ext()', which computes all the authorizations of level i that have to be inserted into or removed from TAB^u_EXT because of the insertion of ([t_b,t_e],A_k).
Finally, the last step of the algorithm iteratively considers all the elements in the updated TABEXT and substitutes each value t_max in T with the symbol '∞'.
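The interval manipulation in the second step above can be made concrete with a small, self-contained sketch. The interval representation and the function name below are illustrative assumptions, not code from the paper.

def subtract_intervals(pos, blocked):
    """Remove a list of closed integer intervals `blocked` from the closed
    interval `pos = (t_b, t_e)`, returning the remaining intervals."""
    remaining = [pos]
    for (b_lo, b_hi) in blocked:
        next_remaining = []
        for (lo, hi) in remaining:
            if b_hi < lo or b_lo > hi:           # no overlap with this fragment
                next_remaining.append((lo, hi))
                continue
            if b_lo > lo:                        # left fragment survives
                next_remaining.append((lo, b_lo - 1))
            if b_hi < hi:                        # right fragment survives
                next_remaining.append((b_hi + 1, hi))
        remaining = next_remaining
    return remaining

# Example: ([40,50], A3) inserted while a negative authorization blocks [44,46];
# the authorization remains unblocked in [40,43] and [47,50].
print(subtract_intervals((40, 50), [(44, 46)]))   # [(40, 43), (47, 50)]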
Function 'Dred-Ext()', given a level l and the authorizations inserted into and deleted from TAB^u_EXT up to the current point in the computation, updates TAB^u_EXT according to the rules that can be fired in level l. The function consists of three main steps. Step (a) adds to S_DEL and removes from TAB^u_EXT an overestimate of the authorizations that need to be deleted because of the insertion of ([t_b,t_e],A_k); an authorization is added to S_DEL by step (a) if the insertion of ([t_b,t_e],A_k) invalidates any derivation of the authorization from the elements of X'_l. Step (b) reinserts in TAB^u_EXT the authorizations deleted in the previous step that have an alternative derivation; the reinserted authorizations are obviously removed from S_DEL. Finally, step (c) adds to S_INS and to TAB^u_EXT all the new authorizations that can be derived from the derivation rules in X'_l because of the insertion of ([t_b,t_e],A_k).
Theorem 6.1 Given a TAB and a positive authorization ([t_b,t_e],A_k): i) Algorithm 6.1 terminates. Moreover, ii) the sequence ⟨X'_1,...,X'_k⟩ computed by the algorithm is correct. Finally, iii) the TAB^u_EXT computed by the algorithm contains all and only the valid authorizations wrt TAB ∪ {([t_b,t_e],A_k)}.
The following example illustrates how Algorithm 6.1 works.
Example 6.1 Consider the TAB illustrated in Example 3.3. The set of materialized authorizations and the sets X_i for this TAB are illustrated in Example 4.1. Suppose that at time t=7 authorization ([40,50],A_3) is inserted in the TAB. It is not necessary to run the CSD algorithm, since the upper bound of the time interval of the inserted authorization does not exceed t_max=201 and A_3 already appears in INST-TAB. Since l_min = 1, all the X'_i with 1 ≤ i ≤ 3 will be considered. After the first iteration of step (7) of Algorithm 6.1, the element ⟨A_3, {[40,50]}⟩ has been inserted in TAB^u_EXT. In the second iteration, step (a) of function 'Dred-Ext()' searches for elements of X'_2 whose derivations are affected by an authorization in S_INS over an overlapping set of time instants.
Figure 7: An algorithm for positive authorization insertion (Algorithm 6.1, insertion of an explicit positive authorization).

Figure 8: Function Dred-Ext(S_I, S_D, l).

Figure 9: Function Derive.
The only element that satisfies the condition is added to S_DEL and removed from TAB^u_EXT. In the third iteration, step (c) of function 'Dred-Ext()' is executed and ⟨A_2, {[40,50]}⟩ is added to S_INS. No other changes are made by this iteration. Hence, the algorithm terminates with the resulting updated TAB^u_EXT.
7 Conclusions and future work
In this paper we have presented an authorization model with temporal capabilities. The model introduces the
concept of temporal authorization which is an authorization together with a start and an expiration time. Both
negative as well as positive authorizations can be specified. Derivation rules can be expressed which allow new
temporal authorizations to be derived on the basis of the presence or the absence of other temporal authorizations.
Four different temporal operators can be used in the derivation rules. Administrative authorizations regulate the
insertion and removal of authorizations and rules by users.
We have given the formal semantics of temporal authorizations and derivation rules in terms of a general logic
program. The problem of ensuring the uniqueness of the derived authorizations corresponds to the theoretical
issue of the existence of a unique model for the logic program. We have presented an approach to solve this problem
based on the stratification of authorizations and derivation rules. We have provided an algorithm that determines
whether an authorization base has a stratification and proved that, if the authorization base is stratified, a unique
set of derived authorizations is always computed.
Performance issues have been addressed and a materialization approach in which derived authorizations are
explicitly stored has been proposed. Algorithms for building the materialized set of derived authorizations and
for maintaining them upon execution of administrative operations have been proposed.
The proposed model is currently under implementation to investigate the system's performance for various
characteristics of the authorization base.
We are currently extending this work in several directions. First, decentralized authorization administration
facilities are being added to the model. Second, the model is being extended with periodic authorizations. Such a capability allows one to specify, for example, that a given subject may access a data item every Thursday. Also, access control based on past access histories will be included in the model. Finally, we plan to investigate different
temporal logic formalisms and constraint logic programming as possible foundations for temporal authorization
models.
Acknowledgment
The authors wish to thank Prof. Michael Gelfond for useful discussions on problems related to the semantics of
negation.
--R
A calculus for access control in distributed systems.
On the representation of infinite temporal data and queries (extended abstract).
A temporal access control mechanism for database systems.
A temporal authorization model.
Authorizations in relational database management systems.
Deriving Production Rules for Incremental View Maintenance.
The stable model semantics for logic programming.
Maintaining Views Incrementally.
An authentication service for open network systems.
Discretionary access control in object-oriented databases: Issues and research directions
The well-founded semantics for general logic programs
Authorizations in distributed systems: A new approach.
--TR
--CTR
Elisa Bertino , Silvana Castano , Elena Ferrari , Marco Mesiti, Specifying and enforcing access control policies for XML document sources, World Wide Web, v.3 n.3, p.139-151, 2000
Elisa Bertino , Sushil Jajodia , Pierangela Samarati, A flexible authorization mechanism for relational data management systems, ACM Transactions on Information Systems (TOIS), v.17 n.2, p.101-140, April 1999
Sushil Jajodia , Pierangela Samarati , V. S. Subrahmanian , Eliza Bertino, A unified framework for enforcing multiple access control policies, ACM SIGMOD Record, v.26 n.2, p.474-485, June 1997
Elisa Bertino , Claudio Bettini , Elena Ferrari , Pierangela Samarati, Supporting Periodic Authorizations and Temporal Reasoning in Database Access Control, Proceedings of the 22th International Conference on Very Large Data Bases, p.472-483, September 03-06, 1996
Mizuho Iwaihara , Ryotaro Hayashi , Somchai Chatvichienchai , Chutiporn Anutariya , Vilas Wuwongse, Relevancy-based access control and its evaluation on versioned XML documents, ACM Transactions on Information and System Security (TISSEC), v.10 n.1, p.3-es, February 2007
Mizuho Iwaihara , Somchai Chatvichienchai , Chutiporn Anutariya , Vilas Wuwongse, Relevancy based access control of versioned XML documents, Proceedings of the tenth ACM symposium on Access control models and technologies, June 01-03, 2005, Stockholm, Sweden
H. F. Wedde , Mario Lischka, Role-based access control in ambient and remote space, Proceedings of the ninth ACM symposium on Access control models and technologies, June 02-04, 2004, Yorktown Heights, New York, USA
Michael J. Covington , Wende Long , Srividhya Srinivasan , Anind K. Dev , Mustaque Ahamad , Gregory D. Abowd, Securing context-aware applications using environment roles, Proceedings of the sixth ACM symposium on Access control models and technologies, p.10-20, May 2001, Chantilly, Virginia, United States
Elisa Bertino , Claudio Bettini , Elena Ferrari , Pierangela Samarati, An access control model supporting periodicity constraints and temporal reasoning, ACM Transactions on Database Systems (TODS), v.23 n.3, p.231-285, Sept. 1998
Elisa Bertino , Silvana Castano , Elena Ferrari , Marco Mesiti, Controlled access and dissemination of XML documents, Proceedings of the 2nd international workshop on Web information and data management, p.22-27, November 02-06, 1999, Kansas City, Missouri, United States
Avigdor Gal , Vijayalakshmi Atluri, An authorization model for temporal data, Proceedings of the 7th ACM conference on Computer and communications security, p.144-153, November 01-04, 2000, Athens, Greece
Franois Siewe , Antonio Cau , Hussein Zedan, A compositional framework for access control policies enforcement, Proceedings of the ACM workshop on Formal methods in security engineering, p.32-42, October 30, 2003, Washington, D.C.
Xinwen Zhang , Jaehong Park , Francesco Parisi-Presicce , Ravi Sandhu, A logical specification for usage control, Proceedings of the ninth ACM symposium on Access control models and technologies, June 02-04, 2004, Yorktown Heights, New York, USA
Xinwen Zhang , Francesco Parisi-Presicce , Ravi Sandhu , Jaehong Park, Formal model and policy specification of usage control, ACM Transactions on Information and System Security (TISSEC), v.8 n.4, p.351-387, November 2005
Shermann S. Chan , Qing Li , Jos A. Pino, VideoAcM: a transitive and temporal access control mechanism for collaborative video database production applications, Multimedia Tools and Applications, v.29 n.1, p.29-53, April 2006
Sushil Jajodia , Pierangela Samarati , Maria Luisa Sapino , V. S. Subrahmanian, Flexible support for multiple access control policies, ACM Transactions on Database Systems (TODS), v.26 n.2, p.214-260, June 2001
Nabil Adam , Yelena Yesha, Strategic directions in electronic commerce and digital libraries: towards a digital agora, ACM Computing Surveys (CSUR), v.28 n.4, p.818-835, Dec. 1996
N. R. Adam , V. Atluri , E. Bertino , E. Ferrari, A Content-Based Authorization Model for Digital Libraries, IEEE Transactions on Knowledge and Data Engineering, v.14 n.2, p.296-315, March 2002
Vijayalakshmi Atluri , Avigdor Gal, An authorization model for temporal and derived data: securing information portals, ACM Transactions on Information and System Security (TISSEC), v.5 n.1, p.62-94, February 2002
Jean Bacon , Ken Moody , Walt Yao, A model of OASIS role-based access control and its support for active security, ACM Transactions on Information and System Security (TISSEC), v.5 n.4, p.492-540, November 2002
Vijayalakshmi Atluri , Soon Ae Chun, An Authorization Model for Geospatial Data, IEEE Transactions on Dependable and Secure Computing, v.1 n.4, p.238-254, October 2004 | access control;temporal reasoning;general logic programs;database security;database management;temporal authorization |
627737 | A Guide to the Literature on Learning Probabilistic Networks from Data. | AbstractThis literature review discusses different methods under the general rubric of learning Bayesian networks from data, and includes some overlapping work on more general probabilistic networks. Connections are drawn between the statistical, neural network, and uncertainty communities, and between the different methodological communities, such as Bayesian, description length, and classical statistics. Basic concepts for learning and Bayesian networks are introduced and methods are then reviewed. Methods are discussed for learning parameters of a probabilistic network, for learning the structure, and for learning hidden variables. The presentation avoids formal definitions and theorems, as these are plentiful in the literature, and instead illustrates key concepts with simplified examples. | Introduction
Probabilistic networks or probabilistic graphical models
are a representation of the variables in a problem and
the probabilistic relationships among them. Bayesian net-
works, a popular kind of probabilistic network, have been
used in different applications including fault diagnosis,
medical expert systems, and software debugging [1]. In
this review of learning I focus mainly on Bayesian networks
which are based on directed graphs.
Probabilistic networks are increasingly being seen as a
convenient high-level language for structuring an otherwise
confusing morass of equations. They are an explicit
representation of dependencies or independencies between
variables that ignores the specific numeric or functional
details. Depending on interpretation, they can also represent
causality [2], [3], [4], [5]. Probabilistic networks in
this broad sense were independently developed in a number
of communities [6]: in genetics [7], in social science, in
statistics to factor multi-dimensional contingency tables;
in artificial intelligence to model probabilistic intelligent
systems [8]; and in decision theory to model complex decisions
[9]. An area not considered in this review is graphical
modeling in social science which has had rich development
and application, and strong interactions with the artificial
intelligence and statistical communities [10], [3], [11], [12].
Networks in general play the role of a high-level language,
as is seen in artificial intelligence, statistics, and to a lesser
degree in neural networks (where biological views offer an
alternative interpretation). See the survey by Ripley [13].
Networks are used to build complex models from simple
components. Networks in this broader sense include prob-
abilistic graphical models of the kind considered here, as
well as neural networks [14], and decision trees [15]. Probabilistic
networks have the distinguishing characteristic that
they specify a probability distribution-they therefore have
a clear semantics that allow them to be processed in order
to do diagnosis, learning, explanation and many other
inference tasks necessary for intelligent systems. For in-
stance, a new research area considered briefly in the last
section is where a probabilistic network is the input specification
for a compiler that generates a learning algorithm.
This compilation is made easier because the network defines
a probability distribution.
Why is learning probabilistic networks of particular in-
terest? Most of the earlier work in artificial intelligence on
building expert systems involved a tedious process of manual
knowledge acquisition [16]. This tedium spurred two
developments that more or less continued independently
until recently: machine learning which originally focused
on learning rule based systems [17], [18], and uncertainty
in artificial intelligence which focused on developing coherent
probabilistic knowledge structures whose elicitation
suffered less pitfalls. For instance, Henrion and Cooley give
a detailed case study [19], and Heckerman developed similarity
networks [20] which allow a complex network to be
elicited more simply than one would expect. The interest
in artificial intelligence in learning of probabilistic networks
is a result of the marriage of machine learning and uncertainty
in artificial intelligence.
Neural network learning has developed concurrently,
based almost exclusively on learning from data. The networks
in the computational side of neural networks (in-
terested in information processing as opposed to biological
modeling) have increasingly been moving in the direction
of probabilistic models. Therefore, there is some overlap
between learning of probabilistic networks and neural networks
[21], [22], [23]. In statistics, many general inference
techniques [24], [25], [26] have been developed that have
been applied to learning of probabilistic networks. Computer
scientists, for instance in artificial intelligence, have
often contributed more in terms of combining and scaling
up these techniques, and generalizing them to classes
of representations. More examples of the variety of probabilistic
networks and their applications to learning are given
in [23], [27].
Learning of probabilistic networks includes a number of
complications: learning the structure, the parameters given
a structure, hidden variables whose values are never present
in the data, and values of a variable that are sometimes
missing. This review describes some current literature addressing
these various tasks, reviews the major methodolo-
gies applied, and describes some of the major algorithms.
Available software for learning Bayesian networks is not
discussed in this review. An extensive list of software for
general inference on probabilistic networks is maintained
on the World Wide Web [28]. A list of relevant online tutorial
articles and slides, several of those mentioned here, is
also available at [29]. Another area not considered in this
review is the empirical evaluation of learning algorithms for
probabilistic networks. Empirical evaluation of learning algorithms
is fraught with difficulties [30]. Notwithstanding,
interesting empirical studies appear in [31], [32], [33], [34],
[35], [36], [37], [38].
II. An introduction to probabilistic networks
This section introduces Bayesian networks, and some
more general probabilistic networks. For tutorial articles
on Bayesian networks see [39], [40], [41]. For an introduction
from the artificial intelligence perspective, see [8].
For a statistical introduction to graphical models in general
see [42], and a tutorial introduction see [43]. For an
introduction to Bayesian networks and Bayesian methods
for learning them see [44]. Other kinds of networks include
Markov (undirected) networks and Markov random fields
are considered widely in image analysis, spatial statistics
[45] and neural networks [14].
This section introduces Bayesian networks with a simple
example, and then illustrates the richness of the representation
with additional examples. Consider Bayesian networks
on discrete variables. In their simplest form these consist
of a network structure and its associated conditional probability
tables. The example below is adapted from [39].
A. The structure, S
The network structure is represented by a Directed
Acyclic Graph (DAG) as given in Fig. 1. This network
Occupation Climate
Age
Disease
Symptoms
Fig. 1. A simple Bayesian network
is by definition equivalent to the following functional decomposition for the joint probability (full variable names have been abbreviated):

p(Age, Occ, Clim, Disease, Symptoms) =
  p(Age) p(Occ) p(Clim) p(Disease | Age, Occ, Clim) p(Symptoms | Disease),   (1)

which is in turn equivalent to the following set of conditional independence statements:

Occ ⊥⊥ Age,
Clim ⊥⊥ {Age, Occ},
Symptoms ⊥⊥ {Age, Occ, Clim} | Disease.

TABLE I
Two of the five probability tables

p(Age):   Age ≤ 45   0.54
          Age > 45   0.46

p(Symptoms | Disease):
  Disease:       stomach ulcer   myocardial infarction   neither
  stomach pain       0.80               -                  -
  chest pain         0.15              0.90               0.10
  neither             -                 -                  -
Here, A ⊥⊥ B | C reads that A and B are independent given
C [8], [46]. Take the node for Symptoms as an exam-
ple. This node only has one parent, Disease, but three
other ancestors, Age; Occ; Clim. From this one reads the
assumption that the symptoms are only dependent on
age, occupation and climate indirectly through their influence
on the disease. This network substructure, by
definition, translates into the third independence statement
above. Bayesian networks therefore simplify the
full joint probability distribution for a set of variables, and show independencies between the variables.
B. The conditional probability tables, parameters θ
Conditional probability tables are needed to specify a
probability distribution based on the network. For the
structure in Fig. 1, we see from Equation (1) that the tables
for p(Age), p(Occ), p(Clim), p(Disease | Age, Occ, Clim), and p(Symptoms | Disease) need to be specified. These tables may be specified in any form: implicitly by some parametric probability distribution, or explicitly as tables. Two such tables are given in Table I for p(Age) and p(Symptoms | Disease). Notice that Age, while being a
real valued variable, is discretized to create a binary vari-
able. Symptoms is a three valued discrete variable, as is
Disease. Without the assumptions of the network which
leads to Equation (1), instead of five smaller tables, one
large joint table on all five variables would be required.
Networks provide a way of simplifying the representation
of a probability distribution.
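As a concrete illustration of how the factorization in Equation (1) is used, the sketch below evaluates the joint probability from the five tables. The tables for p(Occ), p(Clim) and p(Disease | Age, Occ, Clim), and the entries missing from Table I, are invented for illustration; only p(Age) and parts of p(Symptoms | Disease) come from the paper.

p_age = {"<=45": 0.54, ">45": 0.46}
p_occ = {"office": 0.7, "outdoor": 0.3}           # assumed values
p_clim = {"temperate": 0.8, "tropical": 0.2}      # assumed values

def p_disease(age, occ, clim):
    # Assumed table; for brevity the same distribution is returned for every parent configuration.
    return {"stomach ulcer": 0.10, "myocardial infarction": 0.05, "neither": 0.85}

p_symptoms = {  # p(Symptoms | Disease); entries missing from Table I are assumed
    "stomach ulcer":         {"stomach pain": 0.80, "chest pain": 0.15, "neither": 0.05},
    "myocardial infarction": {"stomach pain": 0.05, "chest pain": 0.90, "neither": 0.05},
    "neither":               {"stomach pain": 0.10, "chest pain": 0.10, "neither": 0.80},
}

def joint(age, occ, clim, disease, symptoms):
    """p(Age, Occ, Clim, Disease, Symptoms) computed via Equation (1)."""
    return (p_age[age] * p_occ[occ] * p_clim[clim]
            * p_disease(age, occ, clim)[disease]
            * p_symptoms[disease][symptoms])

print(joint("<=45", "office", "temperate", "stomach ulcer", "stomach pain"))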
C. Some extensions
While the variables above are treated as simple discrete
variables, and the conditional probabilities in the example
above are simple tables, in general a variety of variables
and functions can be used on Bayesian networks. Variables
could be real valued, integer valued, or multivariate.
A real-valued variable may have a probability density function
such as a Gaussian. Instead of giving a probability table
for it as above, the mean and variance of the Gaussian
would be given as functions of the parent variables. These
constructions allow Bayesian networks to represent standard
statistical models such as regression with Gaussian
error, and log-linear models [42]. Furthermore, graphical
models are not restricted to be directed. Undirected arcs
can be used in problems such as diagnosis where association
between symptoms might be represented, and image
analysis, for associations between regions of an image. The
combination of directed and undirected graphical models,
developed by Lauritzen and Wermuth [47], forms a rich representation
language. For an introduction to these combinations
see [48]. As an example of this richness, I consider
feed-forward neural networks next.
D. Connections to feed-forward neural networks
Fig. 2 shows the transformation of a feed-forward neural
network predicting real valued variables into a probabilistic
network.

Fig. 2. A feed-forward network (a) and a corresponding Bayesian network (b), with Sigmoid interior nodes and Gaussian output nodes.

Fig. 2(a) shows a feed-forward network in
the form used in [14], and Fig. 2(b) shows a corresponding
probabilistic network with a bivariate Gaussian error distribution
grafted onto the output nodes of the network. The
feed-forward neural network has the three lower nodes filled
in to indicate they are input nodes. The bivariate Gaussian
has been represented on the probabilistic network as
two nodes with a directed arc between them; an equivalent
representation would use an undirected arc. The transformation
into the Bayesian network needs to be qualified in
several ways. Notice that the interior nodes in the Bayesian
network are labeled as Sigmoids, the transfer function typically
used in a feed-forward network. The nodes are also
double ovals rather than single ovals. This is short-hand to
say that the variable is a deterministic function of its in-
puts, rather than a probabilistic function. Neural networks
usually have a weight associated with each arc, giving in
some sense the strength of the association. In probabilistic
networks, the arc indicates some form of probabilistic
dependence or correlation, and any weights are instead
associated with each node, and are used to parameterize
the functions at the node instead. Furthermore, the probabilistic
network explicitly includes the measured output variables, whereas the neural network only includes the predicted output variables m_1 and m_2. The probabilistic network therefore explicitly represents
the error function, whereas the neural network leaves
it unspecified. In summary, the Bayesian network indicates that the output variables have a Gaussian distribution based on the variables m_1 and m_2, which themselves are deterministic Sigmoid functions of the hidden variables.

Fig. 3. A simple clustering model (hidden node Class).
More sophisticated dynamic networks are the recurrent
neural networks [49]-roughly, these might be thought of as
a flexible, non-linear extension to probabilistic models like
Kalman filters and hidden Markov models. While these
networks are based on feed-forward neural networks, the
relationship of these to probabilistic networks is still under
development.
E. Connections to statistics and pattern recognition
Whittaker [42], and Wermuth and Lauritzen [50] provide
a rich set of examples of modeling statistical hypotheses
using graphical models, some using mixed graphs incorporating
both undirected and directed networks.
Consider clustering, a style of unsupervised learning. A
Bayesian network can be drawn for a clustering algorithm
such as Autoclass [51], where it is assumed that the observed
variables are independent given the hidden class. In
clustering, the cases are to be grouped in some coherent
manner. The probabilistic network in Fig. 3. suggests a
way of doing this. A discrete variable class is introduced
that is termed a latent or hidden variable. Its value never
appears in the data, and it indicates the unknown class to
which each case belongs. The advantage of this construction
is that once the class value is known for a case, the
probability distribution becomes a simple one with A, B
and C independent, needing only 3 real valued parameters
to define it. This model is called a mixture model because
the joint probability is a mixture of the data obtained for
the different classes. For a visual illustration of the power
of mixture models, consider real valued variables X, Y. A bivariate Gaussian places an oval shaped cloud of points centered at a point. A mixture of four bivariate Gaussians, as illustrated in Fig. 4, has four such clouds of points. When the mixture contains many classes, the density can become quite complex.
Popular models used in pattern recognition, speech
recognition and control, the Kalman filter and the hidden
Markov model (HMM) can also be modeled with Bayesian
networks [52], [53]. A simple hidden Markov model is given
in Fig. 5. A sequence of observations are made, such as
phonemes in an utterance. These are indicated with the
shaded nodes observe_1, ..., observe_i, observe_{i+1}. Shading indicates the variables have been observed. The observations are dependent on the hidden states hidden_1, ..., hidden_i, hidden_{i+1} of the underlying system. If the observations
are phonemes, then the hidden states may be
letters of the underlying word being spoken, which are of
course hidden from the observer. These kinds of models are
Fig. 4. Data from a 2-dimensional mixture of Gaussians.

Fig. 5. A simple hidden Markov model (observed nodes observe_1, ..., observe_i, observe_{i+1}; hidden nodes hidden_1, ..., hidden_i, hidden_{i+1}).
dynamic, in the sense that the network is a set of repeated
units that are expanded in time, as for instance used in
forecasting [54].
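The factorization implied by the network in Fig. 5 can be sketched directly: p(hidden_1..n, observe_1..n) = p(h_1) ∏_i p(h_i | h_{i-1}) p(o_i | h_i). The transition and emission tables below are invented for illustration only.

initial = {"a": 0.6, "b": 0.4}                              # p(hidden_1)
transition = {"a": {"a": 0.7, "b": 0.3},                    # p(hidden_i | hidden_{i-1})
              "b": {"a": 0.2, "b": 0.8}}
emission = {"a": {"x": 0.9, "y": 0.1},                      # p(observe_i | hidden_i)
            "b": {"x": 0.3, "y": 0.7}}

def hmm_joint(hidden, observed):
    """Probability of one complete hidden/observed path under the HMM factorization."""
    prob = initial[hidden[0]] * emission[hidden[0]][observed[0]]
    for prev, cur, obs in zip(hidden, hidden[1:], observed[1:]):
        prob *= transition[prev][cur] * emission[cur][obs]
    return prob

print(hmm_joint(["a", "a", "b"], ["x", "x", "y"]))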
F. Causal networks
A useful trick used in the elicitation of Bayesian networks
is to assume the arcs represent causality. Consider the net-work
from [39], reproduced in Fig. 1. One could imagine
the environmental variables causing the disease, which in
turn causes the symptoms, and this is a nice way of explaining
this particular graph to the expert. When Bayesian
networks have this interpretation, they are sometimes referred
to as causal networks [2], [3], [4], [55]. Causality is of
fundamental importance in science because of the notion
of intervention [55], [5]. While identifying the observed
probabilities relating smoking, sex, and lung cancer is an
interesting task in itself, the real goal of such a study is
to establish that the act of changing someone's smoking
habits will change their susceptibility to lung cancer. This
kind of action is an external intervention on the variables.
A causal model is expected to be stable under acts of external
conclusions drawn from them are still
valid. In the probabilistic interpretation of networks used
elsewhere in this review, there is an assumption that cases
are got through passive observation of independently and
identically distributed examples. Networks can be used to
represent causality in this manner, but these networks have
a different interpretation to the probabilistic networks considered
here. Causality, networks and learning causality are
not covered in this review. Learning and identification of
causality is considered in [56], [3], [57], [58], [59].

TABLE II
A sample database in a relational table (columns: case, A, B, C)
III. Some simple examples, and some basic
concepts
As an example of learning, consider data about three binary
variables, A; B; C. This data would take the form of
a table, as given in the simple example in Table II. The
4 rows in the table give 4 cases, which might be different
patients. More typically, hundreds or thousands of cases
would exist in a relational database. In Table II, each case
has three variables measured and their values recorded.
The values for each variable are either true, indicated by
T or false, indicated by F . A variable could also have the
value "?". This represents a missing value, which means
the value for the variable is unknown. Missing values are
common in some domains, especially where variables are
expensive to measure.
A. The hypothesis space
Some example Bayesian networks that might match this
problem are given in Fig. 6.

Fig. 6. Some Bayesian networks on three variables, A, B, C.

First consider structure (a), which I will denote S_a; it represents that the three variables are independent. For this structure, probability tables for p(A), p(B) and p(C) are needed. Since the variables are binary, these three probabilities are specified by three real numbers between 0 and 1. Denote these tables by the parameter set θ_a ∈ ℝ^3. For structure (c), denoted S_c, probability tables for p(A), p(B) and p(C|B), denoted θ_c, are needed. This parameter set is in ℝ^4 because, while p(A) and p(B) are specified by one value each, p(C|B) is specified by two values, for instance p(C = T | B = T) and p(C = T | B = F). Consider the conditional probability distributions that complete a network S_m. The probability table for p(X|Y) will be a subset of a real space whose dimension grows with the number of values of the variable X and of its parents Y. The fully connected network matching Table II, where every two variables are connected, will have
7 real values, where 7 is calculated from 2^3 − 1. So a network of k binary variables needs between k and 2^k − 1 real values to specify its conditional probability tables. A real-valued node whose conditional probability distribution is a Gaussian with k parents will require k(k + 1)/2 real values
to specify the mean and the covariance matrix. In gen-
eral, the real values used to specify conditional probability
tables either explicitly (in a table) or implicitly (in some function) are referred to as the parameters of the network.
A simple counting argument shows there are 25 different
networks on just the three variables in Fig. 6. However, it
happens that several of these are equivalent in the sense
that they represent equivalent independence statements.
For these networks there are only 11 different equivalence
classes of networks on three variables. For instance, consider
the last three networks given in Fig. 6, (d), (e) and (f).
Each of these networks corresponds to a functional decomposition of the joint probability (labeled d, e and f respectively). Some basic algebra using the laws of conditional probability shows that the Bayesian networks (d) and (e) have
equivalent functional decompositions and therefore equivalent
independence properties, but the Bayesian network
for (f) is different. The structures S d and S e are said to
be equivalent probability models. Properties of this equivalence
relation have been worked out in general for Bayesian
networks [2] (this is discussed further in Section V). Since
there are k(k − 1)/2 different undirected arcs one can place on a network of k variables, that means there are 2^{k(k−1)/2} different undirected networks on the k variables. If the variables are ordered ahead of time so that an arc can only point towards a variable later in the ordering, then there are 2^{k(k−1)/2} different directed networks. There would be
many more if the ordering is allowed to vary (although
some will be equivalent probability models).
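The counting claims above (25 labelled DAGs on three variables, and 2^{k(k−1)/2} networks for a fixed ordering) are easy to check by brute force for k = 3. The sketch below is a naive enumeration written for this three-variable example, not an algorithm from the cited literature.

from itertools import product

nodes = ["A", "B", "C"]
arcs = [(i, j) for i in nodes for j in nodes if i != j]   # 6 possible directed arcs

def acyclic(chosen):
    arcset = set(chosen)
    # On three nodes, any directed cycle has length 2 or 3.
    if any((j, i) in arcset for (i, j) in arcset):
        return False
    for x, y, z in [("A", "B", "C"), ("A", "C", "B")]:
        if {(x, y), (y, z), (z, x)} <= arcset:
            return False
    return True

count = sum(1 for mask in product([0, 1], repeat=len(arcs))
            if acyclic([a for a, keep in zip(arcs, mask) if keep]))
print(count)                 # 25 labelled DAGs on three variables
print(2 ** (3 * 2 // 2))     # 8 = 2^(k(k-1)/2) directed networks for a fixed ordering, k = 3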
B. The sample likelihood
The maximum likelihood approach is the starting point of
most statistical theory, so it is introduced here. First, fix a
structure Sm and its parameters ' m for the model matching
the problem of Table II, and calculate the likelihood of the
sample as follows:
Y
p(case
where the case probabilities p(case are calculated
using the probability tables given by ' m . This formulation
assumes that each case is independent of the others given
the "true" model Sm that is they are independently
and identically distributed. The "true" model is the unknown
model believed to represent the process generating
the data, and is assumed to exist for purposes of modeling
(perhaps a reasonable approximation exists, perhaps not).
For instance, for structure S_d from Fig. 6, p(case_1 | S_d, θ_d) is a product of three conditional probabilities, one per variable. The three terms on the right of this equation are found from the corresponding entries in the probability tables θ_d.
This quantity Equation (2) is called the sample likelihood.
The maximum likelihood approach for fixed structure Sm
chooses the parameters θ_m to maximize the sample likelihood. It is important to notice the structure of the maximum likelihood calculation. The probability of the value of A appearing in the likelihood for case 1 is a function of the parameters used in the conditional probability table for the variable A. The parameters θ_d for the Bayesian network structure S_d can be partitioned into the different parameters at each node (A, B and C):

θ_d = (θ_{d,A}, θ_{d,B}, θ_{d,C}),

where θ_{d,B} represents the parameters for the conditional probability table for the variable B. The sample likelihood now becomes a product with a separate factor for each of θ_{d,A}, θ_{d,B}, and θ_{d,C}:

p(sample | S_d, θ_d) = L_A(θ_{d,A}) L_B(θ_{d,B}) L_C(θ_{d,C}),      (3)

where each factor collects the terms of Equation (2) that depend only on the parameters at the corresponding node. Maximum likelihood optimization of θ_d can therefore be decomposed into maximum likelihood optimization of these three parameter sets individually, giving three local maximum likelihood problems,
one for each node. The sample likelihood is said to decompose
for Bayesian networks which have neither deterministic
variables, missing or hidden values, nor undirected arcs. This decomposition also applies
as a network is incrementally modified, for instance during
search [23], [60].
If all the parameters θ_d describe probability tables for binary variables, as in Table II, then Equation (3) corresponds to a product of binomials. For instance, the factor for the variable A is

θ_{d,A}^{pA} (1 − θ_{d,A})^{nA},

where the counts pA and nA give the occurrences of A = T and A = F respectively in the data. As is the case for the binomial, the maximum likelihood is given by the observed frequency, θ̂_{d,A} = pA / (nA + pA). Likewise for the other variables and all the entries in the other tables.
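A short sketch of this counting computation for the independence structure of Fig. 6(a) follows. The four cases used stand in for Table II, whose rows did not survive extraction; only the fact that every case has C = T is taken from the text, and the values for A and B are assumed.

sample = [
    {"A": True,  "B": True,  "C": True},
    {"A": False, "B": True,  "C": True},
    {"A": True,  "B": False, "C": True},
    {"A": False, "B": False, "C": True},
]

def ml_estimate(var):
    # theta_hat = pX / (pX + nX): the observed frequency of var = T.
    p = sum(1 for case in sample if case[var])
    return p / len(sample)

print({v: ml_estimate(v) for v in ("A", "B", "C")})   # {'A': 0.5, 'B': 0.5, 'C': 1.0}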
An important and common assumption used in computing
the sample likelihood is the complete data assumption.
This holds when no case has missing values. This can be an
unrealistic assumption. For instance, if data comes from a
historical medical database it is likely that expensive measurements
would not have been taken and recorded if they
were not considered critical to the diagnosis. The complete
data assumption simplifies calculation of the sample likelihood
for a network. For instance, consider the model for
Fig. 6(f), and consider the likelihood for case 3. Suppose the variable C had a missing value, "?". Then

p(case_3 | S_f, θ_f) = Σ_{C ∈ {T,F}} p(A, B, C | S_f, θ_f),

evaluated at the observed values of A and B in case 3. As before, the three terms inside the summation are simply the corresponding entries in the probability tables θ_f. However, notice the summation outside this. When
there are many of these summations, there is no longer a
simple closed form solution for maximizing the sample like-
lihood. Furthermore, the optimization problem no longer
decomposes, as was demonstrated with Equation (3). Hidden
variables lead to the same problem, and violate the
complete data assumption, because the summations above
always appear in the sample likelihood.
A concept central to these and subsequent techniques is
the family of statistical distributions known as the exponential
family [26], [63]. An introduction in the context of
probabilistic networks appears in [23]. This family, which
includes the Gaussian, the Bernoulli, and the Poisson, has the general functional form

p(x | θ) = h(x) exp( η(θ) · t(x) − A(θ) ),

with sufficient statistics t(x), which lends itself to many convenient computational properties
including compact storage of the training sample,
simple calculation of derivatives, and fitting guaranteed to
be linear in the size of the sample. One needs to become familiar
with these features of the exponential family in order
to understand many of the recent developments in learning
probabilistic models. Many of the properties of the sample
likelihood, the impact of complete data assumption, exact
solutions to the maximum likelihood equations and so
forth follow directly from standard results for the exponential
family-the effort is usually expended in formulating
the probabilistic network as a member of the exponential
family, and then the standard results for exponential family
follow [26], [63].
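To make the "compact storage" point concrete, here is a minimal sketch for the Bernoulli member of the family; the toy data are assumptions.

import math

def bernoulli_suff_stats(xs):
    return sum(xs), len(xs)              # (number of ones, sample size)

def log_likelihood(theta, suff):
    ones, n = suff
    return ones * math.log(theta) + (n - ones) * math.log(1.0 - theta)

data = [1, 0, 1, 1, 0, 1]                # toy sample, values invented
stats = bernoulli_suff_stats(data)       # (4, 6) -- all the data that needs to be kept
print(log_likelihood(0.5, stats))        # same value as summing over all 6 cases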
C. Basic statistical considerations
Suppose the structure Sm of a network on discrete or
Gaussian variables is fixed. Then it remains to learn the parameters θ_m. For the probability tables considered earlier
and with enough data, the sample likelihood is a well-behaved
differentiable function of its parameters. This is
often called a parametric problem. A non-parametric prob-
lem, in contrast, has potentially an infinite number of pa-
rameters, or no coherent likelihood function is defined so
it is un-parameterized. This is not always clear from the
literature because in some cases a model is presented in a
non-parametric manner, whereas it can be given a parametric
basis (classification trees are an example [64], [15]).
Now consider the problem of learning the structures as well,
and remember there are a finite number of them. A fixed
network structure has its own distinct set of parameters.
When allowing a set of different structures, each with its
own parameters, the full probability density has no single,
natural, global real-valued parameterization, but has different
parameterizations depending on which structure is
used. Such problems are sometimes referred to as semi-
parametric, but the same qualifications apply. Of course, a
clever mathematician can coerce a full specification of the
network and its parameters into some single real number.
However, this would be an artificial construct with complex
non-continuous derivatives. Furthermore, for the structures
of Fig. 6, the probability distributions represented
by structure S a are a set of measure zero in the probability
distributions with structure S b , which themselves are a
set of measure zero within S_e.¹ By offering these structures
as valid alternatives, the set of measure zero is not to
be ignored. I will refer to this combination of detail (for a given structure there is a neat parametric model, and structures form nested hierarchies with some being a subset of measure zero of others) as the parametric structure of the
problem.
Learning network structures from data is sometimes
termed a model selection problem in the sense that each
network corresponds to a distinct model, and one is to be
selected based on the data. Both non-parametric methods
and model selection are active research areas in modern
statistics [65], [25], [66]. More recently, researchers
in statistics have focused on model uncertainty because it
is accepted that selection of a single "best" model from
an exponential-sized family of models-as is the case for
learning Bayesian networks-is often infeasible [67], [68],
[25]. Rather than selecting a single best model, one looks
at a subset of "reasonable" models, attempting to quantify
uncertainty about them.
D. The complexity of learning
Network learning involves choosing from, possibly, an
exponential number of network structures, and giving values
to, possibly, an exponential number of real values. Why
is this a problem? Basic results from computational learning
theory show how difficult this can be, both in terms of
the number of cases required for training, and the time or
space required for the optimization. These two aspects are
referred to as sample complexity and computational complexity
respectively.
In learning there are roughly three distinct phases as
more cases are obtained to learn from: the small sample,
medium sample, and large sample phases. Initially with
1 For the purposes of this paper, a subspace has measure zero if its
integrated area relative to the full space is zero. Usually this means
it is a space of lower dimension. A line has measure zero in a finite
plane, but a rectangle on the finite plane has non-zero measure. A
two-dimensional slice of a cube has measure zero in the full three-dimensional
cube.
a small sample, learning corresponds to going with one's
biases or priors. With a large sample, learning close to
the "true" model is possible with high probability, where
"close" is measured according to some reasonable utility
criteria such as mean-square error or Kullback-Leibler dis-
tance. This learning should be possible by many reasonable
algorithms that asymptotically converge to the "truth". In
between the small and large sample phase is a medium
sample phase where some algorithms should perform better
than others, depending on how well their particular
biases align with the "true" model. I use the term biases
here in a loose sense. As more cases are obtained to learn from, performance may increase gradually or sometimes in
jumps as the algorithm better approximates the "truth".
This is illustrated by the learning curve in Fig. 7 which
plots error of some idealized algorithm as it gains more
cases (represented by the sample size N).

Fig. 7. An idealized learning curve: error against sample size N, with the small, medium and large sample phases marked and the Bayes optimal error as the horizontal asymptote.

The asymptotic
error in this example approaches the Bayes optimal error
rate from above. Without prescience, there will be a lower
bound on what error rate can be achieved by any algorithm
(for instance, in predicting coin tosses from a fair coin, the
Bayes optimal error rate is 50%). The theory of learning
curves is developed, for instance, in [69]. Suppose the hypothesis space is a family of probabilistic networks (S_m, θ_m) for m = 1, ..., K. Results from computational learning theory [70] show that under many conditions the transition to the large sample phase is made when the sample size exceeds a bound given by the sum of two terms: one growing with the number of parameters of a structure, and one with the logarithm of the number of structures.
This sample size is the sample complexity. For the discrete
Bayesian networks discussed earlier, the first term will be
exponential in k (the number of variables), and the second
term quadratic.
Of course, this ignores the issue of computational com-
plexity. Given that there are an exponential number of
networks, it should not be surprising that in some formula-
tions, learning a Bayesian network is an NP-complete problem
[71], [72], [36]. In some formulations, learning is viewed
as a maximization problem: find the network maximizing
some quality measure. As is the case for the sample likelihood, these scores usually decompose, often because they are based on the sample likelihood; see for instance [61], [23], [37], [62]. The optimization problem is to find a network S on variables X maximizing a function of the form

Σ_{x ∈ X} quality(x | parents_S(x), sample),
where the network S influences the quality measure
through the parents function, parents S (:), and the quality
measure may be a log-probability, log-likelihood, or a
complexity measure (to be minimized). These measures
are discussed further in Section VIII. This maximization
problem is an instance of a maximum branchings problem
(see the discussion in [37]) which in general (allowing
any quality function at the nodes) is NP-complete even if
variables in the network are restricted to have at most 2 par-
ents. It is polynomial if each variable has at most 1 parent.
Another variation of this problem, discussed in [37], is to
find the best l networks in terms of the quality measure.
For Bayesian networks, this search problem is also confounded
because of the existence of equivalent networks.
Nevertheless, experience with existing systems shows that
standard search algorithms such as greedy algorithms and
iterated local search algorithms often perform well. Basic
greedy search is explored in [35]. Furthermore, the search
problem adapts nicely to branch and bound using some
standard methods from information theory to provide the
bounds [73], and savings over an exhaustive search appear
to be many orders of magnitude.
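The following sketch illustrates the kind of greedy search just described, using a BIC-style decomposable score computed from counts. The toy data, the particular score, and the tie-breaking are illustrative choices, not those of the systems cited above.

import math
from itertools import product

data = [  # toy binary sample; values invented
    {"A": 1, "B": 1, "C": 1}, {"A": 1, "B": 1, "C": 0},
    {"A": 0, "B": 0, "C": 0}, {"A": 0, "B": 0, "C": 1},
    {"A": 1, "B": 1, "C": 1}, {"A": 0, "B": 0, "C": 0},
]
variables = ["A", "B", "C"]
N = len(data)

def node_score(x, parents):
    """BIC-style local score: log-likelihood of x given its parents minus a penalty."""
    loglik = 0.0
    for pa_vals in product([0, 1], repeat=len(parents)):
        rows = [c for c in data if all(c[p] == v for p, v in zip(parents, pa_vals))]
        n = len(rows)
        if n == 0:
            continue
        ones = sum(c[x] for c in rows)
        for count in (ones, n - ones):
            if count > 0:
                loglik += count * math.log(count / n)
    return loglik - 0.5 * math.log(N) * (2 ** len(parents))

def creates_cycle(parents, u, v):
    """Would adding the arc u -> v create a directed cycle (does v already reach u)?"""
    def reaches(a, b):
        if a == b:
            return True
        children = [c for c in variables if a in parents[c]]
        return any(reaches(c, b) for c in children)
    return reaches(v, u)

parents = {v: [] for v in variables}
scores = {v: node_score(v, parents[v]) for v in variables}
improved = True
while improved:
    improved = False
    best = None
    for u in variables:
        for v in variables:
            if u == v or u in parents[v] or creates_cycle(parents, u, v):
                continue
            gain = node_score(v, parents[v] + [u]) - scores[v]
            if gain > 1e-9 and (best is None or gain > best[0]):
                best = (gain, u, v)
    if best is not None:
        _, u, v = best
        parents[v].append(u)
        scores[v] = node_score(v, parents[v])
        improved = True

print(parents)   # with this toy data: {'A': [], 'B': ['A'], 'C': []}

Because the score decomposes, only the local score of the node gaining a parent has to be recomputed at each step, which is what makes greedy and branch-and-bound searches practical.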
IV. Parameter fitting
For a fixed graphical structure, Sm , the parameter fitting
problem is to learn the parameters ' m from data. The
mathematics of fitting parameters to a Bayesian/Markov
network is an extension of standard fitting procedures in
statistics. Fitting algorithms exist for Bayesian networks
and more general probabilistic networks in the cases of
complete and missing data [74], [42], [75], [76]. See Whittaker
for a more extensive discussion and review of methods
and theory. In the case of a Bayesian network with
complete data, where the distributions at the nodes are
discrete probability tables or Gaussians, fast closed form solutions exist that can be computed in time proportional to the size of the data set. As an example, consider fitting the model of Fig. 6(a) to the data in Table II. Each of the probabilities φ in this model occurs in the sample likelihood in the form φ^n (1 − φ)^m, which has a maximum at φ̂ = n/(n + m). The maximum likelihood solution for the parameters is therefore equal to the observed frequency of the relevant probabilities.
In other cases, a variety of iterative algorithms exist that
make use of these fast closed form solutions as a subrou-
tine. Some common techniques I shall not explain here are
the expectation maximization (EM) algorithm [77] and the
iterative proportional fitting (IPF) algorithm [75]. Once
again, the exponential family is important here.
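As a small illustration of EM (not of any particular cited implementation), the sketch below fits the hidden-class model of Fig. 3 to toy binary data; the data, the number of classes, and the initialization are all assumptions.

data = [(1, 1, 1), (1, 1, 0), (0, 0, 0), (0, 0, 1), (1, 1, 1), (0, 0, 0)]
pi = 0.5                                       # p(Class = 1)
theta = [[0.3, 0.3, 0.4], [0.7, 0.7, 0.6]]     # theta[c][j] = p(X_j = 1 | Class = c)

def case_likelihood(case, c):
    prob = pi if c == 1 else 1.0 - pi
    for x, t in zip(case, theta[c]):
        prob *= t if x == 1 else 1.0 - t
    return prob

for _ in range(50):                            # EM iterations
    # E-step: posterior responsibility of class 1 for each case.
    resp = [case_likelihood(case, 1) /
            (case_likelihood(case, 0) + case_likelihood(case, 1)) for case in data]
    # M-step: weighted versions of the closed-form complete-data estimates.
    pi = sum(resp) / len(data)
    for c, w in ((1, resp), (0, [1.0 - r for r in resp])):
        total = sum(w)
        theta[c] = [sum(wi for wi, case in zip(w, data) if case[j] == 1) / total
                    for j in range(3)]

print(round(pi, 2), [[round(t, 2) for t in row] for row in theta])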
Maximum likelihood approaches suffer from so-called
sparse data because, for instance, they may become undefined
whenever a table of counts totals to zero. Consider the model of Fig. 6(e) and consider estimating a conditional probability whose conditioning value has no instances in the sample; the maximum likelihood estimate for this probability is then undefined since the sample likelihood does not exist. For k binary variables and a fully connected Bayesian network (where every two variables are directly connected), one clearly needs more than 2^k cases in the sample for the maximum likelihood estimate to be defined.
A related problem is the problem of over-fitting. Suppose
sparse data is not a problem. Observe the maximum likelihood estimate above for p(C = T). This was equal to 1.0 because in the data, all observed cases of the variable C had the value T. Now this is based on four cases. It would seem reasonable that the "true" value could be 0.9, and by chance all cases have T in the data. The estimate 1.0 must be an upper bound on the probability. By definition, the maximum likelihood value (1.0^4) must be an over-estimate of the "true" sample likelihood (0.9^4). As the sample size
gets larger and larger, the over-estimate will gradually converge
to the "true" value; assured in most cases by large
sample properties of maximum likelihood theory (for an
introduction see [78]). However, for small samples, the
maximum likelihood value may be much larger than the
"true" likelihood, and in general the maximum likelihood
solution will attempt to fit the data as well as possible-for
instance, regression using 10 degree polynomials will fit 11
data points exactly, whereas for 11 data points one might
more reasonably attempt to fit a 2 or 3 degree polynomial
and assume the remaining lack of fit is due to noise in the
data. The maximum likelihood parameter values are therefore
said to over-fit the data. This is a well-known problem
in supervised learning, for instance as addressed by pruning
methods for classification trees [64], [15].
The Bayesian Maximum a-posterior (MAP) approach extends
the maximum likelihood approach by introducing a
prior probability. Good introductions to this simplified
Bayesian approach and some of its extensions can be found
in [79], [80]. The approach places a probability distribution
on the unknown parameters θ and reasons about them using the axioms of probability theory. The likelihood is augmented with a prior that gives the initial belief about the parameters before seeing any data. Consider just the column of data
for A in Table II, and consider θ_A, the parameter giving the probability of A. By Bayes Theorem,

p(θ_A | sample) = p(sample | θ_A) p(θ_A) / p(sample),

where the numerator contains the sample likelihood and the prior, and the denominator is obtained by integrating the numerator,

p(sample) = ∫ p(sample | θ_A) p(θ_A) dθ_A.
Again, these computations become simplified in some cases
of the exponential family, mentioned previously, Gaussians,
Bernoulli, and so forth. An example is given in Fig. 8.

Fig. 8. Priors, likelihoods and posteriors for θ_A.

The left graph shows two different priors. These priors are Beta distributions with parameters α marked on the plot. The second prior has a mild preference for θ to be about 0.625, whereas the other prior is agnostic. The middle graph shows the likelihoods for 3 different samples (0, 1 or 2 counts of A = T in a sample size of 4), and the right graph shows the (2 × 3 = 6) resulting posteriors. The cluster of three peaks at the top are the three posteriors for one of the priors; note how the posteriors for the agnostic prior are more influenced by the likelihood, whereas the three posterior peaks for the mild prior reflect the shape of the prior quite strongly. The maximum posterior value is the value of θ at the maximum of each curve. Notice how it is affected by both the prior and the likelihood.
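The Beta-binomial arithmetic behind a figure like Fig. 8 is simple enough to sketch directly; the prior parameters and counts below are illustrative choices, not the ones plotted in the paper.

def beta_posterior(alpha1, alpha2, pA, n):
    """Posterior Beta parameters after observing pA successes in n Bernoulli trials."""
    return alpha1 + pA, alpha2 + (n - pA)

def beta_mode(a, b):
    """Mode (the MAP value of theta) of a Beta(a, b) density, for a, b > 1."""
    return (a - 1.0) / (a + b - 2.0)

n, pA = 4, 2                                   # e.g. 2 counts of A = T in 4 cases
for prior in [(1.0, 1.0), (5.0, 3.0)]:         # an agnostic and a mildly informative prior
    a, b = beta_posterior(prior[0], prior[1], pA, n)
    print(prior, "-> MAP", round(beta_mode(a, b), 3))
# The agnostic Beta(1,1) prior reproduces the maximum likelihood value 0.5;
# the informative prior pulls the MAP towards its own preferred region (0.6 here).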
Many general algorithms exist for addressing parameter
fitting problems of probabilistic networks: missing and latent
variables, large samples, recursive or incremental tech-
niques, special nodes, and subjective priors [26], [24], [81],
[25], [23], [42]. Table III lists the major techniques and their
application. References given are introductions, new extensions
or examples of their use, and are by no means
a thorough list of references in the area. The common
versions of the EM and IPF algorithms, and mean field
theory are based on the exponential family, although generalizations
exist. Used in conjunction with these methods
are a large number of optimization techniques, for
finding a MAP, or computing the various quantities used
in the Laplace approximation. Several optimization techniques
are specific to parameter fitting in learning. This
includes the Fisher Scoring method [89], which is an approximate
Newton-Raphson algorithm, and stochastic optimization
which computes gradients on subsamples or individual
cases at a time [90]. Variations of this method are
popular in neural networks [91], having been a feature of
early methods [92], and have proven to yield computational
Algorithm Problems Refs.
MAP general [25]
Laplace 2nd-order approx. [25], [82]
EM missing and hidden values [77], [76], [83]
IPF undirected network [75]
mean field approximate moments [84], [22]
Gibbs approximate moments [85], [86]
MCMC approximate moments [87], [88]

TABLE III
Some general algorithms for parameter fitting
savings in many studies.
An extension of parameter fitting to handle sequential
(on-line) learning and missing data is described in [93].
This uses Bayesian methods to overcome the problems of
sparse data, by defining a Dirichlet prior over the entries of the probability tables. A full implementation is described in [94]. Extensions have been made to Gaussians and other popular node types for the Bayesian network [95]. When
combined with some structure elicitation, techniques for
parameter fitting can prove powerful in applications, for
instance in dynamic models in the medical domain [96],
[54].
V. Structure identification methods
Ignoring the issue of sample size for the moment, a difficult
question is whether particular network structures with
or without latent variables are identifiable in the limit with
probability 1. That is, assuming there are large amounts of
data to accurately estimate various probabilities, can the
"true" probabilistic network be reconstructed at all in the
sense that a learning algorithm, given a sufficiently large
sample, will invariably return a hypothesis (graphical structure
and parameters) close to the "truth"? This question is
formalized and addressed from several angles in computational
learning theory [97] under the name of identification
and learnability, as well as in statistics [78], [26] under the
name of consistency. This is the situation of N → ∞ in
Fig. 7.
In Bayesian networks, this question is confounded by
the existence of equivalence classes of graphs (one example
of a redundant model [78]) and by the use of hidden
or latent variables. For instance, consider the networks
given in Fig. 6 again. The Bayesian networks (d) and (e)
have equivalent probability models but the Bayesian net-work
for (f) is different. Therefore, Bayesian networks (d)
and (e) have equivalent sample likelihoods and cannot be
distinguished from data without some additional criteria or
knowledge, whereas the Bayesian network (f) could be identified
from data alone. A theoretical tool used to analyze
identifiability is the equivalence of graphical models with
latent variables [98], [56], [99] and without [100], [101], [2],
[102], and more recently involving causality where variables
are manipulated [57]. A thorough treatment of the issues
of equivalence, latent variables, and causality appears in
[3]. In some cases, only a class of equivalent graphs can be
reconstructed from data, and in other cases latent variables
and their properties cannot be identified uniquely.
These identification methods have lead to some of the
earliest algorithms for learning structure from data [103],
[56], and a related approach that also combines cross validation
to address model selection is [104]. Identification
methods are also used in TETRAD II, the successor to
TETRAD [12].
The theory of network identification from data and net-work
equivalence are a precursor to techniques for learning
from medium sized samples of Fig. 7. Network equivalence
is an important concept used in some Bayesian techniques
for learning Bayesian networks from data, used in advanced
work on priors for Bayesian networks [105], [37]. This will
be discussed later.
VI. Diagnostics, elicitation and assessment
The day to day practice of learning and data analysis
may have a learning algorithm at its core but a lot of the
work involves modeling and assessment: building a model
and trying to find out what is going on with the data, and
with the expert's opinions. Some of the work relevant to
learning here comes from statisticians who generally have
more experience [106], [107] and decision analysts who use
these methods in constructing systems and working with
experts [41], [108].
The basic problem of elicitation is a twist on the problem
of knowledge acquisition for expert systems.
ffl In the medium sample regime, which applies fre-
quently, data should be complemented with prior
knowledge and constraints if reliable and useful results
are to be obtained.
ffl Prior knowledge can often only be obtained from the
domain experts by the manual process of knowledge
elicitation.
• Domain experts can be poor at judging their own limitations
and capabilities, and estimating probabilities
[109]. One of the common mistakes of beginners is to
assume that the expert's claims are valid.
In applications these issues are crucial because a learning
problem does not come prepackaged in its own neat wrapper
with instructions for assembly: "here's the data, use
these five variables, and try the C4.5 tree program." A
learning problem is usually embedded in some larger prob-
lem. A domain expert may be needed just to circumscribe
the learning component: which variables might be used,
what is being predicted from what, and so forth. Sometimes
this is crucial to success, and the learning algorithm
used is almost incidental [110].
A number of techniques exist at the interface of learning
and knowledge acquisition. Diagnostics are measures
used to evaluate particular model assumptions [111], [112]
[113]. Sensitivity analysis [114] measures the sensitivity of
the results of a study to the model assumptions, using the
same techniques taught to engineers everywhere: wiggle
the inputs to the model (in the case of learning, this means
the constraints and priors) and watch how the output of
the model wiggles. Assessment and elicitation is the usual
process, discussed in manual knowledge acquisition, of interviewing
an expert in order to obtain prior estimates of
relevant quantities. Because the elicitation and evaluation
of probabilistic networks is a well developed area, the further
refinement of networks via learning is made possible,
as is discussed later under priors.
VII. Learning structure from data
The earliest result in structure learning was the Chow
and Liu algorithm for learning trees from data [115]. This
algorithm learns a Bayesian network whose shape is a tree.
If there are k variables, then there are O(k^2) trees, much
less than the exponential number of Bayesian networks.
The sample complexity is thus O(2 log k) more than the
sample complexity for each tree, which is O(k), so learning
is feasible from small samples. Furthermore, the computational
complexity of searching for a tree-shaped network
requires at most a quadratic number of network evaluations.
Herskovits and Cooper [116] demonstrated on a
problem of significant size that complex structure learning
was possible from quite reasonable sample sizes (in their
case, about 10,000), despite being faced with a potentially
exponential sample complexity and an NP-complete search
problem. Other early work on structure learning was often
based on the identification results discussed in the previous
section, for instance [103], [56], [104], [117].
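As a concrete illustration of the Chow and Liu idea mentioned above, the following is a minimal sketch (Python; the data layout, a list of dictionaries of discrete values, and the helper names are assumptions of the sketch, not the original algorithm's interface). It scores every pair of variables by empirical mutual information and keeps the heaviest edges that do not close a cycle, i.e., a maximum-weight spanning tree.

    from collections import Counter
    from itertools import combinations
    from math import log

    def mutual_information(data, x, y):
        # Empirical mutual information I(X;Y) estimated from co-occurrence counts.
        n = len(data)
        pxy = Counter((d[x], d[y]) for d in data)
        px = Counter(d[x] for d in data)
        py = Counter(d[y] for d in data)
        return sum((c / n) * log(c * n / (px[a] * py[b])) for (a, b), c in pxy.items())

    def chow_liu_tree(data, variables):
        # Rank all O(k^2) candidate edges by mutual information, then keep an edge
        # only if it joins two previously unconnected components (Kruskal-style),
        # which yields a maximum-weight spanning tree over the variables.
        edges = sorted(((mutual_information(data, x, y), x, y)
                        for x, y in combinations(variables, 2)), reverse=True)
        component = {v: v for v in variables}
        def find(v):
            while component[v] != v:
                v = component[v]
            return v
        tree = []
        for weight, x, y in edges:
            root_x, root_y = find(x), find(y)
            if root_x != root_y:
                component[root_x] = root_y
                tree.append((x, y, weight))
        return tree

    # Tiny, made-up complete sample over three binary variables.
    sample = [{"A": 0, "B": 0, "C": 1}, {"A": 1, "B": 1, "C": 1},
              {"A": 0, "B": 0, "C": 0}, {"A": 1, "B": 1, "C": 0}]
    print(chow_liu_tree(sample, ["A", "B", "C"]))

In the full algorithm the resulting tree is then rooted at an arbitrary node and the conditional probability tables are read off the same pairwise counts.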
Problems like learning the structure of a Bayesian network
suffer when samples are smaller. This happens because
of over-fitting in the structure space, similar to over-fitting
in the parameter space discussed previously. Maximum
likelihood and hypothesis-testing methods provide
techniques for comparing one structure to another: "Shall I
add an arc here?" "Is model S_c better than model S_f?"
This is done, for instance, using the likelihood ratio test
[42], [43]. Repeated use of this test can lead to problems
because, by chance, hypothesis tests at the 95% confidence
level should fail 1 in 20 times, and hundreds of such tests
may need to be made when learning a network structure
from data. A comparable problem in the statistics literature
is variable subset selection in regression. In this prob-
lem, one seeks to find a subset of variables on which to
base a linear regression. The pitfalls of hypothesis testing
in this context are discussed in [67]. The basic problem
is that model selection focuses on choosing a single "best"
model.
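To see the scale of the repeated-testing problem mentioned above, a back-of-the-envelope calculation (assuming, for simplicity, independent tests at the 95% level):
$$P(\text{at least one spurious rejection in } m \text{ tests}) \;=\; 1 - 0.95^{m}, \qquad 1 - 0.95^{100} \approx 0.994 .$$
With the hundreds of tests performed during a structure search, some spurious arcs are therefore almost guaranteed unless the significance level is adjusted.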
For discrete variables at least, the problem of learning
Bayesian networks from complete data is related to the
problem of learning classification trees, exemplified by the
CART algorithm [64] in statistics and ID3 and C4 in artificial
intelligence [15]. This relationship holds because
the sample likelihood for a binary classification tree can be
represented as a product of independent binomial distribu-
tions, just like the sample likelihood for the Bayesian networks
on binary variables described in Section III. Both
problems also have a similar parametric structure. The
classification tree problem has a long history and has been
studied from the perspective of applied statistics [64], artificial
intelligence [15], Bayesian statistics [118], minimum
description length (MDL) [119], [120], genetic algorithms,
and computational learning theory. An adaptation of a successful
tree algorithm to an algorithm for learning Bayesian
networks appears in [121], and the relationship between the
two approaches is discussed in [122].
Another adaptation, which is not quite as direct, is
the Constructor algorithm of [104] which adapts the cost-
complexity technique from the CART algorithm for trees.
There are a variety of heuristic techniques developed for
trees, including the handling of missing values [123] and the
discretization of real-valued attributes [124], which have yet
to find their way into algorithms for probabilistic networks.
VIII. Statistical Methodology
In most work on learning structure, researchers have applied
standard statistical methodology for fitting models
and handling over-fitting. It is therefore appropriate to
discuss these standard methodologies, as is done in this section.
The problem of over-fitting was encountered and addressed
by the earliest methods. It is important to note
that the role of a statistical methodology is to convert a
learning problem into an optimization problem. Some of
the statistical methodologies, despite their wide philosophical
differences, reduce a learning problem to the same kind
of optimization problem, so the practitioner could well be
left wondering what all the differences are about. It is
also important to note that most structure learning is built
around some form of parameter learning as a sub-problem.
In general, the many different structure learning methods
are extensions of the general algorithms summarized in Table
III. In some cases, this can be as simple as placing a
model selection wrapper around a parameter fitting system
[125], in other cases more sophistication is layered on top.
It is perhaps unfortunate that so many different, competing
statistical methodologies exist to address essentially
the same problem. Partly, this stems from the apparent impossibility
of handling smaller sample learning problems in
any objective manner, and the difficulty of establishing a
basis on which a statistical methodology can be judged.
See, for instance, the efforts made to compare different
learning algorithms in [30], and consider that a statistical
methodology is a higher level of abstraction than a learning
algorithm. A discussion of the Bayesian perspective
on the issues of learning appears in [26], touching on prior
probabilities, and subjective statistical analysis. Different
disciplines have addressed these problems in parallel while
they attempted to extend the classical maximum likelihood
and hypothesis testing approaches from statistics. Each
methodology comes with a cast of staunch protagonists
and antagonists and a litany of standard claims, dogma,
paradoxes, and counter-claims. It is useful to become familiar
with the different approaches and the mappings and
approximations between them to better understand their
differences, however this can be difficult given the confusing
state of the literature. Each methodology has its particular
strengths that make it suitable under certain conditions:
ease of implementation, adequate for large samples, more
appropriate for the engineer, availability of software and
training, and so forth. I believe no one methodology is
superior in all respects.
My comments in this review are colored from a Bayesian
perspective. I have tried to keep my comments below to the
realm of what is "generally believed" by those knowledgeable
in this area rather than merely repeating the dogma of
each community. Also, this section is not an introduction
to these methodologies. I include appropriate tutorial references
below. Finally, there are really hundreds of different
methodologies, one for each small cluster of researchers.
The list below presents different corners in a continuum.
A. Maximum likelihood and Minimum cross entropy method
The maximum likelihood approach says to find the network
structure S_m whose maximum likelihood over the parameters θ_m,
$$\max_{\theta_m} \; p(\mathrm{sample} \mid S_m, \theta_m),$$
is the largest.
The minimum cross entropy approach says to find the
structure whose minimum cross entropy with the data is
the smallest. These two approaches are equivalent [126],
and they are also well known to suffer from over-fitting, as
discussed in Section IV. If the "true" model has one single
equivalent representative in the hypothesis space, then
the maximum likelihood approach is consistent in the sense
that in the limit of a large sample it will converge on this
"truth" [78]. The maximum likelihood method can also be
viewed as a simplification of most other approaches, so it is
an important starting point for everyone. When in a large
sample regime, the best strategy is to use the maximum
likelihood approach to avoid all the mathematical or implementation
details of the more complex approaches. The
results from computational learning theory for bounding
the onset of the large sample phase are useful for deciding
when to do this. For Bayesian networks, the maximum
likelihood approach has been applied by [127], [116].
The paper by Herskovits and Cooper was the major breakthrough
in learning Bayesian networks. It was clear from
this paper that MDL and Bayesian methods, which extend
the maximum likelihood approach, could be applied in all
their detail.
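For complete discrete data and a fixed structure, the maximum likelihood parameters of a Bayesian network reduce to normalized counts. A minimal sketch (Python; the variable names, data layout and example values are hypothetical):

    from collections import Counter, defaultdict

    def ml_cpt(data, child, parents):
        # Maximum likelihood conditional probability table:
        # theta[parent_config][child_value] = N(parent_config, child_value) / N(parent_config).
        joint = Counter((tuple(d[p] for p in parents), d[child]) for d in data)
        marginal = Counter(tuple(d[p] for p in parents) for d in data)
        cpt = defaultdict(dict)
        for (config, value), count in joint.items():
            cpt[config][value] = count / marginal[config]
        return dict(cpt)

    # Hypothetical complete sample for the two-variable structure A -> B.
    sample = [{"A": 0, "B": 0}, {"A": 0, "B": 1}, {"A": 1, "B": 1}, {"A": 1, "B": 1}]
    print(ml_cpt(sample, "B", ["A"]))   # {(0,): {0: 0.5, 1: 0.5}, (1,): {1: 1.0}}

The sample log-likelihood of a candidate structure is the sum, over nodes and records, of the logs of these fitted probabilities; that sum is the quantity the maximum likelihood approach compares across structures.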
B. Hypothesis testing approaches
Hypothesis testing is the standard model selection strategy
from classical statistics. For probabilistic networks
methods are well developed and a variety of statistical software
exists [28], [43], [13]. As mentioned before, the problem
is that this is only a viable approach if a small number
of hypotheses are being tested. Clever or greedy search
techniques can help here [128] by reducing the number of
hypothesis tests required. Another way of thinking about
this is to deal with multiple hypotheses: let hypothesis testing
return a set of possible models rather than expecting it
to isolate a single one [128]. This strategy then resembles a
Bayesian approach where multiple models are considered.
This is discussed in the context of probabilistic networks
below.
C. Extended likelihood approaches
A number of extensions to the maximum likelihood approach
have been proposed to overcome the problem of
over-fitting, and to overcome the problems inherent in hypothesis
testing. These approaches replace the sample likelihood
by a modified score that is to be maximized. Examples
include the penalized likelihood, the Akaike information
criterion (AIC), the Bayesian information criterion (BIC), and
others [66], [129]. Typically, this involves minimizing a formula
such as the BIC formula
$$\mathrm{BIC}(S_m \mid \mathrm{sample}) = -\log p(\mathrm{sample} \mid S_m, \hat{\theta}_m) + \frac{\dim(\theta_m)}{2}\,\log N,$$
where $\hat{\theta}_m$ is the maximum likelihood estimate of θ_m fixing
the structure to be S_m, N is the sample size, and dim(θ_m)
is the dimensionality of the parameter space. The BIC criterion and some related
variations are asymptotically Bayesian but avoid specification
of the prior, and are similar to variations of the minimum
information complexity approaches described below.
Examples for undirected probabilistic networks with the
BIC criteria appear in [67].
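A small sketch of how the BIC formula above is used to compare two candidate structures (Python; the log-likelihood and dimensionality values are placeholders, not results from any real data set):

    from math import log

    def bic(neg_log_likelihood, dim, n):
        # BIC(S_m | sample) = -log p(sample | S_m, theta_hat) + dim(theta_m)/2 * log N.
        return neg_log_likelihood + 0.5 * dim * log(n)

    # Hypothetical fitted values for a sparse and a dense structure on N = 1000 records.
    score_sparse = bic(neg_log_likelihood=1210.0, dim=9, n=1000)
    score_dense = bic(neg_log_likelihood=1195.0, dim=25, n=1000)
    print(score_sparse, score_dense)  # lower is better; here the penalty favors the sparser model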
D. Minimum information complexity approaches
There are several different schools under the general
rubric of minimizing some information complexity measure
("code length"), for instance minimum description length
(MDL) [130], minimum message length [131], and minimum
complexity [132]. A simple approximation for MDL
is equivalent to the BIC above, but other variations involve
statistical quantities such as the Fisher Information,
and hypothesis dependent complexity measures chosen particularly
for the domain. These approaches are popular
among engineers and computer scientists who learn coding
and information theory as undergraduates. From one
perspective, these methods are related to Bayesian MAP
methods although there are subtle differences [133]. One
advantage that some proponents claim of this approach
(particularly those in the MDL school) is that it requires
no prior and is hence objective. In most instances a corresponding
"implicit prior" can be constructed from the
code. Some authors use this approach so that they can
use Bayesian methods in disguise without being ridiculed
by their anti-Bayesian colleagues. Search bounds, for instance
[134], are one area where the information complexity
approach takes advantage of the techniques developed
in information theory. Suzuki has developed a branch and
bound technique for learning Bayesian networks based on
information-theoretic bounds [73]. For Bayesian networks,
MDL has been applied by [61], [135], [136].
E. Resampling approaches
Modern statistics has developed a variety of resampling
schemes for addressing over-fitting in parametric situations
like learning networks. Resampling refers to the fact that
pseudo-samples are created from the original sample. A
popular approach is cross validation, applied by [104]. Resampling
schemes have been used with great success in applied
multivariate statistics; see, for instance, a tutorial in
[137]. Their strength lies in the fact that they are reliable
black-box methods that can be used without requiring
some of the complex mathematical treatment found in the
Bayesian or minimum complexity methods [138]. These
resampling schemes therefore provide a good benchmark
for comparison with more complex schemes which have additional
mathematical and implementation pitfalls. Their
theoretical justification is large sample, although they have
empirical successes in the small sample case for a wide
range of problems.
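A minimal sketch of how a resampling scheme such as k-fold cross validation can act as a black-box model selector (Python; fit_model and heldout_log_likelihood are hypothetical callables to be supplied by the user):

    def k_fold_score(data, fit_model, heldout_log_likelihood, k=5):
        # Average held-out log-likelihood over k train/test splits of the sample.
        folds = [data[i::k] for i in range(k)]
        total = 0.0
        for i in range(k):
            test = folds[i]
            train = [record for j, fold in enumerate(folds) if j != i for record in fold]
            model = fit_model(train)
            total += heldout_log_likelihood(model, test)
        return total / k

    # Usage sketch: choose the candidate structure with the best cross-validated score, e.g.
    # best = max(candidates, key=lambda s: k_fold_score(data, make_fitter(s), scorer))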
F. Bayesian approaches
There are a rich variety of Bayesian methods, and depending
on the approximations and shortcuts made, most
of the previous methodologies can be reproduced with
some form of Bayesian approximation. In its full form the
Bayesian approach requires specification of a prior probability
(for a tutorial and a list of references, see [139]). A
good general introduction to Bayesian methods for learning
Bayesian networks can be found in [79]. Advanced introductions
and reviews of Bayesian methods for learning can
be found in [25], [26], [24].
The Bayesian approach has many different approxima-
tions. The simplest MAP approach seeks to find the structure
S_m maximizing the log-posterior probability
$$\log p(S_m \mid \mathrm{sample}) \;=\; \log p(S_m) + \log p(\mathrm{sample} \mid S_m) - \log p(\mathrm{sample}) .$$
The term p(sample | S_m) is called the evidence and differs
from the likelihood p(sample | S_m, θ_m). The evidence is the
average sample likelihood rather than the maximum sample
likelihood used in the earlier techniques:
$$p(\mathrm{sample} \mid S_m) \;=\; \int_{\theta_m} p(\mathrm{sample} \mid S_m, \theta_m)\, p(\theta_m \mid S_m)\, d\theta_m .$$
Sometimes a relative value, p(sample | S_m) / p(sample | S_0), is calculated instead
for some base structure S_0. This is called the Bayes factor
and a variety of techniques and approximations exist for
computing it [25], [26], [23].
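For complete discrete data with a Dirichlet prior on each node's parameters, the evidence integral above has a closed form that factors over nodes. A minimal sketch for a single child node (Python; the data layout and the symmetric prior weight alpha are assumptions of the sketch):

    from collections import Counter
    from math import lgamma

    def log_evidence_node(data, child, parents, child_values, alpha=1.0):
        # log of the integral over theta of p(child data | parents, theta) p(theta),
        # with an independent symmetric Dirichlet(alpha) prior per parent configuration.
        counts = Counter((tuple(d[p] for p in parents), d[child]) for d in data)
        configs = {tuple(d[p] for p in parents) for d in data}
        r = len(child_values)
        total = 0.0
        for config in configs:
            n_config = sum(counts[(config, v)] for v in child_values)
            total += lgamma(r * alpha) - lgamma(r * alpha + n_config)
            for v in child_values:
                total += lgamma(alpha + counts[(config, v)]) - lgamma(alpha)
        return total

    sample = [{"A": 0, "B": 0}, {"A": 0, "B": 1}, {"A": 1, "B": 1}, {"A": 1, "B": 1}]
    print(log_evidence_node(sample, "B", ["A"], child_values=[0, 1]))

Under the usual parameter-independence assumptions, the log-evidence of a whole structure is the sum of such terms over its nodes, which is what makes these Bayesian scores decomposable and searchable.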
The basic technique for Bayesian learning of Bayesian
network structures from complete data uses standard
Bayesian methods, and was worked out in one form or an-
other, by many [140], [35], [121], [111], [112], [68], [141],
[142], [143], [37], [38]. Certainly, these techniques use standard
Bayesian manipulations and should be obvious to
most students of Bayesian theory. The general case for
the exponential family is worked through in [105]. Good
summaries of this line of work can be found in [111], [68],
[144], [37], [23], and a thesis covering many of the issues is
[36].
The full Bayesian approach is a predictive one: rather
than returning the single "best" network, the aim might
be to perform prediction or estimate probabilities for new
cases. For instance, one might be interested in the probability
of new cases based on the sample, p(new-case | sample).
In general this is estimated by averaging the predictions
across all possible networks using the probability identity
$$p(\text{new-case} \mid \text{sample}) \;=\; \sum_{S_m} p(\text{new-case} \mid S_m, \text{sample})\; p(S_m \mid \text{sample}) .$$
This situation is represented in Fig. 9.
Fig. 9. Averaging over multiple Bayesian networks.
This approach matches the intuition: "five different networks all seem
quite reasonable so let's hedge our bets and combine them.''
In practice this full summation is not possible so approximations
are used. Bayesian methods for learning probabilistic
networks in this more general sense can be found in
[121], [68], [143], [144], [145], [35], [146], [147]. Computational
aspects of finding the best l networks are discussed
in [37]. A related concern is how to combine the posterior
network probabilities efficiently and to compute conditional
posterior probabilities [148], [111], [32].
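A minimal sketch of the averaging step pictured in Fig. 9 (Python; the per-network predictors and their posterior weights are hypothetical inputs, e.g. the best few networks returned by a search or by MCMC):

    def average_prediction(new_case, networks):
        # networks: list of (weight, predict) pairs, where predict(new_case) returns
        # p(new-case | S_m, sample) and the weights approximate p(S_m | sample).
        total_weight = sum(weight for weight, _ in networks)
        return sum(weight * predict(new_case) for weight, predict in networks) / total_weight

    # Usage sketch with three networks and made-up posterior weights and predictions.
    models = [(0.5, lambda case: 0.80), (0.3, lambda case: 0.70), (0.2, lambda case: 0.95)]
    print(average_prediction({"A": 1}, models))   # 0.5*0.80 + 0.3*0.70 + 0.2*0.95 = 0.80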
A general Bayesian algorithm family for inference that
applies in any context, parameter fitting or structure learn-
ing, is the Markov Chain Monte Carlo (MCMC) family of
algorithms. An introduction is given in [149], [23], and an
extensive review is given by [87]. This family uses the following
kind of trick. Suppose we wish to sample from the
distribution p(A, B, C). In general this might be a complex
distribution and no convenient sampling algorithm
may be known. When the complete data assumption is
violated, for instance, as discussed in Section III-B, it is
quite easy to get an intractable sample likelihood distribution
for network parameters, and hence the posterior distribution
for network parameters may have no convenient
functional form to sample from; this is exactly the kind
of problem that MCMC methods were designed for. They
can even be used, for instance, to estimate posterior predictions
when learning with complex parametric systems such
as sigmoidal feed-forward neural networks [88]. To sample
from p(A, B, C) using the Gibbs sampler, the simplest
kind of MCMC method, we start at initial values A_0, B_0, C_0 and then
repeatedly re-sample each variable in turn according to its
current conditional distribution ("←" should be read as "to
be sampled from"):
$$A_{i+1} \leftarrow p(A \mid B_i, C_i), \qquad B_{i+1} \leftarrow p(B \mid A_{i+1}, C_i), \qquad C_{i+1} \leftarrow p(C \mid A_{i+1}, B_{i+1}) .$$
Probabilistic networks are an ideal framework for developing
MCMC methods because these conditional distributions
can be generated automatically from the network.
MCMC methods can be used for parameter fitting, to sample
different network parameters, and for structure learn-
ing, to sample from different possible probabilistic network
structures. Use of MCMC methods for learning probabilistic
networks is discussed in [85], [144], [147], [146],
[23]. Madigan, Gavrin and Raftery [146] refer to the use of
MCMC methods for averaging over multiple probabilistic
networks (the full predictive approach) as Markov Chain
Monte Carlo Model Composition (MC^3).
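A minimal sketch of the Gibbs sweep described above, for a toy distribution over three binary variables (Python; the toy pairwise model and its numbers are my own example, standing in for the full conditionals that a probabilistic network would generate automatically):

    import random
    from math import exp

    def gibbs(full_conditionals, init, n_sweeps=2000, seed=0):
        # One sweep re-samples every variable in turn from its full conditional,
        # i.e. from p(X | all other variables at their current values).
        random.seed(seed)
        state = dict(init)
        draws = []
        for _ in range(n_sweeps):
            for name, prob_one in full_conditionals.items():
                state[name] = 1 if random.random() < prob_one(state) else 0
            draws.append(dict(state))
        return draws

    # Toy chain model p(a, b, c) proportional to exp(J*(1[a=b] + 1[b=c])) with J = 1,
    # whose full conditionals depend only on each variable's neighbours.
    J = 1.0
    def agree(x, y):
        return exp(J) if x == y else 1.0

    full_conditionals = {
        "A": lambda s: agree(1, s["B"]) / (agree(1, s["B"]) + agree(0, s["B"])),
        "B": lambda s: agree(1, s["A"]) * agree(1, s["C"])
                       / (agree(1, s["A"]) * agree(1, s["C"]) + agree(0, s["A"]) * agree(0, s["C"])),
        "C": lambda s: agree(s["B"], 1) / (agree(s["B"], 1) + agree(s["B"], 0)),
    }
    draws = gibbs(full_conditionals, init={"A": 0, "B": 0, "C": 0})
    print(sum(d["A"] == d["B"] for d in draws) / len(draws))   # Monte Carlo estimate of p(A = B)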
The key distinction between Bayesian and non-Bayesian
methods is the use of priors. Priors can unfortunately
be complex mathematically, so poorly chosen priors can
make a Bayesian method perform poorly against other
methods, a real danger in the case of Bayesian networks
because of their semi-parametric nature. Both informative
priors [68], [111], [121], [37], [35], [38], [146], [147], and non-informative
priors can be used. A fundamental assumption
is that equivalent network structures should have equivalent
priors on their parameters [121], [60], [37], [150]. For
instance, consider structures S d and S e from Fig. 6. The
prior probability p(θ_d | S_d), by virtue of equivalence, can be
converted into a prior for θ_e using a change of variables
with the Jacobian of the transformation:
$$p^{*}(\theta_e \mid S_e) \;=\; p\bigl(\theta_d(\theta_e) \mid S_d\bigr)\,\left|\frac{\partial \theta_d}{\partial \theta_e}\right| .$$
Notice this prior is constructed from the prior for S_d, and
is not necessarily equal to the prior actually used for S_e,
p(θ_e | S_e). The assumption of prior equivalence sets these
two priors equal, something not applicable if the network
has a causal interpretation [58]. This gives a set of functional
equations that the prior should satisfy. This basic
theory and other properties of priors for Bayesian networks
are discussed in [105], extending techniques presented in [37].
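As a small worked example of this change of variables (using, for illustration, the two equivalent binary-variable structures A → B and B → A rather than the specific networks of Fig. 6), the two parameterizations and the map between them are
$$\theta_d = \bigl(p(a),\; p(b \mid a),\; p(b \mid \bar a)\bigr), \qquad \theta_e = \bigl(p(b),\; p(a \mid b),\; p(a \mid \bar b)\bigr),$$
$$p(b) = p(b \mid a)\,p(a) + p(b \mid \bar a)\,\bigl(1 - p(a)\bigr), \qquad p(a \mid b) = \frac{p(b \mid a)\,p(a)}{p(b)}, \qquad p(a \mid \bar b) = \frac{\bigl(1 - p(b \mid a)\bigr)\,p(a)}{1 - p(b)} .$$
Prior equivalence then requires that the density placed on θ_d, pushed through this map together with its Jacobian, agree with the density placed directly on θ_e.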
The ability to use a variety of informative, subjective
priors for Bayesian networks is one of their strengths. Informative
priors can include constraints and preferences on
the structure of the network [121], [37], as well as preferences
on the probabilities, and even using the expert to
generate "imaginary data" [146]. An example in the language
of chain graphs (an extension to Bayesian networks)
is given by [38]. The potential for using Bayesian networks
as a basis for knowledge refinement has been suggested by
[121], [37], [111], [146], and in applications this offers an
integrated approach to the development and maintenance
of intelligent systems, long considered one of the potential
fruits of artificial intelligence.
IX. More on Learning Structure
An exact algorithm for handling incomplete data or missing
values can be found in [151]. The problems involved
here for exact methods were previously explained in [35].
While impractical for larger problems, this could serve as
a tool for benchmarking the many approximate algorithms
that exist on non-trivially sized problems; for instance, some
are mentioned in Table III.
Simple clustering algorithms learn Bayesian networks
with a single latent/hidden variable at the root of the net-
work. So these kinds of problems have been addressed in a
limited sense for many years in the AI and statistics community
[152]. A Bayesian method is [153], [51]. Likewise,
missing values can be handled by the well-known EM algorithm
[76], or more accurately by Gibbs sampling [85].
More recent versions of these clustering algorithms search
over possible structures as well [51].
Some algorithms do not fit neatly into the categories
above. Learning Markov (undirected) networks from data
is related to the early Boltzmann machine from neural networks
[21]. Also the earlier Bayesian methods seemed to
require as input a strict ordering of variables [35], [121],
whereas the identification algorithms did not require this.
One thought is a combination of Bayesian with identification
algorithms [33]. But Bayesian methods do equivalent
things in the large sample case to the independence tests
used by identification algorithms, and the strict ordering
is not entirely necessary for the Bayesian algorithms [32],
[37]. A variety of hybrid algorithms exist [59], [104], [12],
[73] that provide a rich source of ideas for future development.
X. Constructing learning software
For a variety of network structures with latent variables
and different parametric nodes (Logistic, Poisson,
and other forms), the BUGS program can generate Gibbs
samplers automatically [154], [86]. This effectively allows
data analysis algorithms to be compiled from specifications
given as a probabilistic network, and the technique
addresses a number of non-trivial data analysis problems
[155], [86]. Unfortunately, Gibbs sampling without much
thought to domain specific optimization can be time intensive
because convergence may be slow, so other methods
need to be developed to make this approach more
widely applicable. Other algorithm schemas from Table
III can be applied within this compilation framework
as well, so it may be possible to construct more efficient
algorithms automatically. An exposition of the techniques
used by algorithms for learning Bayesian networks (such as
exact Bayes factors and differentiation), all readily
automated, can be found in [23], [156].
--R
"Real-world applications of Bayesian networks: Introduction"
"Equivalence and synthesis of causal models"
"A definition and graphical representation for causality"
"Graphical models, causality, and intervention"
"Local computations with probabilities on graphical structures and their application to expert systems (with discussion)"
"Correlation and causation"
Probabilistic Reasoning in Intelligent Systems
"Influence diagrams"
"Attitude formation models: Insights from TETRAD"
"Inferring causal structure among unmeasured variables"
"Network methods in statistics"
Introduction to the Theory of Neural Computation
Building Expert Systems
"Current developments in expert systems"
"In- ductive knowledge acquisition: A case study"
"An experimental comparison of knowledge engineering for expert systems and for decision anal- ysis"
"Probabilistic similarity networks"
"Connectionist learning of belief networks"
"Mean field theory for sigmoid belief networks"
"Operations for learning with graphical models"
Tools for
"Bayes factors and model uncer- tainty"
Bayesian Theory
"Graphical models for discovering knowl- edge"
"Software for belief networks"
Machine Learning
"Diagnos- tic systems created by model selection methods: A case study"
"Properties of Bayesian belief network learning algorithms"
"An algorithm for the construction of Bayesian network structures from data"
"An evaluation of an algorithm for inductive learning of Bayesian belief networks using simulated data sets"
"A Bayesian method for the induction of probabilistic networks from data"
Networks: from Inference to Construction
"Learning Bayesian networks: The combination of knowledge and statistical data"
recursive models Induced From Relevant knowledge, Observations, and Statistical Techniques"
"Thinking backwards for knowledge acquisition"
"Bayesian networks without tears"
"Decision analysis and expert systems"
Graphical Models in Applied Multivariate Statis- tics
Introduction to Graphical Modelling
"Bayesian networks for knowledge representation and learning"
Spatial Statistics
"Independence properties of directed Markov fields"
"Graphical models for associations between variables, some of which are qualitative and some quantitative"
"Chain graphs for learning"
"Finite state machines and recurrent neural networks -automata and dynamical systems approaches"
"On substantive research hy- potheses, conditional independence graphs and graphical chain models"
"Bayesian classification with correlation and inheritance"
Planning and Control
Decision Analysis with Continuous and Discrete Variables: A Mixture Distribution Approach
"Uncertain reasoning and forecasting"
"Causal diagrams for empirical research"
"A theory of inferred causation"
"On the identification of nonparametric structural equations"
"A Bayesian approach to learning causal net- works"
"Causal inference in the presence of latent variables and selection bias"
"Hyper Markov laws in the statistical analysis of decomposable graphical models"
"Using causal information and local measures to learn Bayesian networks"
"Learning Bayesian networks: A unification for discrete and Gaussian domains"
Information and exponential families in statistical theory
Classification and Regression Trees
"Small-sample and large-sample statistical model selection criteria"
"Bayesian model selection in social research (with discussion by gelman & rubin, and hauser, and a rejoiner)"
"Model selection and accounting for model uncertainty in graphical models using Occam's window"
"Rigor- ous learning curve bounds from statistical mechanics"
"Decision theoretic generalizations of the PAC model for neural net and other learning applications"
"Learning bayesian networks is np-complete"
"Learning and robust learning of product dis- tributions"
"On an efficient mdl learning procedure using branch and bound technique"
"Hierarchical interaction models"
"On the effective implementation of the iterative proportional fitting procedure"
"The EM algorithm for graphical association models with missing data"
"Maximum likelihood from incomplete data via the EM algorithm"
"A tutorial on learning Bayesian networks"
"Decision analysis: perspectives on inference, decision, and experimentation"
"Decision theoretic sub-sampling for induction on large databases"
"Laplace's method approximations for probabilistic inference in belief networks with continuous variables"
"Accelerated quantification of Bayesian networks with incomplete data"
"Factorial learning and the EM algorithm"
"Markov chain Monte Carlo methods for hierarchical Bayesian expert systems"
"A language and program for complex Bayesian modelling"
"Probabilistic inference using Markov chain Monte Carlo methods"
Bayesian Learning for Neural Networks
Chapman and Hall
"A stochastic optimization method"
Efficient Training of Feed-Forward Neural Networks
McClelland, and the PDP Research Group
"Sequential updating of conditional probabilities on directed graphical structures"
"aHUGIN: A systems creating adaptive causal probabilistic networks"
"Parameter adjustment in Bayesian networks. the generalized noisy OR-gate"
"Tradeoffs in constructing and evaluating temporal influence diagrams"
"On learning in the limit and non-uniform (ffl; ffi)-learning"
"Equivalence of causal models with latent variables"
"An algorithm for deciding if a set of observed independencies has a causal explanation"
"The chain graph Markov property"
"Identifying independence in Bayesian networks"
"On the Markov equivalence of chain graphs, undirected graphs, and acyclic digraphs"
"An algorithm for fast recovery of sparse causal graphs"
"A system for induction of probabilistic models"
"A characterization of the Dirichlet distribution with application to learning Bayesian net- works"
"The quantification of judgment: Some methodological suggestions"
"Assessment, criticism and improvement of imprecise subjective probabilities for a medical expert system"
Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis
Judgement under Uncertainty: Heuristics and Biases
"Applications of machine learning and rule induction"
"Bayesian analysis in expert systems"
"Learning in probabilistic expert systems"
"Sequential model criticism in probabilistic expert systems"
"Sensitivity analysis for probability assessments in Bayesian networks"
"Approximating discrete probability distributions with dependence trees"
"Kutat'o: An entropy-driven system for construction of probabilistic expert systems from databases"
"Automated construction of sparse Bayesian networks"
"Learning classification trees"
Stochastic Complexity in Statistical Enquiry
"Coding decision trees"
"Theory refinement of Bayesian networks"
"Classifiers: A theoretical and empirical study"
"Unknown attribute values in induction"
"Multi-valued interval discretization of continuous-valued attributes for classification learning"
"MLC++: A machine learning library in C++"
Information Theory and Statistics
"An entropy-based learning algorithm of Bayesian conditional trees"
"A fast model selection procedure for large families of models"
"Three approaches to probability model selection"
"Stochastic complexity"
"Estimation and inference by compact encoding"
"Minimum complexity density estimation"
"Mml and bayesianism: similarities and differences"
"Admissible stochastic complexity models for classification problems"
"Learning Bayesian belief networks: An approach based on the MDL principle"
"A construction of Bayesian networks from databases based on an MDL scheme"
"Statistical data analysis in the computer age"
"A study of cross validation and bootstrap for accuracy estimation and model selection"
"Prior probabilities"
"A Bayesian method for the induction of probabilistic networks from data"
"An influence diagram approach to medical technology assessment"
"Learning in probabilistic expert systems"
Bayesian Methods for the Analysis of Misclassified and Incomplete Multivariate Discrete Data
"Bayesian graphical models for discrete data"
"Strategies for graphical model selection"
"Eliciting prior information to enhance the predictive performance of bayesian graphical models"
"Estimation of the proportion of congenital malformations using double sam- pling: Incorporating covariates and accounting for model un- certainty"
"Minimal assumption distribution propogation in belief networks"
John Wiley
"Learning Bayesian networks: The combination of knowledge and statistical data"
"A method for learning belief networks that contain hidden variables"
Statistical Analysis of Finite Mixture Distributions
"Bayesian classification"
"BUGS: A program to perform Bayesian inference using Gibbs sampling"
"Modelling complexity: applications of Gibbs sampling in medicine"
"Networks for learning"
Uncertainty in Artificial Intelligence: Proceedings of the Eleventh Conference
Selecting Models from Data: Artificial Intelligence and Statistics IV
Uncertainty in Artificial Intelligence:
Uncertainty in Artificial Intelligence:
Uncertainty in Artificial Intelligence 5
Bayesian Statistics 4
Artificial Intelligence Frontiers in Statistics
--TR
--CTR
Marek J. Druzdzel , Linda C. van der Gaag, Building Probabilistic Networks: 'Where Do the Numbers Come From?' Guest Editors' Introduction, IEEE Transactions on Knowledge and Data Engineering, v.12 n.4, p.481-486, July 2000
Peter L. Spirtes, Data mining tasks and methods: Probabilistic and casual networks: mining for probabilistic networks, Handbook of data mining and knowledge discovery, Oxford University Press, Inc., New York, NY, 2002
Sajjad Haider, Belief Functions Based Parameter and Structure Learning of Bayesian Networks in the Presence of Missing Data, International Journal of Hybrid Intelligent Systems, v.1 n.3,4, p.164-175, December 2004
Xiaoming Zhou , Cristina Conati, Inferring user goals from personality and behavior in a causal model of user affect, Proceedings of the 8th international conference on Intelligent user interfaces, January 12-15, 2003, Miami, Florida, USA
Wei Yi Liu , Ning Song, Fuzzy functional dependencies and Bayesian networks, Journal of Computer Science and Technology, v.18 n.1, p.56-66, January
David Maxwell Chickering, Optimal structure identification with greedy search, The Journal of Machine Learning Research, 3, p.507-554, 3/1/2003
Jie Cheng , David A. Bell , Weiru Liu, Learning belief networks from data: an information theory based approach, Proceedings of the sixth international conference on Information and knowledge management, p.325-331, November 10-14, 1997, Las Vegas, Nevada, United States
Jiaying Shen , Victor Lesser, Communication management using abstraction in distributed Bayesian networks, Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems, May 08-12, 2006, Hakodate, Japan
Sajjad Haider, A hybrid approach for learning parameters of probabilistic networks from incomplete databases, Design and application of hybrid intelligent systems, IOS Press, Amsterdam, The Netherlands,
Peggy Wright, Knowledge discovery in databases: tools and techniques, Crossroads, v.5 n.2, p.23-26, Winter 1998
Marina Meila , Michael I. Jordan, Learning with mixtures of trees, The Journal of Machine Learning Research, 1, p.1-48, 9/1/2001
Nir Friedman , Dan Geiger , Moises Goldszmidt, Bayesian Network Classifiers, Machine Learning, v.29 n.2-3, p.131-163, Nov./Dec. 1997
Padhraic Smyth , David Heckerman , Michael I. Jordan, Probabilistic independence networks for hidden Markov probability models, Neural Computation, v.9 n.2, p.227-269, Feb. 15, 1997
Thomas D. Nielsen , Finn V. Jensen, Learning a decision maker's utility function from (possibly) inconsistent behavior, Artificial Intelligence, v.160 n.1, p.53-78, December 2004
Peter L. Spirtes, Data mining tasks and methods: Probabilistic and casual networks: methodology for probabilistic networks, Handbook of data mining and knowledge discovery, Oxford University Press, Inc., New York, NY, 2002
Rong Chen , Edward H. Herskovits, A Bayesian network classifier with inverse tree structure for voxelwise magnetic resonance image analysis, Proceeding of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, August 21-24, 2005, Chicago, Illinois, USA
Clifford S. Thomas , Catherine A. Howie , Leslie S. Smith, A New Singly Connected Network Classifier based on Mutual Information, Intelligent Data Analysis, v.9 n.2, p.189-205, March 2005
Helge Langseth , Thomas D. Nielsen, Fusion of domain knowledge with data for structural learning in object oriented domains, The Journal of Machine Learning Research, 4, 12/1/2003
David J. Miller , Lian Yan, Approximate Maximum Entropy Joint Feature Inference Consistent with Arbitrary Lower-Order Probability Constraints: Application to Statistical Classification, Neural Computation, v.12 n.9, p.2175-2207, September 1, 2000
Russell Greiner , Xiaoyuan Su , Bin Shen , Wei Zhou, Structural Extension to Logistic Regression: Discriminative Parameter Learning of Belief Net Classifiers, Machine Learning, v.59 n.3, p.297-322, June 2005
David W. Albrecht , Ingrid Zukerman , An E. Nicholson, Bayesian Models for Keyhole Plan Recognition in an Adventure Game, User Modeling and User-Adapted Interaction, v.8 n.1-2, p.5-47, 1998
David Maxwell Chickering, Learning equivalence classes of bayesian-network structures, The Journal of Machine Learning Research, 2, p.445-498, 3/1/2002
John Binder , Daphne Koller , Stuart Russell , Keiji Kanazawa, Adaptive Probabilistic Networks with Hidden Variables, Machine Learning, v.29 n.2-3, p.213-244, Nov./Dec. 1997
Jie Cheng , Russell Greiner , Jonathan Kelly , David Bell , Weiru Liu, Learning Bayesian networks from data: an information-theory based approach, Artificial Intelligence, v.137 n.1-2, p.43-90, May 2002
David Maxwell Chickering , David Heckerman, Efficient Approximations for the MarginalLikelihood of Bayesian Networks with Hidden Variables, Machine Learning, v.29 n.2-3, p.181-212, Nov./Dec. 1997
Luc De Raedt , Kristian Kersting, Probabilistic logic learning, ACM SIGKDD Explorations Newsletter, v.5 n.1, July
David Heckerman, Bayesian Networks for Data Mining, Data Mining and Knowledge Discovery, v.1 n.1, p.79-119, 1997
Paolo Frasconi , Marco Gori , Giovanni Soda, Data Categorization Using Decision Trellises, IEEE Transactions on Knowledge and Data Engineering, v.11 n.5, p.697-712, September 1999
Rebecca F. Bruce , Janyce M. Wiebe, Decomposable modeling in natural language processing, Computational Linguistics, v.25 n.2, p.195-207, June 1999
Paul J. Krause, Learning probabilistic networks, The Knowledge Engineering Review, v.13 n.4, p.321-351, February 1999
P. I. Bidyuk , A. N. Terent'Ev , A. S. Gasanov, Construction and Methods of Learning of Bayesian Networks, Cybernetics and Systems Analysis, v.41 n.4, p.587-598, July 2005
Anthony Hunter, Hybrid argumentation systems for structured news reports, The Knowledge Engineering Review, v.16 n.4, p.295-329, December 2001
Nuria M. Oliver , Barbara Rosario , Alex P. Pentland, A Bayesian Computer Vision System for Modeling Human Interactions, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.22 n.8, p.831-843, August 2000
Sreerama K. Murthy, Automatic Construction of Decision Trees from Data: A Multi-Disciplinary Survey, Data Mining and Knowledge Discovery, v.2 n.4, p.345-389, December 1998 | bayesian networks;learning;graphical models;hidden variables;knowledge discovery;learning structure;probabilistic networks |
627752 | Objective-Driven Monitoring for Broadband Networks. | Abstract: An approach to sensor configuration, installation, and activation for real-time monitoring of broadband networks for managing their performance is presented. An objective-driven measurement strategy for establishing the dynamic and statistical databases of the network is described. Objective-driven monitoring allows the activation of sensors for data collection and abstraction based on a set of objectives. The objectives are derived from the quality of service requirements for real-time traffic control and from operator-submitted queries. The methodology of objective-driven monitoring for selective activation of sensors is implemented as a set of rules in the knowledge base of the monitor. | Introduction
Broadband networks consist of many subsystems (switching nodes, multiplexers, links, etc.) that
are geographically distributed, carry multiple classes of traffic and have access to different information
patterns. Although these subsystems make their own local decisions, they work together
for the achievement of the common system wide goal of information transport. The common goal
is to guarantee the Quality of Service (QOS) negotiated during the call setup for each of the traffic
classes[1]. The QOS is specified through a set of performance parameters.
Monitoring of these parameters and of all network resources, such as buffer space, switching
and communication bandwidth, and call processing, is required in order to guarantee the QOS[1].
A network monitoring system should also be applicable to several representative networks. There-
fore, a proposed set of measurement parameters must be network independent [2]. They must
be declared in generic terms, such as throughput, time-delay, arrival rate, inter-arrival time, etc.
Sensors (measurement points) for these parameters must be made available in all the networks to
be monitored. A set of objective criteria or strategies are needed by which sensors can be selectively
activated and deactivated among a large number of sensors in a distributed environment.
One of the main objectives of the monitoring task is the real-time support of the network control
and management system during the decision making process. A consistent view of the network is
assumed to be available for monitoring [3].
The monitoring of networks can be viewed at different levels of abstraction. Monitoring takes
place both at hardware and software level depending upon the hardware and software components
that support the information transport. In [4] a network operation center to monitor, control, and
manage ARPANET-like packet-switching networks is presented. In [5], [6], [7], and [8], network
monitoring is done for LANs or interconnected LANs carrying only single class (data) traffic. In
the latter work, major emphasis was on the evaluation of usage of communication resources. In
[9], monitoring of a metropolitan area network, called MAGNET II, is carried out by hardware
observation units (HOU) connected to network access points. Real-time traffic measurements are
reported. The quality of service of traffic classes in the network is evaluated by monitoring the buffer
occupancy distribution, the packet time delay distribution, the packet loss, and the gap distribution
of consecutively lost packets. In [10], the monitoring of switching resources was considered for
managing AT&T's dynamic non-hierarchical routing algorithm for automatic as well as operator
oriented control of the network.
Since a network can be considered to be a distributed system, the approaches to monitoring
of distributed systems can also be applied to monitoring broadband networks. The monitoring of
distributed systems can be classified as event-driven monitoring and as a database approach to
monitoring. Most of the work in event-driven monitoring of distributed systems was done on the
application level. Debugging of distributed systems [11], [12], [13] and parallel programming environments
[14] are typical examples. Here, major emphasis was given to the performance evaluation
of processing resources. In [15] a relational approach to monitoring was presented. In the relational
approach, monitoring is viewed as an information processing activity and the historical database,
a class of relational databases that encode time, is considered an appropriate formalization of the
information processed by the monitor.
In this paper the steps required to configure, install and activate sensors for monitoring broad-band
networks are discussed and a knowledge-based approach is presented as a solution to the
problem. In order to monitor object behavior, sensors need to be configured and installed in the
network. Sensor configuration specifies the characteristics of sensors declared in the knowledge
database of the monitor. These characteristics are specified by a set of attributes and a set of
procedures for operations. Sensor installation involves identification of the measurement points in
the network.
The architecture of our objective-driven monitoring system is knowledge-based. It consists of
a knowledge database and an inference engine for reasoning on the database [16]. The inference
engine consists, in turn, of two parts: a deductive inference processor and a statistical inference
processor. The role of the deductive inference processor is to process the queries about the network
behavior and activate sensors in the network. The role of the statistical inference processor is to
abstract the information obtained by the sensors.
The monitoring system processes queries on the system as well as on the conceptual level, and
sets up sensors to collect information. The system level monitor supports queries only if precise
knowledge about the system is available. On the conceptual level, the monitor allows general
queries without the precise knowledge of the system architecture of the network.
An objective-driven monitoring scheme is presented that selectively activates and deactivates a
subset from among a large number of sensors already installed in the network. The objective-driven
monitoring scheme is closely related to the concept of experimental frame of [17] that characterizes
modeling objectives by specifying the form of experimentation required for obtaining answers to the
questions of interest. For the class of objective-driven monitoring tasks considered in this paper,
the fundamental concepts are derived from the requirements of supporting Quality of Service and
of operator submitted queries. The objective-driven monitoring scheme deals with the problem of
complexity in monitoring broadband networks through the concept of observation frame that we
have earlier proposed [18].
An object-oriented definition of sensors is introduced and a method for specifying the configuration
of the sensors in the network is given. This definition represents an alternative to "a collection
of code" given in [15]. Through the specification of object-specific and variable-specific generic
sensors, we can define the starting and stopping time for monitoring and also how frequently the
observation samples are to be collected and recorded. Since the various measures for performance
analysis are specified through a set of operators, we can easily add a new set of performance measures
or selectively activate a subset of measures. Based on our approach, we can select any object,
state variable, event or their performance parameter for monitoring.
This paper is organized as follows. Section 2 outlines the architecture of the experimental
environment that represents a platform for knowledge-based monitoring of broadband networks.
Section 3 describes the key ideas about sensor configuration, installation and query analysis for
monitoring. Finally, in section 4, the objective-driven measurement strategy and a query based
activation of sensors for broadband networks are discussed.
2. The System Architecture of the Monitor
The architecture of the knowledge-based monitoring system was modeled as a real-time system
(as shown in Figure 1) where the monitor asynchronously interacts with the network through an
interface [19]. Thus, the network can be viewed as the environment for the monitor. The network
behavior is monitored through the interface and the collected information is sent to the monitoring
system that maintains an image of the network. The interface is all the monitor sees of the network.
The characteristics of the interface depend to a large extent on the environment. What is and what
is not part of the interface depends on the specific requirements of the management and control
tasks.
The interface between the network and the monitor consists of a set of state variables. A
state variable is persistently present and throughout its existence it has a value that changes with
progression of time. For the task of monitoring the network, state variables get their values from
the processes operating in the environment. The semantic information about network objects and
the interface are represented by the Entity-Relationship model [20]. The computational model
consisting of a set of sample path and performance evaluation operators defined in [18], [19] is used
to describe various processes that are associated with state variables.
Thus, in representing the environment and the interface, the concept of modularity was achieved
through the object representation. The location and ownership of a state variable was declared
through these objects. These objects are responsible for acquisition, manipulation, and dissemination
of the information of their state variables. Note that, while implementing the network
architecture, one has to explicitly declare a set of state variables that form the interface between
the network and the monitor. These variables characterize the observable behavior of the network.
The exact specification of the interface depends on the monitor and the specific management tasks,
such as performance, fault and configuration management, that the monitor is going to support.
The architecture of the knowledge-based monitoring system (as shown in Figure 2) consists of
the knowledge database and an inference engine for reasoning on the database for query processing,
sensor activation, and interpretation of data collected by the sensors. The inference engine consists
of two blocks: the deductive inference processor and the statistical inference processor. The role of
the deductive inference task is to set up a distributed observation frame, i.e., a data space in which
a query may be answered.
Figure
3 shows the organization of the knowledge database of the network. The knowledge
database is organized as follows. The system level knowledge about the network is represented in
the configuration database. The configuration database contains the knowledge about the network
entities, such as buffers, sources, and servers and their specific instances. Figure 4(a) describes the
attributes of those network entities. The dynamic database contains
the information about the state and event variables of the objects in the configuration database.
Figure
4(b) describes the attributes of the state and event variables. The sensor database contains
the generic description of sensor objects and also any specific class of state and event variables
and all of the instances of the sensor object class. The objects in the sensor database indicate the
specific sampling pattern for data collection and specific sensor instances indicate the activation
of the sensors. The sensor database together with the configuration database forms the static
database. These two databases change much less often than the dynamic database. The dynamic
database only exists for those state and event variables that are measured by activating the sensors
in the network. The statistical database is obtained by applying abstraction operators on both state
and event variables and provides various performance measures for each state and event variable.
Figure
4(c) describes the attributes of the performance parameters.
Typically, a query submitted by the query generator, i.e., a control task or a human operator,
requires information about the performance of certain objects in the network. The query is then
processed to find out the specific instances of the performance parameters of interest. Based
upon these parameters, the deductive inference processor creates a derived object containing the
identified performance parameters and their corresponding state and event variables. An associated
derived sensor capable of monitoring the derived object is also created, which in turn creates the
appropriate sensors in the sensor database for the selected state and event variables. Creation of
the sensors for state variables in the knowledge database activates the sensors in the network and
data is collected. The statistical inference processor then applies the statistical operators, passed
during the query submission or implicit in the performance parameter specification, on the collected
information and transmits the processed information to the query processor.
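To make the flow just described concrete, the following is a minimal sketch of the query path (Python; the class interfaces, attribute names and operator dictionary are hypothetical and do not reflect the monitor's actual implementation):

    def process_query(query, config_db, sensor_db, statistical_ops):
        # Sketch only: config_db, sensor_db, query and statistical_ops are assumed interfaces.
        # 1. Deductive inference: map the query to the state and event variables of the
        #    network objects named in the configuration database.
        variables = [v for obj in config_db.objects(query.object_class)
                     for v in obj.variables if v.name in query.parameters]
        # 2. Create the derived object/sensor pair; creating per-variable sensors in the
        #    sensor database activates the corresponding sensors in the network.
        sensors = [sensor_db.create_sensor(v, initiated_by=query.originator) for v in variables]
        histories = {s.variable_name: s.collect() for s in sensors}
        # 3. Statistical inference: apply the requested abstraction operators (e.g. mean,
        #    variance) to the collected histories and return the result to the query processor.
        return {name: {op: statistical_ops[op](history) for op in query.operators}
                for name, history in histories.items()}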
3. Monitoring
As mentioned in the previous section, the monitor is defined to be a real-time system that maintains
an ongoing relationship with its environment, i.e., the network. The interface between the monitor
and the network is defined by a set of state variables. The interface is all the monitor sees of the
network. Thus, the characteristics of the interface depend to a large extent on the network.
Depending upon the information requirement, a network can be monitored in two ways: monitoring
the states (status monitoring) and monitoring the events. During status monitoring the
collection of values of any of the state variables obtained by sensor activation is recorded. The rate
at which the information is generated by sensors depends on the speed of operation of the corresponding
objects, such as buffers, servers and sources. For example, on the network access level,
the rate of generation of state information may be equal to the packet arrival and departure rate;
and at the session layer, it may be equal to the rate of arrival of new calls. Events are abstractions
of state variables obtained by applying the sample path operators of the computation model over
time. An event derived from a state variable is recorded by a sensor as an event variable.
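As an illustration of deriving an event variable from a state variable, a minimal sketch of one possible sample path operator, upward threshold crossing (Python; the operator choice and the sample values are my own example):

    def threshold_crossings(history, threshold):
        # history: list of (timestamp, value) samples of a state variable such as
        # buffer occupancy; emits a time-stamped event record at each upward crossing.
        events = []
        for (t_prev, v_prev), (t, v) in zip(history, history[1:]):
            if v_prev <= threshold < v:
                events.append({"event": "threshold_exceeded", "time": t, "value": v})
        return events

    occupancy = [(0, 3), (1, 5), (2, 9), (3, 12), (4, 8), (5, 14)]
    print(threshold_crossings(occupancy, threshold=10))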
The design of the network monitor can be characterized by the following steps that are in part,
adapted from [15]:
• Step 1: Sensor Configuration
This step involves the design of the sensor, i.e., specification of its attributes and the procedures
of operation that handle the necessary interaction with the monitor, enabling and
disabling the sensors and buffering of monitored data and requested tasks. The sensor attributes
specify the starting and stopping times for monitoring, how frequently the monitored
events, measures, or resources are to be recorded, and other related performance information
to be collected simultaneously;
• Step 2: Sensor Installation
This step involves identifying the state and event variables that are to be monitored by
sensors.
• Step 3: Query Analysis Specification
This step specifies how to decompose a query, activate sensors and create various dataspaces
for information abstraction.
• Step 4: Execution
This step is comprised of activating the sensors, generating and abstracting the data collected
from the network, transmitting the data from the network to the monitor, and finally
presenting the data on a graphics terminal. (This step is discussed in section 4.)
Even though we have adopted the steps in [15], there are differences between [15] and our
approach. In [15] the approach was relational, whereas our approach is object-oriented. Since a
sensor monitors state variables, there exists only one type of sensor that needs to be configured for
installation in the network. Thus, we need to instrument an object only once in order to obtain
various measures, such as average and variance of buffer occupancy from the buffer state variable
associated with a buffer. In our approach the various performance measures are defined as operators
for the sensor.
In [15] there is no concept of object-specific or variable-specific generic sensors. Through the
inheritance mechanism of the object-oriented approach we can specify object or variable specific
generic sensors. In the case of sensor installation, we show how to select the measurement points
based on performance management objectives. These measurement points are based on the actual
variables that are responsible for generating information and on which the performance measures
are applied. In the query analysis specification step we show how the data transformation takes
place during data collection and how the interpretation is done based on this data. We have a clear
separation between the raw data that is collected and its abstractions. In this step we show how
the information provided by a simple query is used to identify sensors, collect data, detect events
and how abstraction and interpretation is done on the collected data.
3.1 Sensor Configuration
A sensor is defined to be an object with a set of attributes and a set of operators (algorithm or
code), implemented either in hardware or software. The sensors installed in the network collect
information about the state or event variables of an object and transfer it to the monitor. From
an implementation point of view, every state and event variable includes its sensor as a component
object. Sensor operations are executed by the set of sample path and statistical operators described
in [18], [21].
Sensors installed in the network to monitor the state variables of an object are termed primitive
sensors. The primitive sensor corresponds to the object class SENSOR in the sensor database; its
attributes are the same as those of SENSOR (as shown in Figure 5(a)). The attribute sensor code id
specifies the operator to be applied to abstract information from the history of a state variable. The
attribute initiated by indicates the initiator of the query based on which the sensor is activated.
Primitive sensors contain the code for sample path and performance evaluation operators.
The attributes of a sensor are defined based on the requirements for both status and event
monitoring and they are shown in Figure 5(a). The abstraction operators of a primitive sensor
operate on two time scales. The sample path operators abstract events on the time scale of the
state variables. The performance evaluation operators operate on both state and event variables
on a time scale based on the interval for statistics collection. The parameters for performance
evaluation operators are provided by a set of sensor attributes. These attributes are sample count,
sample on, sample off, duration of activation, and sampling interval.
The sensor attribute sample count specifies the total number of state and event variable
samples that are to be collected for statistical inference. The average over a fixed number of samples
is computed based on the value of the attribute sample count, and this is the default procedure for
evaluating the average over a period. If the value of sample count is not specified and the attribute
sample on is specified, then the latter is used with sampling interval to compute the total number
of samples to be collected for statistical inference. The specific values of sample count or sample on
for monitoring a state variable are determined by the rate at which the variables are changing their
values. Their values are also determined by the control algorithm that is managing the object.
In order to repeat the statistical inference process, the sensor attribute duration of activation
specifies the duration of the monitoring process or the duration of time the sensor remains active.
The attribute sample off of sensor specifies the duration between two consecutive measurement
intervals, i.e., sample on period.
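To make the sampling attributes just listed concrete, the following sketch groups them into a small configuration object and applies the default rule stated above (use sample count when given, otherwise derive the number of samples from sample on and sampling interval). All class and field names here are illustrative assumptions, not the attribute list of Figure 5(a).

from dataclasses import dataclass
from typing import Optional

@dataclass
class PrimitiveSensorConfig:
    sample_count: Optional[int] = None        # total samples for statistical inference
    sample_on: Optional[float] = None         # length of one measurement interval (seconds)
    sample_off: Optional[float] = None        # gap between two measurement intervals (seconds)
    duration_of_activation: Optional[float] = None  # how long the sensor stays active (seconds)
    sampling_interval: float = 1.0            # time between consecutive samples (seconds)

    def samples_per_interval(self) -> int:
        # Default rule from the text: use sample_count when given; otherwise
        # derive the number of samples from sample_on and sampling_interval.
        if self.sample_count is not None:
            return self.sample_count
        if self.sample_on is not None:
            return int(self.sample_on / self.sampling_interval)
        raise ValueError("either sample_count or sample_on must be specified")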
Primitive sensors are activated by sending them a message. Conversely, a primitive sensor
transmits information to the monitor by sending a message. Each message is time stamped with
the time of creation of the information sent. If the transmitted message contains the value of a
state variable then the time of creation indicates the last sampling time. If the message contains
an event indication, then the time of creation indicates the event occurrence time. If the message
contains the information about a performance parameter of a state or event variable, then the time
of creation indicates when the value of the performance parameter was computed.
Primitive sensors are provided with the capability to queue up multiple requests for monitoring.
The messages sent by the monitor to the sensor contain information about the specific sample path
and performance evaluation operators to be applied to the collected values. In order to allow
multiple users or control algorithms to query the state and event variables, the primitive sensors
are provided with the ability of both one-to-one and one-to-many communications.
Two subclasses of primitive sensors, STATUS SENSOR and EVENT SENSOR, are defined to monitor
the state and the event variables, respectively. The STATUS SENSOR inherits all the attributes of
SENSOR. The relationship type MONITORING-GENERIC-STATE-VAR and MONITORING-STATE-VAR establishes
the relationships between the sensor and the subclass of a state variable and the specific
instance of the state variable being monitored, respectively. The EVENT SENSOR is declared as a
subclass of the STATUS SENSOR and thus inherits all of its attributes. The EVENT SENSOR has an
additional attribute, event operator id, that defines the operator for extracting the event. Since
the behavior of all objects is represented by their state variables, only one type of primitive sensor
needs to be configured. Thus, no matter how complex an object is, it can be monitored as long as its
state variables are declared. Therefore, no object-specific sensor needs to be configured.
In order to monitor an object whose behavior is defined by a set of state or event variables,
derived sensors are defined. The derived sensors are an aggregation of a set of primitive and derived
sensors. Derived sensors belong to the object class DERIVED SENSOR, which is a subclass of SENSOR.
The DERIVED SENSOR is obtained based on the primitive sensors associated with the state and event
variables of an object to be monitored. An instance of DERIVED SENSOR created for monitoring
an object will contain the corresponding instance of STATUS SENSOR and/or EVENT SENSOR. The
DERIVED SENSOR maintains a list of primitive or derived sensors by the relationship attribute
DERIVED FROM. Since the behavior of an object is always expressed by its state and event variables,
the sensor that monitors an object is always a member of the subclass DERIVED SENSOR and is an
aggregate object containing the status and event sensors. Thus, in order to monitor the state of a
BUFFER, a DERIVED SENSOR will be created for the BUFFER and it is composed of a STATUS SENSOR
that monitors the state variable representing the BUFFER's state. If an object is an aggregation of
a set of objects, the DERIVED SENSOR for the aggregate object will consist of the DERIVED SENSORs
of the component objects.
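One way to picture the sensor classes described in this subsection is the small inheritance/aggregation sketch below. The Python class and attribute names mirror SENSOR, STATUS SENSOR, EVENT SENSOR, and DERIVED SENSOR from the text, but the code itself is an illustrative assumption rather than the schema stored in the sensor database.

class Sensor:                                # object class SENSOR
    def __init__(self, sensor_code_id, initiated_by):
        self.sensor_code_id = sensor_code_id  # operator used to abstract information
        self.initiated_by = initiated_by      # initiator of the query

class StatusSensor(Sensor):                  # monitors a state variable
    def __init__(self, sensor_code_id, initiated_by, state_variable):
        super().__init__(sensor_code_id, initiated_by)
        self.state_variable = state_variable  # MONITORING-STATE-VAR relationship

class EventSensor(StatusSensor):             # adds the event-extraction operator
    def __init__(self, sensor_code_id, initiated_by, state_variable, event_operator_id):
        super().__init__(sensor_code_id, initiated_by, state_variable)
        self.event_operator_id = event_operator_id

class DerivedSensor(Sensor):                 # aggregate of primitive/derived sensors
    def __init__(self, sensor_code_id, initiated_by, derived_from):
        super().__init__(sensor_code_id, initiated_by)
        self.derived_from = list(derived_from)  # DERIVED-FROM relationship

# A derived sensor for a BUFFER would aggregate the status sensor of the
# buffer's state variable; an aggregate object gets the derived sensors of
# its component objects, mirroring the aggregation rule in the text.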
The DERIVED SENSOR may be specialized to represent object specific monitoring information
such as the sampling pattern. Thus, in order to obtain the behavior of an object NETWORK STATION,
SENSOR NETWORK STATION, a subclass of DERIVED SENSOR, is defined. The relationship between
SENSOR NETWORK STATION and the corresponding object class is established by MONITORING-
GENERIC-OBJECT, and the relationship between the specific instance of the SENSOR NETWORK STATION
and the specific instance of NETWORK STATION, which is being monitored, is established by MONITORING-
OBJECT-INSTANCE.
Whenever a specific object in the network is to be monitored, an instance of the DERIVED SENSOR
is created in the sensor database. The value of the MONITORING-GENERIC-OBJECT of DERIVED SENSOR
specifies the class name of the object that the derived sensor is monitoring. The instance of the
DERIVED SENSOR and its association with the object in the configuration database is deleted when
the sensor is deactivated at the end of the monitoring period. The derived sensors only exist in the
sensor database. Unlike primitive sensors, no counterpart of derived sensor exists in the network.
Both primitive and derived sensor instances are stored in a database called sensor database, as
shown in Figure 3.
3.2 Sensor Installation
Sensor installation allows the selection of the measurement points in the network, i.e., the state
variables of the network objects that define the interface between the monitor and the network.
Since network object and state variables can be uniquely identified in the system, the events associated
with state variables and their performance parameters can be selected. In the modeling
process of the monitor in [21] state and event variables were identified based on the performance
management requirements. The sensor location was determined by the location of the identified
state variables in the network.
Along with sensor configuration, the installation of primitive sensors is the only manual process
associated with our monitoring scheme. Since the specification of state variables is to be decided
based on the performance objectives of the control tasks, these two steps will now be required
to be carried out during the specification and design step of the network. This is by itself a
manual process. Thus, the design of the network monitor has been shifted to the network design
phase. This design process results in a robust network design that requires very little tuning during
operations. The identification of the sensor location during the specification phase helps to alleviate
the reliability and the correctness problem of sensor operations.
3.3 Query Analysis Specification
Query analysis specification derives levels of abstraction of the collected information for performance
analysis. It also selects the performance analysis criteria and algorithms for various performance
measures. As shown in Figure 6(a), a transaction for a query has three parts: an identification
function (I) that selects the state or event variables of a specific object to be monitored, a data
transformation function (F) for performance evaluation through statistical inference, and an infer-
ence rule (R) to be applied to the abstracted data. The dataspace generated by monitoring a state
or an event variable is denoted by the circle F t in Figure 6(a). It is created after activating the
sensor associated with a state or an event variable. The data transformation function F (selected
from the set of statistical operators) is applied on the dataspace F t to abstract information from
the history of a state or an event variable. Application of F generates a dataspace G t that consists
of only statistical information. If the statistical abstraction is not needed then F is reduced to the
identity operator. The inference rule R operates on the dataspace of G t to further evaluate the
statistical information, e.g., for event detection through threshold crossing. If no such operation is
needed, then the rule R is reduced to the identity operator.
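Read as a pipeline, a transaction is the composition R(F(I(history))), with F and R defaulting to identity operators when no abstraction or rule is needed. The sketch below is an illustrative assumption of that composition; the intermediate values correspond to the dataspaces F t and G t, and all names are placeholders.

def identity(x):
    return x

def run_transaction(identify, history, F=identity, R=identity):
    """identify (I) selects the history of a state or event variable (dataspace F_t);
    F abstracts it into statistical information (dataspace G_t);
    R evaluates G_t, e.g., event detection through threshold crossing."""
    F_t = identify(history)   # raw monitored values of the selected variable
    G_t = F(F_t)              # statistical abstraction (identity if not needed)
    return R(G_t)             # inference result (identity if no rule is needed)

# Example: average buffer occupancy with a threshold-crossing rule.
occupancy_history = {"buffer_0_1_I": [3, 5, 7, 9, 6]}
result = run_transaction(
    identify=lambda h: h["buffer_0_1_I"],
    history=occupancy_history,
    F=lambda samples: sum(samples) / len(samples),
    R=lambda avg: {"threshold_crossed": avg > 8, "average": avg},
)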
The deductive inference rule is used to decompose a query into a set of simple queries and then
aggregate the information received by servicing the simple queries. For example, in order to find
the total average time delay of a call, a set of sensors at the nodes along the route of the call is
to be activated to measure the time delay at each node. Once the average time-delay from every
node is available, they are aggregated to compute the total average time delay. Therefore, a query
for the average time delay of a call will be divided into multiple simple queries and appropriate
sensors will be activated to measure the average time delay at every node along the route of the call.
Figure 6(b) describes such a scheme, where SENSOR 1 through SENSOR N measure the time delay at
each node along the route of the call. The function f(D 1 , ..., D N ) in Figure 6(b) represents the
deductive part of the query for data aggregation and it is applied after the data is collected from
the appropriate sensors.
We may also want to find out whether a certain average throughput-time delay condition, at a
buffer of a node, is satisfied. In this case, we need to activate sensors for both the throughput and
the average time-delay and send an event indication if the average throughput-time delay condition
is not met. In order to do that, the original query will be divided into two simple queries based on
the throughput and the time-delay to be computed. Figure 6(b) represents such cases where the
original query is divided into multiple simple queries identifying each of the state variables to be
monitored; f(D 1 , ..., D N ) indicates the function that generates an event if the throughput-time-
delay condition is not met.
Figure 6(c) represents the case when a state variable is monitored for status monitoring and
event reporting or multiple event reporting. f(D 1 , ..., D N ) represents any deduction to be done
after the data is collected. One such deduction scheme is the correlation between two events
generated from the same variable. This scheme can also be used to define higher level events based
on the history of event variables.
4. The Objective-Driven Measurement Strategy
In the process of performance management a set of objectives often lead to asking a specific set
of questions about the network. The questions of interest could be: does the network support a
specified performance or can the network provide enhanced performance? They can be answered
after appropriate monitoring functions are incorporated into the system and the observed data is
processed.
Monitoring as a process determined by objectives is called objective-driven monitoring. Objective-
driven monitoring is closely related to the concept of experimental frame of [17] that characterizes
modeling objectives by specifying the form of experimentation that is required to obtain answers
to the questions of interest.
A query to collect information from the network can be submitted to the monitor either by a
user from a terminal or by the various knowledge specialists responsible for network control and
management. Such agents are called Query Generators (as shown in Figure 2). The submitted
query can be of two types: real-time data query and non-real-time data query. The real-time
data query represents the query on those objects whose attributes are updated using the sensors
located in the various subsystems of the network. The non-real time data query represents the
query on those objects whose attribute values do not depend upon the sensory information. The
non-real-time queries are handled based on the information available in the knowledge base of the
monitor.
The real-time data queries are handled by using the deductive query processing technique, where
the inference and retrieval phases of the query have been separated [22], [15]. The schematic of
such query processing is shown in Figure 2. Based on the relationships established in the knowledge
database between the various objects, the monitor decomposes the requested query into a set of
simple queries and analyzes them to determine specific state and event variables that need to be
monitored. Once the state and event variables are identified, the corresponding sensors in the network are
activated. Activated sensors then collect information about the state and event variables and thus
update the dynamic database shown in Figure 3. If the query requires statistical abstraction of the
collected information, then the statistical inference processor applies the corresponding operators
on the state and event variables and updates the statistical database.
As described before, one way to find the total average time delay of a call is to activate sensors
to measure the time delay at the nodes along the route of the call and then aggregate the average
time-delays. A more elegant solution, however, is to first obtain a derived object that contains
all the state variables that exhibit the average time delay of the intermediate nodes along the
route of the call. A derived sensor can be attached to this object and finally adding up the time
delays leads to the required result. Thus, to answer a query only a restricted data space, called an
observation frame, is needed. The observation frame contains only the state and event variables,
the performance parameters, and the derived sensor and its components. In the next section a
general methodology for an objective-driven measurement strategy is described.
4.1 Deductive Inference
Real-time control algorithms for resource allocation operate based on a set of cost functions and
a set of constraints on the behavior of the variables of a system. These system variables could be
either describing the state of the system or a statistical abstraction of its state. Thus, a control task
first leads to monitoring the system variables. In the case of integrated networks, QOS parameters
define the target operating points and maintaining these QOS parameters near the operating point
becomes the control objective of network operations.
Based on the specified QOS parameters, a set of performance parameters are identified. The
difference between the QOS parameters and the performance parameters is that the latter depend
on the systems architecture of the network. From the specification of the QOS parameters, the corresponding
performance parameters are derived by the deductive processor based on the knowledge
about the system architecture of the network. The latter resides in the configuration database. As
an example, the maximum average end-to-end time delay might be a QOS parameter. The average
time delay experienced by a call in a given network is the performance parameter associated with
it. It is the aggregation of all the average time delays at nodes and links along the route of the call.
Thus, the request for monitoring a QOS parameter is a query consisting of the class name
of an object and the corresponding performance parameter. This general query can be made
specific by providing values for one or more key attributes of the object class. Based on the
submitted query, the deductive inference processor identifies specific objects and the performance
parameters that need to be monitored. First, all the objects in the knowledge base that contain
the appropriate performance parameter are identified. Second, the instances of the performance
parameters associated with the selected objects are identified. Third, the instances of state and
event variables associated with the selected instances of the performance parameters are identified.
For each of the selected objects a component object is created and relationships are established
between the new object and the selected performance parameters and the corresponding state
and event variables. In order to monitor the selected objects, a derived sensor is associated with
the each of the component objects. The creation of the derived sensor generates the instances
STATUS SENSOR and/or EVENT SENSOR for each of the state and event variables associated with the
component object. Creation of the STATUS SENSOR or EVENT SENSOR activates the corresponding
primitive sensors installed in the network. The component object, the associated derived sensor,
and the collected information represent a data space called observation frame. Thus, the observation
frame forms a restricted data space. The answer to queries is obtained by examining, processing,
and aggregating monitored information in this space. The statistical inference process takes place
only after data has been collected by the sensors.
4.2 An Algorithm for Objective-Driven Monitoring
The operation of the deductive inference processor described above can be formalized into an
algorithm consisting of the following steps:
1. identify the instances of the object class specified in the query;
2. identify the instances of the performance parameter specified in the query associated with
the selected objects;
3. for each of the selected objects, identify the instances of state and event variables associated
with the selected performance parameters;
4. create a new object and associate with it the selected performance parameters and the state
and event variables of all the selected objects;
5. associate a derived sensor with the new object and create sensors that monitor the state
and/or event variables;
6. activate the sensors in the network;
7. apply statistical inference procedures to evaluate the performance parameter.
The above steps are in part adopted from the "objectives-driven" methodology for modeling of
systems [17].
The main goal of the above algorithm is to automatically identify the network sensors to be
activated in order to compute a generic performance parameter associated with a given object
class.
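As one way to read these seven steps, the sketch below strings them together in code. The kb object and all of its methods (find_instances, perf_parameters, variables_of, create_obj_view, create_derived_sensor, collect, evaluate) are assumed stand-ins for the knowledge-base and sensor-database operations described in the text, not an actual interface of the monitor.

def objective_driven_monitoring(kb, object_class, generic_perf_parameter, key_values):
    # Step 1: identify the instances of the object class named in the query.
    objects = kb.find_instances(object_class, key_values)
    # Step 2: identify the matching performance-parameter instances.
    params = [p for o in objects for p in kb.perf_parameters(o, generic_perf_parameter)]
    # Step 3: identify the state/event variable instances behind those parameters.
    variables = [v for p in params for v in kb.variables_of(p)]
    # Step 4: create a new component object (OBJ VIEW) and associate the
    # selected parameters and variables with it.
    obj_view = kb.create_obj_view(objects, params, variables)
    # Step 5: attach a derived sensor, which creates status/event sensors
    # for the variables of the component object.
    derived_sensor = kb.create_derived_sensor(obj_view, variables)
    # Step 6: activate the corresponding primitive sensors in the network.
    for sensor in derived_sensor.components():
        sensor.activate()
    # Step 7: apply statistical inference to evaluate the performance parameter.
    samples = {v: kb.collect(v) for v in variables}
    return kb.evaluate(params, samples)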
Figure 7 describes the semantic network associated with an instance SWITCH BUFFER 0 1 I. In
step 1, for a given object class name and key values, we identify the instances of the object
class that match the description. Then, in step 2, we identify the instances of the performance
parameters that match the generic performance parameter specified in the query. For example, if
we are interested in the THROUGHPUT of a selected instance SWITCH BUFFER 0 1 I, then THROUGHPUT
SWITCH BUFFER 0 1 I will be selected. Given a specific instance of a performance parameter, in step
3 we identify the corresponding instance of variable associated with the object. In Figure 7, the
variable PACKET OUTOF BUFFER 0 1 I is associated with THROUGHPUT SWITCH BUFFER 0 1 I. In step
4, we create an instance of an object class OBJ VIEW, as shown in Figure 8 and associate the
selected performance parameters and the corresponding variables with the new object. In step 5,
we create an instance of DERIVED SENSOR and associate it with the instance of OBJ VIEW. A side-effect
of association of a derived sensor with an object is that it will create instances of primitive
sensors (STATUS SENSOR or EVENT SENSOR) to monitor the variables associated with the object.
The instances of OBJ VIEW and DERIVED SENSOR together form the OBSERVATION-FRAME. In step
6, instances of the primitive sensors send messages to the network to activate the sensors installed in
the network. In step 7, statistical inferences are applied to compute the value of the performance
parameter from the network measurement and sent to the observation frame in the knowledge base.
4.3 Examples
Two examples are given for objective-driven monitoring. The first shows the evaluation of the
throughput of a buffer while the second gives the evaluation of the time delay of a call.
4.3.1 Monitoring a Traffic Buffer
In the following example, it is shown how to monitor the average throughput of a buffer at a network
station of MAGNET II [23] network. Let us assume that the query requests the THROUGHPUT of
SWITCH BUFFER with the following key attribute-value pairs: buffer id = I, node no = 0, and st no =
1.
Based on the specification of the attributes node no, st no, and buffer id, SWITCH BUFFER 0 1 I,
which is an instance of SWITCH BUFFER, is identified as the object to be monitored. Then,
THROUGHPUT SWITCH BUFFER 0 1 I, the specific instance of THROUGHPUT for SWITCH BUFFER 0 1 I, is
identified as the performance variable of the OBJECT-VIEW. Based on the relationship PERF-
OF-STATE-VAR between the THROUGHPUT and PACKET OUTOF BUFFER, the event variable PACKET-
OUTOF BUFFER SWITCH BUFFER 0 1 I is identified. Once the performance parameter and the event
variable are identified, an instance of the object class OBJ VIEW is created. The new object is
considered a weak entity of the SWITCH BUFFER 0 1 I and it is uniquely identified by its own class
name (OBJ VIEW), the class name of the object being monitored, and the key attributes of the latter. The
corresponding relationship type establishes the association between the new object
and the performance parameter THROUGHPUT SWITCH BUFFER 0 1 I. Similarly, the relationship type
HAS-OBJ-VIEW-STATE-VAR establishes the association between the new object and the state variable
PACKET OUTOF BUFFER SWITCH BUFFER 0 1 I.
Once the instance of OBJ VIEW is created, a derived sensor is attached to the object. Based on
the relationship OBJECT-VIEW-OF-GENERIC-OBJECT, the derived sensor is created as an instance of
SENSOR SWITCH BUFFER, which is a specialization of DERIVED SENSOR for SWITCH BUFFER. SENSOR-
SWITCH BUFFER contains the SWITCH BUFFER specific sampling information and it is a subclass of
DERIVED SENSOR. If the object class SENSOR SWITCH BUFFER does not exist then the derived sensor
is created as an instance of object class DERIVED SENSOR. The relationship MONITORING-OBJECT
establishes the association between the instances of OBJ VIEW and DERIVED SENSOR. The attributes
of SENSOR SWITCH BUFFER are shown in Figure 5(b). The existence of the derived sensor implies
the creation of an instance of EVENT SENSOR for the event variable PACKET OUTOF BUFFER SWITCH BUFFER 0 1 I.
It also establishes the relationship MONITORING-EVENT-VAR between the event variable
and the instance of the EVENT SENSOR.
Creation of the sensor causes it to send a message for activation of the primitive sensor associated
with PACKET OUTOF BUFFER SWITCH BUFFER 0 1 I in the network, and the activation of the sensor starts
the measurement of the variable. Once the measurement is completed and the statistical
operators are applied, the value of the throughput performance parameter is sent back to the
knowledge base.
4.3.2 Monitoring the Time Delay of a Call
The maximum time delay of a call appears as a QOS constraint for the Class-I traffic of MAGNET
II. In order to guarantee that this requirement is met, the time-delays of Class I calls are requested.
Let CALL be the object class that represents a call with key attributes calling user id,
called user id, and traffic class. Let the two users of a call CALL A B I be A and B and
the values of attributes calling user id, called user id, traffic class be A, B, and I, respec-
tively. The association between a call and the corresponding nodes and links along the route of
the call is needed. Since a node has multiple access points, the description of the route needs to
include the name of the input-output buffers at all the nodes. The relationship type HAS-VCKT-
ROUTE associates the buffers and servers (links and switches) along the route with the call. Thus,
the route for the call between the users A and B (shown in Figure 9), will contain the following
objects: B j1,i1,k , B j1,i2,k , B j2,i1,k , and B j2,i2,k as buffers, N j1 and N j2 as switch fabrics, and L j1,j2 as
links, where the index j n indicates the node number, i n indicates the access point at the node, and k
indicates the traffic class. The relationship type HAS-VCKT-ROUTE introduces another relationship
HAS-COMPONENT-OBJECT, to reflect the fact that buffers, switches, and links form part of the call.
Both relationship types HAS-VCKT-ROUTE and HAS-COMPONENT-OBJECT are included as multi-valued
attributes in the list of attributes of the object class CALL.
Since the maximum time delay of a call will be the aggregation of the maximum time delays of
the buffers and servers along the route of the call, an aggregate performance parameter is defined to
represent the maximum time delay of the call. The new object class is called AGGR-PERF-PARAMETER.
It is defined as a subclass of PERF-PARAMETER, but has an additional procedure associated with it
for aggregating the values of its component performance parameters. The time delay of the object
class CALL is defined as TIME DELAY CALL which is a subclass of AGGR-PERF-PARAMETER. The value
of the attribute max value of TIME DELAY CALL is the sum of the values of the attribute max value
of its component performance parameters.
In Figure 10, the description of the CALL and its time delay performance parameter, TIME DELAY CALL,
are shown. The object TIME DELAY CALL is declared as a subclass of AGGR PERF PARAMETER and
associated with the generic object CALL.
In order to monitor the maximum time delay of the call, the following objective is defined:
Find the maximum time delay of a call between the pair of users A and B;
Based on the algorithm for objective driven monitoring, the steps for computation of the average
time delay of a call can be described as follows:
1. Create an observation frame OBJ VIEW CALL A B I for CALL A B I with:
• HAS-PERF-PARAMETER TIME DELAY CALL A B I
2. For each TIME DELAY VAR SWITCH BUFFER jn ;i n ;I , send a message to the corresponding sensor
at the buffer to compute the maximum time delay. The location of the variable is given by
the values of j n and i n and the buffer is identified based on the buffer id (equal to I).
3. Once the value of TIME DELAY SWITCH BUFFER jn ;kn is computed, all the values are sent to
OBJ VIEW CALL A B I.
4. When all the TIME DELAY VAR SWITCH BUFFER jn ;i n ;I are available, TIME DELAY CALL A B I is
computed based on equation 4.1.
TIME DELAY CALL A B I = (sum over n = 1, ..., R of TIME DELAY VAR SWITCH BUFFER jn,in,I )
+ (sum over n = 1, ..., R-1 of LINK jn,jn+1 .time delay),    (4.1)
where R is the total number of nodes in the route of CALL A B and LINK jn,jn+1 .time delay is
the fixed transmission time delay of the link LINK jn,jn+1 .
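Reading equation 4.1 as the sum of the buffer delays along the route plus the fixed link delays, which is how the surrounding text describes the aggregation, the computation reduces to the small sketch below; the function and variable names are assumptions used only for illustration.

def call_time_delay(buffer_delays, link_delays):
    """buffer_delays: one delay per buffer along the route of the call;
    link_delays: the fixed transmission delays of the links on the route."""
    return sum(buffer_delays) + sum(link_delays)

# Example: a route through three nodes connected by two links.
total_delay = call_time_delay(buffer_delays=[2.1, 3.4, 1.8], link_delays=[0.5, 0.5])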
5. Conclusion
A step-by-step design procedure of sensor configuration and activation for monitoring network
behavior has been presented. The sensor configuration uses the modeling approach for specifying
the attributes of the sensors and the procedures for sensor operations.
An objective driven measurement strategy has been presented that selectively activates the
sensors needed for collecting the required information. The objectives for monitoring are obtained
from the real-time control tasks for resource management or from operator-submitted queries. The queries
are processed by a deductive inference processor that identifies the state variables that are to be
monitored. The role of the deductive inference processor is to set up an observation frame, i.e., a
data space in which only data relevant to the query is allowed. The answer to queries is obtained
by examining, processing, and aggregating monitored information in the data space. The sample
path and statistical operators are applied to compute the performance of the network.
Acknowledgments
The research reported here was supported in part by the National Science Foundation under Grant
CDR-84-21402 and in part by the New York State Center for Advanced Technology under Project
--R
"An Architecture for Integrated Networks that Guarantees Quality of Service,"
"Traffic Measurements in Data Networks, Recent Measurement Results, and Some Implications,"
"Network Management with Consistently Managed Objects,"
"NU: A Network Monitoring, Control, and Management Sys- tem,"
"A Multi-purpose, Distributed LAN Traffic Monitoring Tool,"
"A Distributed Approach to LAN Monitoring Using Intelligent High performance Monitors,"
"A Measurement Center for the NBS Local Area Computer Networks,"
"Management of Distributed Measurement over Interconnected Net- works,"
"Real-Time Traffic Measurements on MAGNET II,"
"NEMOS - The Network Management System for the AT&T Long Distance Network,"
"Event Driven Monitoring of Distributed Programs,"
"Monitoring Distributed Systems,"
"An Approach to High-Level Debugging of Distributed Systems,"
"Monitoring and Performance Measuring Distributed Systems During Operation,"
"A Relational Approach to Monitoring Complex Systems,"
"Knowledge-Based Monitoring of Integrated Networks,"
Academic Press
"Monitoring of Integrated Networks for Performance Manage- ment,"
"Objective-Driven Monitoring,"
"The Entity-Relationship Model - Toward a Unified View of Data,"
Knowledge Based Monitoring of Integrated Networks for Performance Manage- ment
"Deductive Query Processing in a Codasyl Data Base,"
"MAGNET II: A Metropolitan Area Network Based on Asynchronous Time Sharing,"
--TR
--CTR
Salvatore Gaglio , Luca Gatani , Giuseppe Re , Alfonso Urso, A Logical Architecture for Active Network Management, Journal of Network and Systems Management, v.14 n.1, p.127-146, March 2006 | sensor;monitoring;knowledge-based systems;quality of service;performance management;network |
627754 | Optimization of Parallel Execution for Multi-Join Queries. | In this paper, we study the subject of exploiting interoperator parallelism to optimize the execution of multi-join queries. Specifically, we focus on two major issues: 1) scheduling the execution sequence of multiple joins within a query, and 2) determining the number of processors to be allocated for the execution of each join operation obtained in 1). For the first issue, we propose and evaluate by simulation several methods to determine the general join sequences, or bushy trees. Despite their simplicity, the heuristics proposed can lead to the general join sequences that significantly outperform the optimal sequential join sequence. The quality of the join sequences obtained by the proposed heuristics is shown to be fairly close to that of the optimal one. For the second issue, it is shown that the processor allocation for exploiting interoperator parallelism is subject to more constraints, such as execution dependency and system fragmentation, than those in the study of intraoperator parallelism for a single join. The concept of synchronous execution time is proposed to alleviate these constraints. Several heuristics to deal with the processor allocation, categorized by bottom-up and top-down approaches, are derived and are evaluated by simulation. The relationship between issues 1) and 2) is explored. Among all the schemes evaluated, the two-step approach proposed, which first applies the join sequence heuristic to build a bushy tree as if under a single processor system, and then, in light of the concept of synchronous execution time, allocates processors to execute each join in the bushy tree in a top-down manner, emerges as the best solution to minimize the query execution time. | Introduction
There has been a growing interest in applying general purpose parallel machines to database applications
[7, 8, 12, 19, 34, 43, 50]. Several research systems have been developed to explore this
trend, including GAMMA [16], XPRS [49], DBS3 [4], GRACE [31], and BUBBA [6]. Relational
databases have a certain natural affinity to parallelism. Relational operations are set oriented and
this provides the query optimizer lots of flexibility in selecting the parallelizable access path. In
relational database systems, joins are the most expensive operations to execute, especially with the
increases in database size and query complexity [11, 27, 30, 39, 53]. For future database manage-
ment, parallelism has been recognized as a solution for the efficient execution of multi-join queries
[1, 17, 18, 25, 36, 42, 52, 54, 55].
As pointed out in [46], the methods to exploit parallelism in the execution of database operations
in a multiprocessor system can be divided into three categories. First, parallelism can occur in each
operator within a query in such a way that several processors can work, in parallel, on a single
database operation. This form of parallelism is termed intra-operator parallelism and has been
studied extensively. Various solutions for exploiting intra-operator parallelism in multiprocessor
database systems have been reported in the literature. Several algorithms were proposed for parallel
execution of two-way joins in multiprocessor systems [15, 38, 44, 45]. Some researchers further
concerned themselves with multiprocessors of particular architectures such as rings and hypercubes
[3, 40]. The effect of data skew on the performance of parallel joins has also been analyzed in
[14, 32, 51]. The second form of parallelism is termed inter-operator parallelism, meaning that
several operators within a query can be executed in parallel. Third, parallelism can be achieved
by executing several queries simultaneously within a multiprocessor system, which is termed inter-query
parallelism [48]. It can be seen that to exploit the third form of parallelism, one has to
resort to the results derived for inter-operator parallelism within a query. During the past few
years some light has been shed on this issue [13, 21, 22, 37, 42, 46]. As an effort toward this trend,
the objective of this paper is to study and improve the execution of multi-join queries, and devise
efficient schemes to exploit inter-operator parallelism to minimize the query execution time in a
multiprocessor-based database system 1 .
Note that different join execution sequences for a query will result in different execution costs
[47]. Also, the execution time of a join in a multiprocessor system strongly depends on the number
1 The execution time of a query in this paper, similar to that in most related work, means the response time to
complete the query, rather than the total execution time of all processors.
Figure 1: Illustration of different join sequences.
of processors allotted for the execution of that join [32]. For instance, a 40 second execution time
of a join on 4 processors may increase to 60 seconds if only 2 processors are used. Thus, the subject
of exploiting inter-operator parallelism for the execution of a multi-join query comprises two major
issues: (i) join sequence scheduling, or query plan generation, i.e., scheduling the execution sequence
of joins in the query, and (ii) processor allocation, i.e., determining the number of processors for
each join obtained in (i) so that the execution time required for the query can be minimized. Clearly,
the join method affects the optimization procedure to exploit parallelism. Under hash joins, we
have the opportunity of using pipelining to improve the performance [15, 21], whereas such an
opportunity is not available when a join method like the sort-merge join is employed. Note that
pipelining causes the effects on join sequence scheduling and processor allocation to be entangled,
and the resulting cost model of each join and the criteria for processor allocation in the presence
of pipelining will thus be intrinsically different from those developed without using pipelining.
As a result, join methods without and with pipelining have to be dealt with separately for best
optimization results. In this paper, we shall focus on the join methods without pipelining, such as
the sort-merge join that is in fact the most prevalent join method in existing database softwares,
and develop a specific solution procedure. Readers interested in optimization on pipelining multiple
joins, which calls for a different procedure due to its different problem formulation, are referred to
[9, 26, 35].
First, for the issue of join sequence scheduling, we develop and evaluate by simulation several
heuristics to determine the join sequence for a multi-join query with the focus on minimizing
the total amount of work required 2 . Specifically, we investigate two sorts of join sequences, namely
sequential join sequences and general join sequences. A join sequence in which the resulting relation
of an intermediate join can only be used in the next join is termed a sequential join sequence. An
example of a sequential join sequence can be found in Figure 1a where every non-leaf node (internal
node) represents the resulting relation from joining its child nodes. A join sequence in which the
resulting relation of a join is not required to be only used in the next join is termed a general join
sequence. For example, the sequence of joins specified by the join sequence tree in Figure 1b is a
general join sequence. Such an execution tree of a general join sequence is called a bushy tree [22],
or composite inners [41].
Note that the bushy tree join sequences did not attract as much attention as sequential ones
in the literature. As a matter of fact, it was generally deemed sufficient, by many researchers,
to explore only sequential join sequences for desired performance in the last decade. This can be
in part explained by the reasons that in the past the power/size of a multiprocessor system was
limited, and that the query structure used to be too simple to require further parallelizing as a
bushy tree. It is noted, however, that these two limiting factors have been phased out by the rapid
increase in the capacity of multiprocessors and the trend for queries to become more complicated
nowadays, thereby justifying the necessity of exploiting bushy trees. Consequently, we propose
and evaluate by simulation several join sequence heuristics in this paper to efficiently determine
general join sequences of good efficiency. As can be seen from our results, the heuristics proposed,
despite their simplicity, result in general join sequences which significantly outperform the optimal
sequential join sequence. This is especially true for complex queries. More importantly, it is shown
that the quality of the general join sequences obtained by the proposed heuristics is fairly close to
that of the optimal general join sequence, meaning that by employing appropriate heuristics we
can avoid excessive search cost and obtain join sequences with very high quality.
Next, we explore the issue of processor allocation for join operations. In the study of intra-operator
parallelism, the objective is usually to determine the processor allocation which achieves
the minimum execution time of a single join. Such a selection is referred to as operational point
selection in this paper. However, in exploiting inter-operator parallelism, we, in contrast, are dealing
with the execution of a complex query with multiple joins where different joins are allowed to be
2 Note that "minimizing the total amount of work in a join sequence" is only for join sequence scheduling, and
should not be confused with the overall objective to minimize the query execution time.
executed in parallel in different clusters of processors. As will be seen later, minimizing the execution
time of a multi-join query, in addition to the operational point selection as in the study of intra-operator
parallelism, requires more factors, such as execution dependency and system fragmentation,
to be considered. Execution dependency means that some joins cannot be performed until their
operands generated by prior joins are available. Also, after a sequence of processor allocation and
release, there might be a few processors left idle since they do not form a cluster large enough
to execute any remaining join efficiently. This phenomenon is termed system fragmentation [11].
Clearly, execution dependency and system fragmentation, as well as the operational point selection,
have to be taken into account for a better processor allocation strategy, thus complicating the
minimization procedure for the query execution time. To deal with this problem, we propose and
evaluate several heuristics to determine the number of processors for each join. The processor
allocation heuristics proposed can be divided into two categories: (1) the bottom up approach,
where the number of processors allocated to each internal node (join) in a bushy tree is determined
as the bushy tree is being built bottom up, and (2) the top down approach, which, in light of the
concept of synchronous execution time, determines the processor allocation based on a given bushy
tree. The concept of synchronous execution time is employed to deal with processor allocation so
that input relations for each join can be made available approximately the same time. It is shown
that the concept of synchronous execution time will significantly alleviate execution dependency
and system fragmentation, and hence improve the query execution time.
Note that to conduct performance study on the execution of a multi-join query, the schemes
on join sequence scheduling and processor allocation are integrated to form a final scheduler. As
shown by our simulation results, the join sequence scheduling is in general the dominating factor
for the query execution time whereas processor allocation becomes significant as the number of
processors and query complexity increase. Thus, as confirmed by our simulation, among all the
schemes investigated, the two-step approach of first applying the join sequence heuristics to build
a bushy tree as if under a single processor system, and then determining the processor allocation
in light of the concept of synchronous execution time for the bushy tree built emerges as the best
solution to minimize the query execution time.
This paper is organized as follows. The notation and assumptions used are given in Section
2. In Section 3, we study several join sequence heuristics. Sequential and general join sequences
are addressed in Sections 3.1 and 3.2 respectively, and simulation results are presented in Section
3.3. Processor allocation is dealt with in Section 4. Bottom up and top down approaches are
respectively developed in Sections 4.1 and 4.2, followed by their simulation results in Section 4.3.
This paper concludes with Section 5.
2 Preliminaries
In this study, we assume that a query is of the form of conjunctions of equi-join predicates and
all attributes are renamed in such a way that two join attributes have the same attribute name
if and only if they have a join predicate between them. A join query graph can be denoted by a
graph G=(V,E), where V is the set of nodes and E is the set of edges. Each node in a join query
graph represents a relation. Two nodes are connected by an edge if there exists a join predicate on
some attribute of the two corresponding relations. An edge between R i and R j in a query graph
is said to be shrunken if that edge is removed from the graph and R i and R j are merged together.
Notice that when a join operation between the two relations R i and R j in a given query graph is
carried out, we can obtain the resulting query graph by shrinking the edges between R i and R j
and merging the two relations together to represent the resulting relation from the join operation.
We use jR i j to denote the cardinality of a relation R i and jAj to denote the cardinality of the
domain of an attribute A. The notation (R i ,R j ) is used to mean the join between R i and R j , and
denotes the resulting relation of (R i ,R j ). For notational simplicity, we denote the execution
tree in Figure 1a as ((((R 1 ,R 2 ),R 3 ),R 4 ),R 5 ), and that in Figure 1b as ((R 1 ,R 2 ),((R 3 ,R 4 ),R 5 )). As in
most prior work on the execution of database operations in multiprocessor systems, we assume that
the execution time incurred is the primary cost measure for the processing of database operations.
In that sense, it has been shown that the join is the most expensive operation and that the cost
of executing a join operation can mainly be expressed in terms of the cardinalities of the relations
involved. Also, we focus on the execution of complex queries, which becomes increasingly important
nowadays in real databases due to the use of views [53]. As mentioned earlier, different join
methods and different multiprocessor systems will result in different execution costs for a join, and
we shall address the join methods without utilizing pipelining, such as the sort-merge join, in this
paper. The effect of pipelining is examined in [9, 35]. It is worth mentioning that due to its stable
performance, the sort-merge join is the most prevalent join method used in some database products
to handle both equal and nonequal join queries [2, 24]. The hash-based join, though having good
average performance, suffers from the problem of hash bucket overflow and is thus avoided by many
commercial database products. The architecture assumed in this study is a multiprocessor-based
database system with shared disks and memory [5]. The cost function of joining R i and R j can then
be expressed as a function of jR i j and jR j j, which is general and reasonable for joining large relations by
the sort-merge join [28, 29, 51]. Also, all the processors are assumed to be identical and the amount
of memory available to execute a join is assumed to be in proportion to the number of processors
involved.
To facilitate our discussion, the performance of a scheduling scheme is assessed by the average
execution time of plans generated by this scheduling scheme. The efficiency of the join sequence,
measured by its execution on a single processor system, is termed join sequence efficiency, and the
effectiveness of processor allocation, determined by the speedup achieved over the single processor
case, is termed processor allocation efficiency. The overall efficiency for dealing with the above
two issues then depends on the two factors. Note that to best assess the performance impact
of certain factors in a complicated database system, it is generally required to fix some factors,
and evaluate only the interesting ones. Similarly to most other work in performance, we adopt the
above approach in this paper and concentrate on investigating the effects of join sequence scheduling
and processor allocation. It is hence assumed that we have several shared disks and enough disk
bandwidth for I/O operations. The effect of resource (i.e., disk/network bandwidth) contention,
which is modeled in [18], is assumed to have similar effects on the schemes evaluated, and thus
not addressed in this paper. It is noted that even when the disk bandwidth is a bottleneck, join
sequence scheduling schemes generating smaller intermediate relations will in general tend to have
better performance. In that case, however, the data placement in disks will become an important
issue for performance improvement, which is beyond the scope of this paper. Also, we refer readers
interested in such issues as execution for a single sort-merge join and the use of indices to improve
one join operation to prior work on intra-operator parallelism [28, 29, 51]. Optimization on these
issues is system dependent, and in fact orthogonal to the relative performance among schemes
evaluated in this paper. Besides, we assume that the values of attributes are uniformly distributed
over all tuples in a relation and that the values of one attribute are independent of those in another.
The cardinalities of the resulting relations from join operations can thus be estimated according to
the formula in [10], which is given in the Appendix for reference. Note that this assumption is not
essential but will simplify our presentation. Also, all tuples are assumed to have the same size. In
the presence of certain database characteristics and data skew, we only have to modify the formula
for estimating the cardinalities of resulting relations from joins accordingly [20, 23] when applying
our join sequence scheduling and processor allocation schemes. Results on the effect of data skew
can be found in [27, 51].
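For reference, an estimate consistent with these uniformity and independence assumptions is jR i jjR j j divided by the product of jAj over the joining attributes A. The sketch below applies it; it is offered as an illustration of the assumption, not as a restatement of the exact Appendix formula from [10].

def estimate_join_cardinality(card_i, card_j, joining_attr_domains):
    """Uniformity/independence estimate for an equi-join:
    |R_i join R_j| ~ |R_i| * |R_j| / prod(|A|) over the joining attributes A."""
    size = card_i * card_j
    for domain_size in joining_attr_domains:
        size /= domain_size
    return size

# Example: joining relations of 102 and 100 tuples over one join attribute
# with 15 distinct values gives 102 * 100 / 15 = 680 tuples, consistent with
# the composite cardinality of 680 shown in Table 2 after joining R2 and R4.
estimate = estimate_join_cardinality(102, 100, [15])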
3 Determining the Execution Sequence of Joins
In this section, we shall propose and evaluate various join sequence heuristics. Specifically, we focus
on sequential join sequences in Section 3.1 and general join sequences, i.e., bushy trees, in Section
3.2. Simulation results by different heuristics are given in Section 3.3. For the objective of showing
the effect of a join sequence on the total work incurred, we in this section consider the execution
of joins under a single processor system. The join sequence efficiencies of various join sequences
are compared with one another. Clearly, our results in this section on improving the join sequence
efficiency are applicable to both multiprocessor and single processor systems. The combined effects
of join sequence scheduling and processor allocation are discussed in Section 4.
3.1 Schemes for Sequential Join Sequences
First, we investigate the sequential join sequences resulted by the following two methods: (1) the
greedy method, denoted by SGD , and (2) the optimal permutation, denoted by SOPT , where S
means "sequential join sequence" and the subscripts mean the methods used.
The greedy scheme SGD can be outlined as follows. First, the scheme starts with the join which
requires the minimal execution cost. Then, the scheme tries to join the composite with the relation
which has the minimal-cost join with the existing composite. The above step is repeated until
all joins are finished. It can be seen that the complexity of SGD is O(jV j 2 ). Moreover, we also
investigate the optimal sequential join sequence which can be obtained by the optimal permutation
of relations to be joined. It can be seen that the number of different sequential join sequences for a
query of n relations is n!, which is half of the total number of permutations of n objects since the
first two relations can be interchanged. To evaluate the optimal sequential join sequence in Section
3.3 where different join sequences are compared by simulation, we implemented scheme SOPT in
which the technique of branch and bound is used to avoid exhaustive enumeration and reduce the
cost of search. For better readability, the implementation detail of SOPT , which is irrelevant to the
quality of the join sequence resulted, is not included in this paper.
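The greedy strategy of SGD can be sketched as follows; join_cost and join_result are caller-supplied stand-ins for the cost model of Section 2 and the cardinality estimate of the Appendix, so the code illustrates the selection strategy rather than the exact implementation used in the simulations.

def greedy_sequential_sequence(relations, join_cost, join_result):
    """S_GD sketch: start with the cheapest join, then repeatedly join the running
    composite with the relation that gives the cheapest next join.
    join_cost(a, b) estimates the cost of joining a and b; join_result(a, b)
    returns the (estimated) profile of the composite. Both are stand-ins."""
    rels = list(relations)
    a, b = min(((r, s) for i, r in enumerate(rels) for s in rels[i + 1:]),
               key=lambda pair: join_cost(*pair))
    sequence = [(a, b)]
    composite = join_result(a, b)
    remaining = [r for r in rels if r is not a and r is not b]
    while remaining:
        nxt = min(remaining, key=lambda r: join_cost(composite, r))
        sequence.append((composite, nxt))
        composite = join_result(composite, nxt)
        remaining.remove(nxt)
    return sequence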
To show the resulting join sequences by SGD and SOPT , consider the query in Figure 2a whose
profile is given in Table 1. From the operations of SGD and the formula in the Appendix, it
can be seen that the join between R 2 and R 4 is the one with the minimal cost among all joins.
After the join (R 2 ,R 4 ), the resulting query graph and its profile are given respectively in Figure
2b and Table 2, where R 2 now represents the resulting composite. Then, it can be verified that
R 5 is the relation which will have the minimal-cost join with R 2 , and the execution of (R 2 ,R 5 )
Figure 2: Two states of an example query graph: (a) the original graph, (b) the resulting graph
after joining R 2 and R 4 .
in Figure 2b is performed. Following the above procedure, we have the resulting join sequence
by SGD , (((((R 2 ,R 4 ),R 5 ),R 6 ),R 3 ),R 1 ) whose total cost is 45,246.43. On the other hand, it can
be obtained that for the query in Figure 2a, the optimal sequential join sequence by SOPT is
cost is 36,135.92, which is less than that required by SGD .
It is interesting to see that the first join performed by SOPT is (R 1 ,R 3 ), rather than (R 2 ,R 4 ) which
is the first one chosen by SGD .
relation     R 1    R 2    R 3    R 4    R 5    R 6
cardinality  118    102    106    100    131    120
(a). Cardinalities of relations.
attribute    A    B    C    D    E    F    G
cardinality  19   15   17   19
(b). Cardinalities of attributes.
Table 1: The profile of the query in Figure 2a.
relation     R 1    R 2    R 3    R 5    R 6
cardinality  118    680    106    131    120
(a). Cardinalities of relations.
attribute    A    B    C    D    E    G
cardinality  19   15   17   19
(b). Cardinalities of attributes.
Table 2: The profile of the query in Figure 2b.
3.2 Schemes for General Join Sequences
It can be seen from the cost function presented in Section 2 that the joins whose operands are of
larger sizes usually have higher costs. This observation suggests the following heuristic to explore the
general join sequence in order to reduce the total cost incurred. First, we perform the minimal-cost
join, and then, from the resulting query, choose the minimal-cost join to perform. This procedure
repeats until all joins are finished. Note that this heuristic, though efficient, is greedy in nature
in that only "local optimality" is considered, and thus need not lead to a resulting join sequence
with the minimal cost. Based on this heuristic, scheme GMC , where G means that the resulting
sequence is a general join sequence and the subscript MC stands for "the join with minimal cost",
is outlined below. It can be seen that unlike SGD , the resulting composite of a join by GMC need
not participate in the next join.
Scheme GMC : /* A heuristic scheme to execute the join with the minimal cost. */
begin
1. repeat until jV j = 1
2. begin
3. Choose the join (R i ,R j ) from G=(V,E) such that the cost of joining R i and R j is the minimum among all the joins in E.
4. Perform the join (R i ,R j ).
5. Merge R i and R j to R min(i,j) . Update the profile accordingly.
end
end
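The loop above can be sketched in code as follows. Here graph, join_cost, and join_result are assumed inputs (the last returning the estimated profile of the composite relation, e.g., via the Appendix formula); setting use_result_size=True switches the selection key from estimated cost to estimated result cardinality, which corresponds to the GMR variant introduced later in this subsection.

def general_join_sequence(graph, join_cost, join_result, use_result_size=False):
    """G_MC sketch (G_MR when use_result_size=True): repeatedly pick the join with
    the minimal estimated cost (or minimal estimated result cardinality), perform
    it, and shrink the corresponding edge of the query graph."""
    nodes = dict(graph["nodes"])               # node id -> relation profile
    edges = {frozenset(e) for e in graph["edges"]}
    sequence = []
    while len(nodes) > 1 and edges:            # connected query graphs assumed
        def edge_key(edge):
            u, v = tuple(edge)
            if use_result_size:
                return join_result(nodes[u], nodes[v])["cardinality"]
            return join_cost(nodes[u], nodes[v])
        u, v = sorted(min(edges, key=edge_key))
        sequence.append((u, v))
        nodes[u] = join_result(nodes[u], nodes[v])   # merge R_j into R_min(i,j)
        del nodes[v]
        # Redirect v's edges to u and drop the resulting self-loops.
        edges = {frozenset(u if x == v else x for x in e) for e in edges}
        edges = {e for e in edges if len(e) == 2}
    return sequence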
For the example query in Figure 2a, it can be verified from Figure 2b and Table 2 that after
the first minimal-cost join (R 2 ,R 4 ) is performed, the next minimal-cost join to be executed by GMC
is (R 5 ,R 6 ), rather than (R 2 ,R 5 ) as in SGD . The resulting sequence is (((R 2 ,R 4 ),(R 5 ,R 6 )),(R 1 ,R 3 )),
whose total cost is 13,958.62, significantly less than those required by SGD and SOPT . The execution
Figure 3: Different execution trees resulted by different join sequence heuristics: (a) SOPT , (b) GMC , (c) GMR , (d) GOPT .
trees resulted by SOPT and GMC are shown in Figures 3a and 3b, respectively. It can be seen that
the complexity of the scheme GMC is O(jV jjEj), rather close to O(jV j 2 ) required by SGD 3 .
Note that in a sequence of joins, the cardinalities of intermediate relations resulting from the
early joins affect the costs of joins to be performed later. Since the objective taken is to minimize
the total cost required to perform a sequence of joins, one may want to execute the joins which
produce smaller resulting relations first. In view of this fact, we develop and evaluate the following
heuristic scheme which is a variation of GMC , namely the minimal resulting relation (GMR ). Instead
of finding the minimal-cost join as in GMC , the scheme GMR searches for the join which results
in the minimal resulting relation 4 . Clearly, the heuristic scheme GMR is of the same complexity,
O(jV jjEj), as scheme GMC . Algorithmic form of GMR is similar to the one of GMC , except that
the statement 3 in GMC is changed to 3A below.
3A. (for GMR ) Choose the join (R i ,R j ) from G=(V,E) such that the cardinality of its resulting relation, jR i 1 R j j, is the minimum among all the joins in E.
3 For the simulation in [11], the run times required by the two schemes are almost the same.
4 Another heuristic, which chooses the join (R i ,R j ) with the minimal expansion (i.e., the minimal increase in the cardinality of the resulting relation over those of its operands), was also
evaluated [11], and found to provide mediocre performance. Its
results are thus not reported in this paper.
Following GMR , the resulting join sequence for the query in Figure 2a is ((((R 1 ,R 3 ),R 6 ),R 5 ),(R 2 ,R 4 )),
whose bushy tree is shown in Figure 3c. The associated cost is 13,288.38, showing a better join
sequence efficiency than the one obtained by GMC . This fact can be further justified by the simulation
results in Section 3.3. Moreover, to assess the performance of the heuristics, we implemented
scheme GOPT to determine the optimal general join sequence for a multi-join query. Same as in
SOPT , we enumerate possible candidate sequences in our implementation of GOPT and employ the
technique of branch and bound to prune the search. Using GOPT, we find that the optimal general join sequence for the query in Figure 2a is the one whose execution tree is shown in Figure 3d; it requires a cost of only 13,013.57, which is in fact rather close to the costs obtained
by GMC and GMR . Clearly, such an optimal scheme, though leading to the optimal solution se-
quence, will incur excessive computational overhead which is very undesirable in some applications
and might outweigh the improvement it could have over the heuristic schemes. As can be seen in
the following, the heuristic schemes GMC and GMR , despite their simplicity, perform significantly
better than SGD and SOPT , and result in join sequences whose execution costs are reasonably close
to that of the optimal one.
3.3 Simulation Results for Join Sequence Heuristics
Simulations were performed to evaluate the heuristic schemes for query plan generation. The
simulation program was coded in C, and input queries were generated as follows. The number of
relations in a query was pre-determined. The occurrence of an edge between two relations in the
query graph was determined according to a given probability, denoted by prob. Without loss of
generality, only queries with connected query graphs were deemed valid and used for our study. To
determine the structure of a query and also the cardinalities of relations and attributes involved, we
referenced prior work on workload characterization [53] and a workload obtained from a Canadian
insurance company. To make the simulation feasible, we scaled the average number of tuples in a relation down from one million to two thousand. The cardinalities of attributes were also scaled down accordingly so that the join selectivities could still reflect reality.
Based on the above, the cardinalities of relations and attributes were randomly generated from
a uniform distribution within some reasonable ranges. The number of relations in a query, denoted
by n, was chosen to be 4, 6, 8, and 10. For each value of n, 300 queries were randomly generated. For each query, the five scheduling schemes, i.e., SGD, SOPT, GMC, GMR, and GOPT, were applied to determine join sequences to execute the query. When two relations not having join
predicates are to be joined together, a Cartesian product is performed. From our simulation, we
found that relative performance of these schemes is not sensitive to the density of the query graph,
i.e., the number of edges in the graph 5 . The average execution cost for join sequences obtained
5 Note that the "absolute" performance of these scheduling schemes is highly dependent upon the query complexity.
Table 3: The average execution cost for join sequences obtained by each scheme (one row per number of relations n; columns SGD, SOPT, GMC, GMR, GOPT).
from each scheme when prob=0.32 is shown in Table 3. Also, for comparison, we divide the average execution costs of the first four schemes by that of GOPT and show the results associated with Table 3 in Figure 4.
From Table 3 and Figure 4, it can be seen that, except for GOPT, the join sequence efficiency of the sequences obtained by GMR is the best among the four remaining schemes, followed, in order, by those of GMC, SOPT and SGD. The join sequence efficiencies of the sequences produced by GMC and GMR are quite close to the optimal one and significantly better than those
by SGD and SOPT , especially when the number of relations increases. For the sizes of queries
simulated here, the run times of SGD , GMC and GMR under the RS/6000 environment are very
close to one another whereas those of SOPT and GOPT are larger than them by more than three
orders of magnitude due to their exponential complexity.
4 Processor Allocation for Executing Each Join
As pointed out in Section 1, to minimize the execution time of a multi-join query, it is necessary to
address the following three issues: operational point selection, execution dependency and system
fragmentation. Note that the execution time required for a join operation within a multiprocessor
system depends on the number of processors allocated to perform the join, and their relationship
can be modeled by an operational curve 6 , as evidenced in prior results on intra-operator parallelism
[32, 51]. Basically, increasing the number of processors will reduce the execution time of a join until
a saturation point is reached, above which point adding more processors to execute the join will,
on the contrary, increase its execution time. This is mainly due to the combined effects of the limited parallelism exploitable and the excessive communication and coordination overhead incurred over too many processors. An example of an operational curve for this phenomenon is shown by the solid curve in Figure 5, where a dotted curve xy=30 is given for reference. On such a curve, the operational point chosen, depending on the design objective, generally lies between the point
Discussions on this issue can be found in [33, 41].
6 Note that every join has its operational curve.
Figure 4: Performance results of different join sequence heuristics (average execution costs of SGD, SOPT, GMC and GMR normalized to the minimal cost required by GOPT, for n = 4, 6, 8 and 10 relations).
which minimizes the execution time of the join, referred to as the minimum time point and denoted by p_M, and the one which optimizes execution efficiency, i.e., minimizes the product of the number of processors and the execution time, referred to as the best efficiency point and denoted by p_B.
Formally, the execution efficiency of allocating k processors to execute a join is defined as the ratio
(exe. time on one proc.) / (k * exe. time on k proc.),
which represents the efficiency of such an allocation; for example, this ratio can be evaluated at every point of the operational curve in Figure 5. To improve the processor allocation efficiency, we
not only have to utilize the information provided in the operational curve for the operational
point selection, but are also required to comply with execution dependency and avoid system
fragmentation as much as possible so as to minimize the execution time of the query.
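As a concrete and purely illustrative reading of these definitions, the sketch below derives the two operational points from a tabulated operational curve; the curve values are assumed for the example and are not taken from Figure 5.

def operational_points(exec_time):
    # exec_time: {k: T(k)}, the execution time of one join on k processors
    t1 = exec_time[1]
    p_m = min(exec_time, key=exec_time.get)                  # minimum time point
    eff = {k: t1 / (k * t) for k, t in exec_time.items()}    # efficiency T(1)/(k*T(k))
    p_b = max(eff, key=eff.get)                              # best efficiency point
    return p_b, p_m

# assumed sample curve (arbitrary time units), shaped so that p_b < p_m
curve = {1: 60.0, 2: 24.0, 4: 10.0, 8: 4.5, 12: 3.4, 16: 3.0, 20: 3.1, 24: 3.4}
p_b, p_m = operational_points(curve)   # p_b == 8 and p_m == 16 for this curve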
Consequently, we propose and evaluate in the following several heuristics to determine the
number of processors allocated for the execution of each join. The heuristics proposed can be divided
into two categories: (1) the bottom up approach, which, presented in Section 4.1, determines the
join sequence and processor allocation at the same time, i.e., processors are allotted when a bushy
tree is being built, and (2) the top down approach, which, presented in Section 4.2, determines
the processor allocation based on a given bushy tree. The effectiveness of these heuristics will be
evaluated by simulation in Section 4.3.
Figure 5: An example operational curve of a join in a multiprocessor system (execution time versus the number of processors).
4.1 Bottom Up Approach for Processor Allocation
We introduce below four heuristics for the bottom up approach to determine the processor allocation
(a). Sequential execution (SE):
This heuristic is to use all the processors in the system to execute each join in the query sequen-
tially. It can be seen that inter-operator parallelism is absent when this heuristic is used, and the
join sequence is the key factor to the performance in such a case.
(b). Fixed cluster size (FS):
This heuristic is to allocate a fixed number of processors for the execution of each join to avoid
system fragmentation. Clearly, by taking the total number of processors as the cluster size, we
have a special case equivalent to heuristic SE.
Note that by using the above heuristics, system fragmentation is avoided since a fixed number
of processors are always released together for a later use. Moreover, under heuristic SE, execution
dependency is inherently observed, since join operations are executed sequentially. However, the
two heuristics may suffer from poor operational point selection because the information provided
cardinality: 100 85 93 106 102 90 101 94
(a) Cardinalities of relations.
attribute:   A B C D E F G H I J K
cardinality: 9 8 7 9 9 ...
(b) Cardinalities of attributes.
Table 4: The profile of the query in Figure 6.
by the operational curve is not utilized to determine the operational point of a join.
(c). Minimum time point (MT):
This heuristic is based on the minimum time point in the operational curve, i.e., the number of
processors used to execute the corresponding join operation is p M . Note that even though this
operational point obtains the minimum execution time for each join, it may not minimize the
execution time of a multi-join query as a whole due to the effect of execution dependency and
system fragmentation.
(d). Time-efficiency point (TE):
Recall that the best efficiency point is the operational point where processors are most efficiently
used to execute the join. However, as can be seen in Figure 5, a scheme based on the best
efficiency point might suffer from execution dependency, since some join operating at its best
efficiency point might take a long execution time to complete due to a small number of processors
used to execute the operation, thus causing long waiting time for subsequent joins. On the other
hand, a scheme based on MT may not use processors efficiently since it may require too many
processors to reach the minimum time point. Clearly, the number of processors associated with
an operational point which can strike a compromise between the execution time and the processor
efficiency should be within the region [p B , p M ]. In view of this, we shall use a combination of
the minimum time point and the best efficiency point, termed as the time-efficiency point, as a
heuristic for our study, i.e., the number of processors k·p_M + (1−k)·p_B is used to execute each join operation, where 0 ≤ k ≤ 1.
Note that the above heuristics for processor allocation can be combined with the schemes
for scheduling join sequences developed in Section 3 to form a final scheduler which handles the
scheduling and processor allocation of a multi-join query in a multiprocessor system. That is, we
use a join sequence heuristic, say GMR , to determine the next join to be considered and employ the
Figure 6: A query to show processor allocation.
appropriate processor allocation heuristic to determine the number of processors to be allocated
for the execution of that join. The operations for the processor allocation and deallocation can be
outlined as follows where the processor allocation heuristic, denoted by h P , can be any of SE, FS,
MT and TE described above, and h_P(J) is the number of processors allocated to execute a join J
under the heuristic h P .
Processor Allocation:
/* P is the number of processors available, initialized as the total number of processors. */
Step 1: Use the join sequence heuristic to determine the next join operation J such that h_P(J) ≤ P and execution dependency is observed, i.e., the two input relations of J are already available. If no such join exists, go to processor deallocation.
Step 2: Allocate h_P(J) processors to execute the join J. P := P − h_P(J).
Step 3: Update the profile by marking J as an ongoing join.
Step 4: Determine the completion time of J and record it in the completion time list of ongoing
joins.
Step 5: Go to Step 1.
Processor Deallocation:
Step 1: From the completion time list, determine the next completion of an ongoing join, say J.
Step 2: Update the profile to reflect that J is completed. P := P + h_P(J).
Step 3: If there is any executable join in the updated query profile, go to processor allocation.
Step 4: Go to Step 1.
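To make the interplay of the two procedures concrete, a minimal event-driven sketch is given below. The profile object, next_join, h_p and exec_time are hypothetical placeholders standing in for the join sequence heuristic, the processor allocation heuristic and the cost model; the sketch also assumes that some ready join always fits within the total number of processors.

import heapq

def run_scheduler(profile, total_procs, next_join, h_p, exec_time):
    # profile: assumed object with has_pending_joins(), mark_ongoing(J), mark_done(J)
    # next_join(profile, free): a ready join J with h_p(J) <= free observing
    #                           execution dependency, or None if none exists
    free, clock, ongoing, tick = total_procs, 0.0, [], 0
    while profile.has_pending_joins():
        # processor allocation: start every join that is ready and fits
        while True:
            J = next_join(profile, free)
            if J is None:
                break
            procs = h_p(J)
            free -= procs
            profile.mark_ongoing(J)
            tick += 1          # tie-breaker so the heap never compares join objects
            heapq.heappush(ongoing, (clock + exec_time(J, procs), tick, J, procs))
        # processor deallocation: advance the clock to the next completion
        clock, _, J, procs = heapq.heappop(ongoing)
        profile.mark_done(J)
        free += procs
    return clock               # completion time of the whole query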
It can be seen that using the above procedures, the execution tree can be built bottom up. To
demonstrate the processor allocation and deallocation, we shall show the operations using heuristics
SE and TE. The operations by FS and MT follow similarly. Consider the query in Figure 6 with
the profile in Table 4. In light of the results on parallelizing sort and join phases [29, 51], the
operational curve of a join can be modeled as a hyperbolic function of N_p, the number of processors employed, where parameters a, b and c are determined by the path length of the system in processing and joining tuples [28, 32, 51], and parameter d is determined by the inter-processor communication protocol. Also, as observed in [32], for sort-merge join, runs
for sorting are usually memory intensive. In view of this and the fact that the amount of memory
available is in proportion to the number of processors involved, we have, for each join, the minimal
number of processors required for its execution according to the sizes of its operands. p B for each
operational curve formulated above can thus be determined for our study. We then ignore the
operational area where the number of processors is less than p B , and consider only the operational
region [p B ,p M ] for efficient execution. Without loss of generality, GMR is used to determine the
next join operation to be executed 7 . Then, for heuristic SE in a multiprocessor system of 32 nodes with a=b=c=1 and d=20, we have the execution sequence shown in Table 5a, where the column W(R_i) gives the cumulative execution cost of R_i and will be used in Section 4.2 to implement the top
down approaches. The bushy tree and its corresponding processor allocation by SE is shown in
Figure 7a. The execution scenarios using the time-efficiency point are shown in Table 5b, where the
time-efficiency point used is determined by 0.3·p_B + 0.7·p_M 8 . The bushy tree and its corresponding
processor allocation by TE is shown in Figure 7b. Note that though the same scheme GMR is used
to determine the next join to be performed in both cases, the resulting join sequences are different
from each other due to different processor allocation scenarios. It can be seen that the bushy tree
in Figure 7b is different from the one in Figure 7a.
4.2 Top Down Approach for Processor Allocation
It can be seen that when an execution tree is built bottom up, the following two constraints have
to be followed: (1) execution dependency is observed, i.e., the operands of the join selected to be
7 The corresponding simulation results by using GMC do not provide additional information, and are thus omitted
in this paper.
8 Different values for k have been evaluated. The choice for k=0.3 is made for its reasonably good performance.
Table 5: Execution sequences for the different heuristics: (a) SE, (b) TE, (c) ST SE, (d) ST TE. For each join in a sequence, the sub-tables list the number of processors allocated, the starting and end times, the resulting relation R_i, and (in Tables 5a and 5b) its cumulative execution cost W(R_i).
Figure 7: Bottom up processor allocation: (a) SE, (b) TE.
performed next do not depend on the resulting relation of any ongoing join, and (2) the processor
requirement is satisfied according to the processor allocation heuristic employed, i.e., the number
of processors required by that join is not larger than the number of processors available then. As
can be seen in Table 5a and 5b, the above two constraints lengthen the execution time of a query
and degrade the performance of a scheduler since the first constraint causes long waiting time for
some operands, and the second can result in idle processors. In view of this, one naturally wants to achieve some degree of execution synchronization, meaning that processors are allocated to joins in such a way that the two input relations of each join can be made available at approximately the same time. Also, idleness of processors should be avoided. As a result, we
propose the top down approach for the processor allocation which uses the concept of synchronous
execution time to alleviate the two constraints and improve the query execution time.
To describe the processor allocation using the synchronous execution time, consider the bushy
tree in Figure 7a for example. Recall that every internal node in the bushy tree corresponds to a
join operation, and we determine the number of processors allocated to each join in a top down manner. Clearly, all processors are allocated to the join associated with the root of the bushy
tree since it is the last join to be performed. Then, those processors allocated to the join on the
root are partitioned into two clusters which are assigned to execute the joins associated with the
two child nodes of the root in the bushy tree in such a way that the two joins can be completed at approximately the same time. The step used for partitioning the processors of the root is then
applied to all internal nodes in the tree in a top down manner until each internal node (join) is
assigned a number of processors. More formally, define the cumulative execution cost of an
internal node as the sum of the execution costs of all joins in the subtree under that internal node.
Also, define the cumulative execution cost of a leaf node (an original relation) as zero. Let R i
be a relation associated with an internal node in the bushy tree and R x and R y be the relations
corresponding to its two child nodes. Then, the cumulative execution cost of the node with R i ,
denoted by W(R_i), is determined by
W(R_i) = cost(R_x ⋈ R_y) + W(R_x) + W(R_y),
where cost(R_x ⋈ R_y) is the execution cost of the join that produces R_i.
Note that the cumulative execution cost of each node can be determined when the bushy tree is
built bottom up. The cumulative execution costs of internal nodes for the bushy trees in Figures
7a and 7b can be found in Tables 5a and 5b, respectively. Then, it is important to see that to
achieve the synchronous execution time, when partitioning the processors of a node into two clusters
for its child nodes, one has to take into account the cumulative execution costs of the two child
nodes, rather than the execution costs of the two joins associated with the two child nodes. Let R i
be a relation associated with an internal node in the bushy tree and R x and R y be the relations
corresponding to its two child nodes such that W(R_x) ≥ W(R_y). Denote the number of processors
allocated to perform the join generating R i as P(R i ). Then, P(R x ) and P(R y ) are determined,
respectively, by
P(R_x) = ⌈ (W(R_x) / (W(R_x) + W(R_y))) · P(R_i) ⌉ and P(R_y) = P(R_i) − P(R_x).
Since W(R y )=0 if R y is an original relation, we know that when only one child node corresponds
to a join and the other is a leaf node, the former inherits all processors. Note that if the number
of processors allocated to an internal node (join) of a bushy tree, say r processors, exceeds that
required for the minimum time point, we shall employ p M processors to perform that join whereas
using r processors for the subsequent partitioning for the subtree under that internal node. Also,
when the number of processors passed to an internal node in a lower level of the tree is too few to
be further partitioned for efficient execution of joins, sequential execution for the joins in its child
nodes is employed for a better performance. Clearly, there are many different bushy execution trees
for a query. It can be seen that the problem of determining the optimal bushy tree to minimize the
execution time by the concept of synchronous execution time is of exponential complexity. For an
efficient solution, we apply the concept of synchronous execution time to the bushy trees obtained
by the heuristics introduced in Section 4.1.
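A recursive sketch of this top down partitioning is given below; the tree representation, the handling of the p_M cap, and the rounding are illustrative choices consistent with the description above, and the fallback to sequential execution for very small clusters is omitted for brevity.

from math import ceil

class JoinNode:
    # internal node of the bushy tree; a leaf (base relation) is represented by None
    def __init__(self, cost, left=None, right=None, p_m=None):
        self.cost = cost            # execution cost of the join at this node
        self.left, self.right = left, right
        self.p_m = p_m              # minimum time point of this join, if known
        self.procs = 0              # processors used to execute this join

def W(node):
    # cumulative execution cost: sum of the costs of all joins in the subtree
    return 0.0 if node is None else node.cost + W(node.left) + W(node.right)

def allocate(node, procs):
    if node is None:
        return
    # cap at p_M for the join itself, but keep procs for partitioning the subtree
    node.procs = min(procs, node.p_m) if node.p_m else procs
    wx, wy = W(node.left), W(node.right)
    if wx + wy == 0:                # both children are base relations
        return
    big, small = (node.left, node.right) if wx >= wy else (node.right, node.left)
    p_big = ceil(procs * max(wx, wy) / (wx + wy))
    allocate(big, p_big)            # child with the larger cumulative cost
    allocate(small, procs - p_big)  # a leaf child receives the remainder (zero work)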
As pointed out before, different bottom up processor allocation heuristics used may result in
different bushy trees even when the same join sequence heuristic is applied. It is important to see
that although execution time for the sequence in Table 5a (by SE) is larger than that in Table 5b
(by TE), the join sequence efficiency of the bushy tree in Figure 7a is in fact better than that of
the tree in Figure 7b, as shown by their cumulative execution costs in Tables 5a and 5b. Note that
the constraints on execution dependency can get introduced when a bushy tree is being built by
heuristic TE, as well as by FS and MT. Such constraints are absent when heuristic SE is employed
to form the bushy tree. (This explains why the tree in Figure 7a is different from that in Figure 7b.)
Thus, the bushy tree by SE is in fact superior to those by other heuristics in that the former has a
better join sequence efficiency owing to full exploitation of the join sequence heuristics. Therefore,
we shall apply the concept of synchronous execution time to the bushy tree built by SE, denoted by ST SE . For comparison, we also investigate the use of synchronous execution time on the bushy tree built by TE, denoted by ST TE .
The execution scenario using the heuristic ST SE is shown in Table 5c, and the corresponding
bushy tree and processor allocation is shown in Figure 8a. In spite of the fact that the bushy
tree in Figure 8a is the same as that in Figure 7a, the resulting execution times differ due to the
difference in processor allocation. It can be seen that under ST SE , processors are allocated to the
execution of each join in such a way that two joins generating the two operands for a later join can
be completed approximately the same time, thus alleviating execution dependency. Moreover, since
the processors allocated to a node in a bushy tree are partitioned for the allocation to its child
nodes, system fragmentation is eased. This explains why ST SE outperforms SE even though both of them have identical bushy trees and the same join sequence efficiency. The execution scenario using the heuristic ST TE is shown in Table 5d. The bushy tree and its processor allocation by ST TE are shown in Figure 8b, which has the same bushy tree as the one in Figure 7b but differs from the latter in processor allocation. It is important to see that although ST TE outperforms SE, ST SE performs better than ST TE and is in fact the best among the processor allocation heuristics evaluated in Section 4.3.
4.3 Simulation Results for Processor Allocation
The query generation scheme employed in Section 3.3 is used to produce input queries for simulation
in this subsection. As in Section 3.3, 300 queries with a given number of relations involved were
randomly generated with the occurrence of an edge in the query graph also determined by a given
probability prob. For each query, the six scheduling schemes, according to the heuristics of SE, FS,
MT, TE, ST SE and ST TE respectively, are applied to determine the number of processors for
each join to execute the query. As in Section 3.3, the simulation results here also indicate that the
above heuristics are not sensitive to different values of prob. Thus, we shall only show the results for
prob=0.30 in the following. For a multiprocessor of 48 nodes, the average execution times obtained
Figure 8: Top down processor allocation (synchronous execution time): (a) ST SE, (b) ST TE.
by each heuristic for queries of 10, 15, 20 and 25 relations are shown in Table 6a. It can be seen
that heuristic SE, i.e., the one using intra-operator parallelism only, performs well when the number
of relations is 10, but performs worse when the number of relations increases. This agrees with
our intuition since as the number of relations increases, the opportunity to exploit inter-operator
parallelism increases and the constraint imposed by execution dependency becomes relatively less
severe. Also, heuristic FS is in general outperformed by others due mainly to execution dependency
and poor operational points selection. Among the heuristics on bottom up approaches, the shortest
execution time is usually achieved by heuristic TE, especially when the number n is large. This
can be explained by the same reason as mentioned above, i.e., execution dependency is eased when the number of relations is large, and TE thus performs best owing to its efficient use of processors.
Also, from 300 randomly generated queries, the average execution times obtained by the six
heuristics for a query of 15 relations is shown in Table 6b where the number of processors in the
system is varied from 16 to 64. It can be seen that when the number of processors increases,
heuristic SE suffers from the inefficient use of processors, and is thus outperformed by heuristics
MT, TE, ST SE and ST TE by a wide margin. It can also be observed that heuristic TE, which uses processors efficiently to achieve a nearly minimum execution time, performs well when the number of processors is large. Clearly, the more processors there are in the system, the more parallelism can be exploited by heuristic TE. However, MT performs better than TE when pn=64, which can
relation no.   SE       FS       MT      TE     ST SE   ST TE
n=15          9041.1  20828.7   7659.9  7135.4  5990.2  6284.5
(a) When the number of processors is 48.
proc. no.      SE       FS       MT      TE     ST SE   ST TE
pn=48         9041.1  20828.7   7695.9  7135.4  5990.2  6284.5
(b) When the number of relations is 15.
Table 6: The average execution time for each heuristic.
be explained by the fact that when the supply of processors is sufficient, achieving minimum time
point (by MT) becomes a better heuristic than using processors efficiently (by TE). In all, when the
number of processors is small, utilizing intra-operator parallelism (i.e., SE) will suffice to provide
a reasonably good performance. On the other hand, for a large multiprocessor system, one has
to resort to inter-operator parallelism to fully exploit the resources in the system. Note, however,
that without using synchronous execution time, MT and TE, though having a good operational point selection for each join, cannot improve the query response time in a global sense due to the nature of a bottom up approach, and are thus outperformed by ST SE and ST TE . This fact strongly
justifies the necessity of taking execution dependency and system fragmentation into consideration
when inter-operator parallelism is exploited.
As mentioned earlier, although SE is outperformed by TE due to its poor operational point
selection, ST SE remedies this defect by properly reallocating processors using the concept of synchronous
execution time. ST SE can thus outperform ST TE . It is worth mentioning that the
sequential join sequences, such as the one shown in Figure 3a, will not benefit from the concept of
synchronous execution time, since in this case, joins have to be executed sequentially and there is
no inter-operator parallelism exploitable. This fact, together with the fact that the sequential join
sequences usually suffer from poor join sequence efficiency, accounts for the importance of exploring
the general join sequences.
Note that similar to the heuristics in Section 3, the heuristics we investigated here are very
straightforward and require little implementation overhead. In all, our results showed that the join
sequence efficiency is in general the dominating factor for the query execution time whereas the
processor allocation efficiency becomes significant as the number of processors and query complexity
increase. This suggests that for an efficient solution, one can attempt to optimize the join sequence
efficiency by building a good bushy tree first and then improve the processor allocation efficiency by
appropriately allocating processors for the execution of each join. This is in fact how the heuristic
ST SE is constructed.
5 Conclusion
In this paper we dealt with two major issues to exploit inter-operator parallelism within a multi-join
query: (i) join sequence scheduling and (ii) processor allocation. For the first issue, we explored
the general join sequence so as to exploit the parallelism achievable in a multiprocessor system.
Heuristics GMC and GMR were derived and evaluated by simulation. The heuristics proposed,
despite their simplicity, were shown to lead to general join sequences whose join sequence efficiencies
are close to that of the optimal one (GOPT ), and significantly better than what is achievable by the
optimal sequential join sequence (S OPT ), particularly when the number of relations in the query is
large.
Moreover, we explored the issue of processor allocation. In addition to the operational point
selection needed for intra-operator parallelism, we identified and investigated two factors: execution
dependency and system fragmentation, which are shown to be important when exploiting inter-operator
parallelism. Several processor allocation heuristics, categorized by bottom up and top
down approaches, were proposed and evaluated by simulation. To form a final scheduler to perform
a multi-join query, we combined the results on join sequence scheduling and processor allocation.
Among all the schemes evaluated, the two-step approach by ST SE , which (1) first applies the join
sequence heuristic to build a bushy tree to minimize the total amount of work required as if under
a single processor system, and then, (2) in light of the concept of synchronous execution time,
allocates processors to the internal nodes of the bushy tree in a top down manner, is shown to be
the best solution to minimize the query execution time.
--R
Parallel Database Systems
An Overview of DB2 Parallel Edition.
Database Operations in a Cube-Connected Multiprocessor System
A Parallel Database System for Shared Store.
A Performance Comparison of Two Architectures for Fast Transaction Processing.
Prototyping Bubba
The Development of the CROSS8 and HC16-186 (Data- base) Computers
Parallel Features of NonStop SQL.
Applying Segmented Right-Deep Trees to Pipelining Multiple Hash Joins
Combining Join and Semijoin Operations for Distributed Query Processing.
Scheduling and Processor Allocation for Parallel Execution of Multi-Join Queries
Informix Parallel Data Query.
Practical Skew Handling in Parallel Joins.
Multiprocessor Hash-Based Join Algorithms
Parallel Database Systems: The Future of High Performance Database Systems.
Query Optimization for Parallel Execution.
On the Effect of Join Operations on Relation Sizes.
Dataflow Query Processing Using Multiprocessor Hash-Partitioned Algorithms
Sequential Sampling Procedures for Query Size Estimation.
Exploiting Inter-Operator Parallelism in XPRS
On Parallel Execution of Multiple Pipelined Hash Joins.
Considering Data Skew Factor in Multi-way Join Query Optimization for Parallel Execution
System Issues in Parallel Sorting for Database Systems.
Percentile Finding Algorithm for Multiple Sorted Runs.
Query Optimization in Database Systems.
Architecture and Performance of Relational Algebra Machine GRACE.
Effectiveness of Parallel Joins.
On the Effectiveness of Optimization Search Strategies for Parallel Execution Spaces.
Oracle Parallel RDBMS on Massively Parallel Systems.
On Optimal Processor Allocation to Support Pipelined Hash Joins.
Exploiting Database Parallelism In A Message-Passing Multiprocessor
Optimization of Multi-Way Join Queries for Parallel Execution
Join Processing in Relational Databases.
Measuring the Complexity of Join Enumeration in Query Opti- mization
Parallelism in Relational Data Base Systems: Architectural Issues and Design Approaches.
The Kendall Square Query Decomposer.
Design and Evaluation of Parallel Pipelined Join Algorithms.
A Performance Evaluation of Four Parallel Join Algorithms in a Shared-Nothing Multiprocessor Environment
Tradeoffs in Processing Complex Join Queries via Hashing in Multiprocessor Database Machines.
Access Path Selection in a Relational Database Management System.
Multiple Query Optimization.
The Design of XPRS.
A Parallel Sort Merge Join Algorithm for Managing Data Skew.
A Hierarchical Approach to Parallel Multiquery Scheduling.
On Workload Characterization of Relational Database Environments.
Parallel Query Processing.
Parallel Query Processing in DBS3.
--TR
--CTR
Anindya Datta , Debra VanderMeer , Krithi Ramamritham, Parallel Star Join Efficient Query Processing in Data Warehouses and OLAP, IEEE Transactions on Knowledge and Data Engineering, v.14 n.6, p.1299-1316, November 2002
Ming-Syan Chen , Hui-I Hsiao , Philip S. Yu, On applying hash filters to improving the execution of multi-join queries, The VLDB Journal The International Journal on Very Large Data Bases, v.6 n.2, p.121-131, May 1997
Wen-Chih Peng , Ming-Syan Chen, Query Processing in a Mobile Computing Environment: Exploiting the Features of Asymmetry, IEEE Transactions on Knowledge and Data Engineering, v.17 n.7, p.982-996, July 2005
Shortening Matching Time in OPS5 Production Systems, IEEE Transactions on Software Engineering, v.30 n.7, p.448-457, July 2004
Chang-Hung Lee , Ming-Syan Chen, Processing Distributed Mobile Queries with Interleaved Remote Mobile Joins, IEEE Transactions on Computers, v.51 n.10, p.1182-1195, October 2002
Julian R. Ullmann, Partition search for non-binary constraint satisfaction, Information Sciences: an International Journal, v.177 n.18, p.3639-3678, September, 2007 | multi-join query;system fragmentation;synchronous execution time;execution dependency;bushy trees |
627765 | A Knowledge-Based Approach for Retrieving Images by Content. | AbstractA knowledge-based approach is introduced for retrieving images by content. It supports the answering of conceptual image queries involving similar-to predicates, spatial semantic operators, and references to conceptual terms. Interested objects in the images are represented by contours segmented from images. Image content such as shapes and spatial relationships are derived from object contours according to domain-specific image knowledge. A three-layered model is proposed for integrating image representations, extracted image features, and image semantics. With such a model, images can be retrieved based on the features and content specified in the queries. The knowledge-based query processing is based on a query relaxation technique. The image features are classified by an automatic clustering algorithm and represented by Type Abstraction Hierarchies (TAHs) for knowledge-based query processing. Since the features selected for TAH generation are based on context and user profile, and the TAHs can be generated automatically by a clustering algorithm from the feature database, our proposed image retrieval approach is scalable and context-sensitive. The performance of the proposed knowledge-based query processing is also discussed. | Introduction
Retrieving images by content is a key technology for image databases. Pixel matching methods
employed for content-based retrieval are time-consuming and of limited practical use
since little of the image object semantics is explicitly modeled. QBIC [18] uses global shape
features such as area and circularity to retrieve similarly shaped objects. However, due to
the limited precision of global shape features [15], such an approach has limited expressiveness
for answering queries with conceptual terms and predicates. VIMS [1] retrieves similar
images by relaxing feature values of the target image based on the standard deviation of the
features. The same amount of relaxation is applied regardless of the target data values, so this interpretation of similarity is not sensitive to where the target values lie within their value range. In an image
data space, many features are based on multiple attributes. For example, location requires
at least two attributes (i.e., positions on x-axis and y-axis). Using a standard deviation
to interpret the variation of multi-attribute features lacks the consideration of correlation
among different attributes.
In addition to the shape features of image object, spatial relationships between objects
are also important. For example, Chang et al. [4] models the distribution of image objects
using orthogonal spatial relationships. Chu et al. [7] models both the orthogonal and
topological spatial relationships. To support image retrieval and ranking based on spatial
relationship similarity, we need models that allow images with similar spatial relationships
to be further compared and ranked.
Currently, images cannot be easily or effectively retrieved due to the lack of a comprehensive
data model that captures the structured abstracts and knowledge needed for image
retrieval. To remedy such shortcomings, we propose a Knowledge-based Spatial Image
Model (KSIM) which supports queries with semantic and similar-to predicates. Semantic
predicates contain semantic spatial relationship operators (e.g., INSIDE, NEARBY, FAR AWAY,
etc.) and/or conceptual terms (e.g., large, small, etc.). The similar-to predicates allow
users to retrieve images that are closely correlated with a given image based on a prespecified
set of features.
We use an instance-based knowledge discovery technique, MDISC [6], to cluster similar
images based on the user-specified image features (e.g., shape descriptors and spatial rela-
tionships). The knowledge required for resolving the meaning of similar-to and semantic
operators is called image content interpretation knowledge, and is represented based on the
generated clustering knowledge. MDISC can acquire more comprehensive image content interpretation
knowledge than that acquired by other multi-dimensional indexing techniques,
such as K-D-B-tree (used in FIBSSR [17]) and R tree (used in QBIC [18]). This is because
MDISC classifies images based on the conceptual difference of the feature values, while K-D-B-tree and R tree cluster data so as to minimize the number of disk accesses per data
retrieval. In addition, these clustering techniques do not consider the semantic difference
of image features; thus no global conceptual view of the image clustering can be provided
to represent conceptual predicates such as LARGE tumor and tumor NEARBY an organ.
This paper is organized as follows: Section 2 presents the Knowledge-Based Spatial Image
Model (KSIM) which integrates the image representations, extracted image features,
and knowledge representing image semantics and similarity. Section 3 discusses the methodology
of extracting image object features, such as shape features and spatial relationships,
from the object contours. Section 4 presents a methodology to extend existing query languages
for including the proposed operators, and Section 5 describes the required intelligent
interpretation and access. Section 6 presents our knowledge-based query processing tech-
nique, and Sections 7 and 8 present the performance results and our conclusions.
2 The Knowledge-Based Spatial Image Model (KSIM)
A three-layered image model is used to integrate the image representations and image
features together with image content interpretation knowledge. The three layers are the
Representation Layer (RL), the Semantic Layer (SL), and the Knowledge Layer (KL). Each
layer consists of its own constructs, and these constructs are linked for cross-reference.
Raw images are stored in the RL where multiple representations of the same image
objects may exist (e.g., X-ray images, magnetic resonance images, CT images, etc. Image
objects that can be queried are represented by contours in the RL. The contours can be
segmented manually, semi-automatically (e.g., using techniques like snake [12] and flooding
in [18]), or automatically [25, 24] depending on the contrast and separability of the image
objects. Computing image features based on known object contours rather than based
Figure 1: An example representing the brain tumors in KSIM. SR(t,b), SR(t,l), and SR(t,f) represent the spatial relationships between tumor and brain, tumor and lateral ventricle, and tumor and frontal lobe. The detailed TAH for lateral ventricle is shown in Figure 3, and the TAH for SR(t,l) is shown in Figure 6.
on raw images results in features of high certainty. Features of high certainty avoid the
probabilistic interpretation of image features [21]. Contour segmentation routines [25, 12,
14, 24] are available to assist in identifying object contours from raw images.
Despite the enormous efforts toward automatic segmentation of medical images, success
has been limited to only a few types of medical objects. These objects, in general, have
high contrast with respect to their background (e.g., bones in projectional X-rays and
computed tomography, and arteries with contrast agents in X-ray angiography), relatively
simple shapes (breast outline in a mammogram), sizes that are not too small, and little or
no overlap with other objects (e.g., central cross-sectional slice of lateral ventricle of the
brain). In general, large medical image repositories (e:g:, radiological picture archiving and
communication systems) contain diverse instances of complex image objects (anatomy and
pathology), and thus automated segmentation of these objects is the bottleneck for the
large-scale deployment of our technique. The emergence of more intelligent segmentation
routines that use various physical models of the target objects (e.g., lungs and bronchial
tree) [2, 20, 23] to assist in object delineation may result in a greater number of robust and
automated medical image object identification programs.
In the SL, an object-oriented technique is used to model the image content extracted
from the image representations in the RL. Image objects are modeled as feature objects.
Spatial relationships among objects are represented by their spatial relationship features
such as distance of centroids, ratio of overlapping area, etc. Features in the SL are computed
from image object contours by the shape model and spatial relationship model. The shape
model computes the required shape features, and the spatial relationship model computes
the required spatial relationship features. Object-oriented inheritance hierarchies are used
to organize similarly related objects.
In the SL, features are classified into derived features, composite features, and conceptual
features. Derived features are features extracted from the corresponding contour(s) (e.g.,
area of an object contour) or derived from other features (e.g., the ratio of perimeter to area
of a contour). A composite feature combines several features into a multi-attribute feature
to reflect the specific content of an object. For example, the composite feature location
of an image object consists of the x location and y location of the contour's centroid. A
conceptual feature is a composite or derived feature with appended knowledge to represent
the image semantics or similarity based on the feature.
The knowledge layer (KL) contains the logic for interpreting image semantics and image
similarity based on the extracted image feature values. Type abstraction hierarchies (TAHs)
[8, 5, 9], which represent general image concepts in the higher levels and specific concept
in the lower levels, are used to represent the knowledge of the selected object features and
spatial relationships. TAHs provide a way to represent the image semantics and similarity.
Figure
1 illustrates the three-layered modeling and the linking among the representation
of image objects (i.e., contours), semantic relationships among the objects, and knowledge
required for representing brain tumors.
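To make the TAH idea more concrete, the sketch below shows one possible in-memory layout; the node structure, the labels and the numeric ranges are assumptions for illustration and are not the actual MDISC output.

class TAHNode:
    # a TAH node: a cluster of images described by per-feature value ranges;
    # higher nodes carry wider ranges (more general concepts) than their children
    def __init__(self, ranges, label=None, children=()):
        self.ranges = ranges          # e.g., {"tumor.size": (low, high)}
        self.label = label            # optional conceptual term, e.g., "large"
        self.children = list(children)

    def resolve(self, term):
        # map a conceptual term to the value ranges of the node labelled with it
        if self.label == term:
            return self.ranges
        for child in self.children:
            found = child.resolve(term)
            if found is not None:
                return found
        return None

# assumed toy TAH for tumor.size; real ranges would come from MDISC clustering
size_tah = TAHNode({"tumor.size": (0.0, 60.0)}, children=[
    TAHNode({"tumor.size": (0.0, 8.0)}, label="small"),
    TAHNode({"tumor.size": (8.0, 25.0)}, label="medium"),
    TAHNode({"tumor.size": (25.0, 60.0)}, label="large"),
])
print(size_tah.resolve("large"))      # {'tumor.size': (25.0, 60.0)}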
The features of contoured image objects in a database are extracted according to the
shape model and spatial relationship model and stored as a feature database. These features
are then classified by a conceptual clustering algorithm, MDISC [6], and the feature
classification hierarchy is represented in TAHs which provide a multi-level knowledge representation
of the image content based on analyzed features. Such TAHs are used to process
queries with semantic operators (e.g., "Find a large tumor NEARBY the lateral ventricle")
and queries with similar-to operator (e.g., "Find patients with similar brain tumors to pa-
Figure 2: The shape model decomposes a lateral ventricle into four natural sub-structures for more precise shape description: upper left protrusion, upper right protrusion, lower left protrusion, and lower right protrusion. The brain midline divides the lateral ventricle into left and right protrusions, and the horizontal line across the midpoint of the lateral ventricle (the intersection of the lateral ventricle and the brain midline) divides the upper and lower protrusions. The width and height of each protrusion (w_ul, h_ul, w_ur, h_ur, w_ll, h_ll, w_lr, h_lr) are measured, and the tips at the front and back of the brain are identified as the rapid changes of the function generated from the chain code of the brain contour.
tient with id 'P000-001' based on the tumor size and the location of the tumor NEARBY
the lateral ventricle"). The conceptual terms (e.g., large and NEARBY) can be translated
to value ranges of relevant features via TAHs. For example, the value range representing
large-sized tumor can be derived from the TAH for tumor size, and the value ranges
representing NEARBY can be derived from the TAH that specifies the spatial relationship
between tumor and lateral ventricle (i.e., SR(t,l)). For similar-to operator, based on the
query context and user behavior, a set of relevant features representing the similarity of
the target image is selected. The appropriate TAHs that represent these selected features
can be used to derive the feature value ranges of the images that are most similar to the
target image. These derived value ranges are used as the query constraints for retrieving
the similar images. The methodology for extracting features and spatial relationships from
object contours is presented in Section 3, and the methodology for generating the required
knowledge is presented in Section 5.
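The following sketch illustrates, under assumed data, how a TAH could be descended to turn a SIMILAR-TO target into value-range constraints: the most specific cluster containing the target image's feature values supplies the ranges used to retrieve similar images. The feature names and numbers are made up for the example.

def similar_to_ranges(tah, target):
    # tah: (ranges, children), ranges = {feature: (low, high)}, children = nested sub-TAHs
    # target: {feature: value} of the image named in the SIMILAR-TO predicate
    ranges, children = tah
    for child in children:
        child_ranges, _ = child
        if all(lo <= target[f] <= hi for f, (lo, hi) in child_ranges.items()):
            return similar_to_ranges(child, target)
    return ranges     # no child covers the target: this cluster defines the similarity

# assumed two-feature TAH over tumor.size and the tumor-to-lateral-ventricle distance d_c
tah = ({"tumor.size": (0, 60), "d_c": (0, 120)}, [
    ({"tumor.size": (0, 20), "d_c": (0, 40)}, []),
    ({"tumor.size": (20, 60), "d_c": (0, 40)}, []),
    ({"tumor.size": (0, 60), "d_c": (40, 120)}, []),
])
constraints = similar_to_ranges(tah, {"tumor.size": 27.5, "d_c": 18.0})
# constraints == {'tumor.size': (20, 60), 'd_c': (0, 40)}; these ranges become the query constraints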
3 Capturing Object Shape and Spatial Relationship
The shape model and spatial relationship model in the SL are used to extract image features
from contours.
object feature : conceptual terms
tumor.size : small, medium, large
tumor.roundness : circular, non circular
lateral ventricle.left to right symmetry : symmetric, upper protrusion pressed to the right, upper protrusion pressed to the left, lower protrusion pressed to the right, lower protrusion pressed to the left
Table 1: A shape feature description table for the brain
3.1 Modeling Shape
Shape of a contour can be described quantitatively using numeric shape descriptors such
as roundness, curveness, rectangularity, compactness, direction, elongatedness, and
eccentricity [22]. These descriptors are called shape features of the image objects. These
shape descriptors provide a global description of object shape, but lack detailed variations
[15]. We propose a two-staged approach to capture the shape content. In the first stage,
complex contours are decomposed into context-dependent natural sub-structures based on
the fundamental line and curve segments identified by the generated function from
the chain code of the relevant object contours [16, 19]. For example, the lateral ventricle
is decomposed into four protrusions based on the two tips of the brain contour found by
the function from the brain contour as shown in Figure 2. In the second stage,
these more elementary contour components are characterized by their shape features such
as area, height, and width. Thus, we can express the shape and spatial relationships
among these decomposed contours to reflect the specific shape content of the image object.
This two-staged approach allows a more specific and detailed shape description with numerical shape descriptors than applying the descriptors to the whole contour directly [18]. For example, in Figure 2 the height and width of the four components of a lateral ventricle are used to construct a multi-attribute shape feature that describes the left to right symmetry of the lateral ventricle as (upperLRWidthRatio (w_ul/w_ur), upperLRHeightRatio (h_ul/h_ur), lowerLRWidthRatio (w_ll/w_lr), lowerLRHeightRatio (h_ll/h_lr)). Grouping features (e.g., length, width, height, area, etc.) from the decomposed components forms a composite feature that
describes the detailed shape characteristics of the contour.
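A small sketch of this composite feature is given below; the protrusion widths and heights are assumed to have already been measured from the decomposed contour of Figure 2.

def lr_symmetry(widths, heights):
    # widths, heights: keyed by 'ul', 'ur', 'll', 'lr' (upper/lower x left/right), as in Figure 2
    return {
        "upperLRWidthRatio":  widths["ul"] / widths["ur"],
        "upperLRHeightRatio": heights["ul"] / heights["ur"],
        "lowerLRWidthRatio":  widths["ll"] / widths["lr"],
        "lowerLRHeightRatio": heights["ll"] / heights["lr"],
    }

# assumed measurements (same length unit throughout)
feature = lr_symmetry({"ul": 14, "ur": 19, "ll": 12, "lr": 11},
                      {"ul": 22, "ur": 21, "ll": 18, "lr": 19})
# a ratio far from 1 in the upper pair indicates an asymmetric upper protrusion, which the
# TAH of Figure 3 maps to conceptual terms such as "upper protrusion pressed to the right"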
Figure 3: Multi-attribute Type Abstraction Hierarchy (generated by MDISC based on the decomposed four protrusions) representing the left to right symmetry of the lateral ventricles.
Figure 4: An example showing that using semantic operators (e.g., non overlapping) and/or a single measurement (e.g., the shortest distance d_s) is insufficient to capture the spatial relationship of two objects. We need additional features such as the angle of coverage θ_c and the ratio of area r_a to classify the illustrated spatial relationships.
Decomposition provides an effective quantitative shape description when the image objects
have limited numbers of shape components. This description provides sufficient image
content to retrieve similarly or specifically shaped image objects. Conceptual terms can be
defined on a shape feature. The shape feature description table (Table 1) lists the available
conceptual terms for the shape features in the system. Thus, users can ask queries with
conceptual terms for a specific shape feature such as "retrieving lateral ventricles whose
upper protrusion is pressed to the right" (see Query 3 in Section 4).
3.2 Modeling Spatial Relationships
Modeling spatial relationships merely by simple semantic constructs such as separated and
connected is insufficient to compare real-life spatial relationships (as illustrated in Figure 4).
Figure 5: Semantic spatial relationship operators for different topological categories between two objects (with the representative icons shown). The parameters under a branch classify the sub-types under that category. The top-level categories are DISJOINED (with sub-types such as FAR_AWAY and NEARBY) and JOINED (with sub-types such as BORDERING, INVADING, IMPINGING_INTO, BULGING_INTO, ENGULFED, CENTRALLY or PERIPHERALLY CIRCUMJACENT, and OCCUPIED or EXTREMELY_OCCUPIED). The classifying parameters are: x_c and y_c, the distances of the centroids of the two contours on the x-axis and y-axis; d_s, the shortest distance between the two contours; d_c, the distance of the two centroids; θ_c, the angle of coverage, i.e., an angle centered at the centroid of one contour and spanned wide enough to cover the whole area of the other contour; r_c, the ratio of the length of the contacted edge to the perimeter of a particular contour; the ratio of the joined area to that of a particular contour; r_a, the ratio of area of the two contours; and the area of the inner contour.
Table 2: A spatial relationship description table for the brain tumor (columns: spatial relationship, representative features, defined semantic terms).
Additional parameters are needed to describe the spatial relationships more precisely. A set of required spatial relationship features should be specified by domain experts, and the values of these spatial relationships are stored in the database. In Figure 5, useful parameters are illustrated with their importance in distinguishing the topological relationships between two objects. More important parameters for distinguishing the sub-types under a category are placed first in the list, and parameters appearing at higher branches may also be used in their descendant branches. In Figure 5, BORDERING means that only the surfaces of the two objects are joined (i.e., r_c > 0), whereas operators such as INVADING imply that the areas of the objects are joined (i.e., one of the objects is deformed by the other), up to the extreme case where the joined area ratio reaches 100%. The operators marked as required apply to every spatial relationship.
In an image with a tumor and lateral ventricle, for example, the spatial relationship
instance between the tumor and lateral ventricle is classified as an instance of the class
SR(t,l). This spatial relationship requires θ_c, d_c, x_c, and y_c to represent it. These values
are computed based on the object contours. The spatial relationship description table (as
shown in Table 2) lists the representative parameters and available semantic terms for the
spatial relationships in the system.
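As an illustration of how such representative parameters could be computed from two segmented contours (given as point lists), the sketch below derives x_c, y_c, d_c and the angle of coverage θ_c. The geometric definitions follow the descriptions accompanying Figure 5, the centroid is approximated by the mean of the contour points, and the remaining parameters (shortest distance, contact and area ratios) are omitted.

import math

def centroid(contour):
    xs, ys = zip(*contour)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def spatial_features(contour_a, contour_b):
    (ax, ay), (bx, by) = centroid(contour_a), centroid(contour_b)
    x_c, y_c = abs(ax - bx), abs(ay - by)        # centroid distances along each axis
    d_c = math.hypot(ax - bx, ay - by)           # distance of the two centroids
    # angle of coverage: angular spread of contour_b as seen from contour_a's centroid
    angles = sorted(math.atan2(py - ay, px - ax) for px, py in contour_b)
    gaps = [b - a for a, b in zip(angles, angles[1:])]
    gaps.append(2 * math.pi - (angles[-1] - angles[0]))   # wrap-around gap
    theta_c = 2 * math.pi - max(gaps)            # valid when the centroid lies outside contour_b
    return {"x_c": x_c, "y_c": y_c, "d_c": d_c, "theta_c": theta_c}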
Figure
6 is an image classification hierarchy of images in the database which is generated
by MDISC based on spatial relationship features of SR(t,l) where two operators NEARBY
and FAR AWAY are defined. With this spatial relationship modeling, a richer set of spatial
relationship parameters not only enhances the quality of the (context-senstive) semantic
spatial relationship operators, but also provides suitable parameters to be considered for
resolving SIMILAR TO operators in comparing spatial relationships.
Figure 6: The MDISC-generated TAH for representing the spatial relationship between tumor and lateral ventricle. The TAH is generated based on d_c, θ_c, x_c, and y_c (denoted as centroidDist, angleOfCoverage, xCordOfCentroids, and yCordOfCentroids in the figure).
4 Extending Query Language with Knowledge-based Spatial Query Constructs
We shall now present the BNF specification for extending an object-oriented query language,
such as OQL-93 [3], to include the proposed three types of predicates: (1) SIMILAR TO
predicates, (2) semantic spatial relationship predicates, and (3) predicates with conceptual
terms. A similar extension for SQL was explored in CoBase [10, 9] for transportation and
GIS applications.
The SIMILAR TO operator is used to search for objects similar to a specified target object
BASED ON a set of features specified in the query. The syntax of the SIMILAR TO predicate
in BNF is:
similar-to-pred ::= object SIMILAR-TO object (target-obj-condition) BASED-ON spatial-aspects |
                    image SIMILAR-TO image (target-image-condition)
spatial-aspects ::= spatial-aspect ["," spatial-aspects]
spatial-aspect ::= spatial-relationship-feature | obj-feature
spatial-relationship ::= FULLY-SURROUND-without-BORDERING | JOINED | BORDERING |
                    FULLY-SURROUND-with-BORDERING | INTIMATE-TOUCHING |
                    INVADING | IMPINGING-INTO | BULGING-INTO | NEARLY-ENGULFED |
                    CENTRALLY-CIRCUMJACENT | SLIGHT-OCCUPIED | EXTREMELY-OCCUPIED
target-obj-condition ::= obj-pathlist = literal
target-image-condition ::= image-pathlist = literal | image SELECTED-ON-THE-SCREEN
The object, obj feature and spatial relationship feature correspond to the semantic
object, object features, and spatial relationship features in the SL. The image refers
to an image from which a collection of image objects are extracted for querying and compar-
ison. The BASED ON subclause specifies the shape features (i.e., obj feature) and/or specific
spatial relationships between objects (i.e., object spatial relationship object)
that represent the intended similarity of the query. If no BASED ON subclause is specified,
the knowledge in the KL determines the features that represent the similarity based on the
query context and user type. target object condition and target image condition
specify the path condition (e.g., image.patient.ID) to select a distinct target object or
image to be compared with where literal is a constant. SELECTED ON THE SCREEN is a
special function used to specify an image on the screen as the target image for matching.
The syntax for the semantic spatial relationship predicates is:
object spatial-relationship object
To avoid ambiguity in specifying the operators, a pull-down menu is available that displays the available specialized operators, as listed in the spatial relationship description table (Table 2), for the user to select a suitable operator to be used in the query.
The syntax for the predicate expressed with conceptual term(s) is:
obj-feature IS conceptual-term
Likewise, a pull-down menu is also used to display the available conceptual terms
for the specified obj feature as in the shape feature description table (Table 1). The
conceptual term is interpreted by the knowledge residing in the KL [5, 9].
Example Queries
Query 1: "Find patients with similar brain tumors to the patient with id 'P000-001' based
on the tumor size and tumor location NEARBY lateral ventricle."
select patientWithImage( patient: i1.patient, image: i1.image)
from Images i1, it
BASED-ON (it.tumor.size,
it.[tumor,lateral
patientWithImage is a constructed type for displaying query results [3].
Query 2: "Find large tumor NEARBY the lateral ventricle."
select patientWithImage( patient: t.patient, image: t.image)
from Tumors t, Lateral-Ventricles l
where t NEARBY l and
t.size IS 'large'
Query 3: "Find the lateral ventricle whose upper protrusion is pressed to the right."
select patientWithImage( patient: l.patient, image: l.image)
from Lateral-Ventricles l
where l.left-to-right-symmetry
IS 'upper-protrusion-pressed-to-the-right'
The knowledge representing upper protrusions pressed to the right is provided in
Figure
3.
A brain surgeon wishes to retrieve images of patients in the database with similar spatial
characteristics as the presented MR image. The textually expressed query is shown in Query
4, and a graphical expression of the same query is illustrated in Figure 11 in Section 6.
Query 4: "Find images in the database that have similar spatial characteristics as the given
image on the screen."
select patientWithImage( patient: p1, image: p1.image)
from Patients p1, Patients pt
where p1.image SIMILAR-TO pt.image (pt.image SELECTED-ON-THE-SCREEN)
The intended features and spatial relationships of Query 4 are derived by the knowledge
layer based on the image content in PT.image and the user type (i.e., brain surgeon).
5 Intelligent Interpretation and Access
The criteria of our image feature clustering algorithm is to minimize the averaged pair-wise
euclidean distance of image feature values in a cluster. Such a measure, known as the
relaxation error [6], considers both the frequency of the value occurrence and the difference
between values. Based on minimizing the summed relaxation error of all the new partitioned
clusters in each iteration, the clustering algorithm, MDISC, recursively partitions the data
set to generate a multi-attribute feature type abstraction hierarchy (MTAH). As both the
feature value distribution and the correlation among different attributes of a feature are
considered, our clustering algorithm provides better image feature classification than those
using standard deviation to represent image similarity [1].
Query
Conceptual
Query
generalization specialization
More Conceptual
Query
Conceptual
Query
Query
Query
Query
Query
Query Processing
Satisfactory
Answers?
Relaxation Manager
Query Modification
Post-Processing
Answers
TAHs,
User Model
(a) Generalization and specialization via TAH (b) The flow diagram of query processing with relaxation
Figure
7: Knowledge-based query relaxation
5.1 Query Interpretation via TAH
The image classification hierarchies are represented in type abstraction hierarchies [8, 5,
9] for processing similar-to and semantic predicates. The concept in the TAH nodes is
represented as the value ranges of the features (see Figure 3 and Figure 6). These value
ranges can be used to retrieve similar images. As shown in Figure 7(a), higher nodes in the
TAH represent more generalized concepts (i.e., wider range of feature values) than that of
the lower nodes (i.e., narrower range of the feature values). The TAH nodes can be labeled
with conceptual terms (e.g., large, small, upper protrusion pressed to the right) to
represent the specific knowledge. These available conceptual terms are listed in Table 1 to
provide a pull-down menu for assisting users during query specification.
The knowledge of the semantic spatial relationship operators can also be represented
by the TAH. Based on the topological relationships of two objects [13], useful semantic
operators are shown in Figure 5. MDISC is used to classify image features for defining these
semantic spatial relationship operators based on the values of the representative spatial
relationship features. The resultant TAH nodes can be labeled with an appropriate subset of
the detailed operators (e.g., NEARBY, FAR AWAY) to represent the value ranges representing
the semantic spatial relationship operators. These value ranges are used as the query
constraint to retrieve images satisfying the conceptual predicates.
To solve a similar-to query whose intended similarity includes the features or spatial
relationship classified by a TAH, the lower TAH nodes are attached with more specific value
ranges. In solving the similar-to query, we shall first locate the TAH node that has a value
range closest to that of the target image based on the selected features. By traversing up
(i.e., generalizing) and down (i.e., specializing) the selected TAH, the feature value range
in the finalized TAH node is used to modify the query constraints for retrieving similar
images from the database, as shown in Figure 7(b). The TAH traversal is controlled either
by user input or by relaxation policy provided in the user model.
There is a TAH directory in the system that stores such information as object names,
sets of features, spatial relationships, user type, explanation about the emphasis or purpose
of the TAH, etc. Based on this information, the system (or user) selects and retrieves the
appropriate TAHs for processing the query. If the retrieved TAH does not match user's
specification, it can be edited by the user to meet his/her application.
The time complexity to generate a multi-attribute hierarchy by MDISC is O(m(n(log(n)))),
where m is the number of attributes, and n is the number of distinct instances used in generating
the TAH [6]. Our experiment reveals that to generate a MTAH with about one
hundred images based on four features takes a fraction of a second's processing time on a
5.2 User Model
In our knowledge-based query processing, user behavior is characterized by his/her concerns
(including image objects, set of features, and spatial relationships), object matching policy,
and the policies for relaxing query conditions when no satisfactory answer is found. These
behaviors can be represented by a user model to customize the query processing. Different
types of users can be represented by different user profiles in the model. Objects in the
user profile are divided into mandatorily matched objects and optional matched objects.
Mandatorily matched objects of a user profile must be matched with the query context for
the user profile to interpret the query. Optionally matched objects provide guidance for
additional matched features to enhance the query constraints. Such an option permits a
partial matching of the user model and increases the matching occurrences. The relaxation
policy describes how to relax the selected TAHs when no satisfactory answers are found,
SR(l,lv) and SR(l,f) (specified by (1)) are
more important than SR(l,b)
(specified by (2))
relaxation order:
user : brain surgeon
mandatorily matched objects:
Lesion and Brain (highlighted
by thick-lined box)
optional mathed objects
Lateral Ventricle and Frontal Lobe
(2)
(1)
(1)
Lesion
Brain Frontal
Lobe
Lateral
Ventricle
Figure
8: A user profile for brain surgeons
where each MTAH (such as SR(t,l) and SR(t,b)) represents different knowledge about the
image objects. The relaxation policy specifies the relaxation order (e.g., which MTAH
should be relaxed first), relaxation level, non-relaxable objects, etc. For more discussion on
relaxation operators, interested readers should see reference [9].
In an MR brain image with tumor(s), for example, a brain surgeon's concerns regarding
the brain tumors are their locations and the spatial relationships with other objects in the
brain, as shown in Figure 8. The information in this user profile can be used for processing
queries such as "retrieve similar images as the brain tumor shown on the screen." Different
types of users (e.g., radiologists, surgeons, and clinicians) may have a different emphasis.
Thus, different user profiles can be represented in the user model for the same set of images.
6 Knowledge-Based Query Processing
6.1 Query Processing
Query processing can be divided into three phases, as shown in Figure 9: the query analysis
and feature selection phase, the knowledge-based content matching phase, and the query
relaxation phase. In the query analysis and feature selection phase, based on the target
image, query context, and user type, the system analyzes and selects the relevant features
and spatial relationships for processing the query. For similar-to queries (i.e., path 1 in
Figure
9 is selected), the features and spatial relationships specified in the BASED ON
subclause are the features representing the intended image similarity. If no BASED ON
subclause is specified, the user type and objects contained in the target image are used
to select the features and spatial relationships representing the intended image similarity
according to the matched user profile. After the intended features are selected, the shape
selected features and
target images
semantic
operators,
conceptual
terms
query specification (textually or graphically expressed)
Query Analysis
and Feature
Selection
Knowledge-Based
Content Matching
Query
Relaxation
matched user
profile
matched
relaxation
policy
answers
matched TAH nodes on the selected TAH,
relaxation policy
user inputs
(relaxation
policy)
Relaxation
Manager
query
modification
Feature Extraction
Query Analysis
Knowledge-Based
Content Matching
Spatial Relationship
Model
Shape Model
TAH Directory
(path
(path
User Model
Figure
9: The flow diagram of knowledge-based query processing
and spatial relationship models extract their values from the object contours in the target
image. For semantic queries (i.e., path 2 in Figure 9 is selected), the semantic spatial
relationship predicates and conceptual terms in the query provide the selected features and
spatial relationships.
In the knowledge-based content matching phase, the spatial relationship operators and
conceptual terms are used to select the matched TAH(s) and TAH node(s) for processing
the semantic queries. For similar-to queries, the selected features, spatial relationships, and
user types are used to match TAH(s). The matched TAHs are traversed to locate the node
with a value range closest to that of the target image. The set of images contained in the
TAH nodes that has the closest matched value ranges represents the set of images similar
to the target image.
In the query relaxation phase, the query is processed by traversing up and down the
TAH(s) starting from the matched TAH nodes based on the relaxation policy provided in
the matched user profile and user input. In every relaxation iteration, the query constraints
are modified by the value ranges specified in the selected TAH nodes to retrieve the similar
images. This relaxation process repeats until it reaches user satisfaction (e.g., number of
similar images, relaxation error, etc. [5]). The returned images can be ranked based on the
selected features. For the queries with semantic operators and/or conceptual terms, the
value ranges in the finalized TAH nodes (i.e., the TAH nodes whose labels best match the
semantic operators and/or conceptual terms) are used as the query constraints to retrieve
the intended images. Since TAHs are user- and context-sensitive, the user can select the
appropriate TAHs for his/her applications.
Figure
illustrates the query processing for a query with a similar-to operator where
the target image is shown in the target image canvas of Figure 11. No BASED ON subclause
is provided in this example query, and the user model in Figure 8 is matched. The system
allows user input to control the relaxation process which may overwrite the relaxation policy
provided by the selected user model. According to the relaxation control specified in the
user model, SR(t,l) is the first candidate TAH to be relaxed. Based on the TAH of SR(t,l)
in
Figure
6, the resulting value ranges for retrieving similar images are:
Objects extracted from the target image
Select the matched user profile from
the user model (mandatory matched
objects are highlighted by thick-lined
box).
Retrieve the TAH(s) from the TAH
directory that match the selected
features. Locate the TAH nodes in
the TAHs such that their value ranges
are most close to the target data
values to start the query relaxation.
The query constraints are relaxed based
on user input or the relaxation policy
from the user model. The value ranges
in the finalized TAH nodes are used to
retrieve similar images.
The matched user profile is used to
select the features and spatial
relationship for representing tumor
similarity.
Lesion Lateral
Ventricle
Brain
Tumor
Lesion Lateral
Ventricle
Lateral
Ventricle
(1)
SR(l, lv)
(1)
SR(t, f) Frontal
Lobe
(2)
Brain
Tumor SRtl
Lesion
Lateral
Ventricle
(1)
SR(l, lv)
(2)
Brain
Tumor SRtl
Lesion Lateral
Ventricle
(1)
SR(l, lv)
(2)
Brain
TAH for
TAH for
SR(l, lv)
Tumor SRtl
Lesion
Lateral
Ventricle
(1)
SR(l, lv)
(2)
Brain
TAH for
(0.85 <= Oc <= 1.54,
43.91<= dc <= 71.31,
TAH for
SR(l, lv)
Figure
10: The query processing of Query 4 (the TAH of SR(t.l) is shown in Figure 6)
Figure
11: The graphical user interface (GUI) of the knowledge-based query answering
These value ranges correspond to the value range of the TAH node two levels higher from
the matched leaf node.
The retrieved images are shown and ranked on the GUI with the relaxation error attached
to each retrieved image. There is an explanation window which displays the selected
features and spatial relationships used for the matching, the relaxation level, and the number
of instances matched on the TAH node. During the relaxation process, if the relaxation
of a TAH reaches a certain relaxation error threshold provided by the user model, then the
system selects the next TAH for relaxation according to the relaxation policy. Users can
also selectively combine the TAHs with logical operations (e.g., AND, OR, etc.) to retrieve
the (desired) images.
7 Performance of the Knowledge-Based Query Pro-
cessing
The TAH generation is based on the set of features used to classify objects in the images. For
example, size and location are used in classifying images of brain tumors. The instances
covered by the selected TAH node are candidates for matching the target image. Thus
the set of features used for classifying affect the precision of the retrieval (i.e., retrieved
relevant answers/all relevant answers). Using irrelevant features in classification will reduce
the precision of the retrieval. For query with a SIMILAR TO operator, the set of features
used to compare the similarity affects the precision value. The weights assigned to the
features reflect their relative importance in computing the similarity measure for ranking
the retrieved images.
As the relationship among the objects in the image becomes more complex, more features
are needed to specify the target images. For example, in specifying the characteristics of
an object in an image, in addition to size, we can also include the shape and position of
the object. In specifying the spatial relationship between two objects, in addition to their
relative location and angle of coverage, the ratio of joining area or volume, and longest or
shortest distance of the two objects can also be used in specifying additional characteristics
of the target image. Therefore, using more precise specifications increases precision of the
retrieval.
The recall of retrieval (retrieval relevant answers/all retrieval answers) depends on the
relaxation error of the TAH node(s) of the referenced TAH(s) (i.e., the larger the relaxation
error of a node, the lower recall value the TAH node yields) as well as the importance of
the features in characterizing objects in the image. To increase the recall value, the range
of the TAH nodes should be small (small relaxation error) and the selected TAH(s) for
query processing should contain important attributes for characterizing the objects and
their interrelationship in the image. Since TAHs can be customized based on user type and
context, the user can select the set of features for generating the TAH(s) for processing
a specific query and control the performance of the retrieval based on the complexity of
objects in the image and the available features of the objects for classification.
We have collected image and computed features for brain tumor examples as described
TAH(size) TAH(size, location, angle of coverage)
Precision without ranking 32.92% 73.33%
with ranking 33.75% 82.96%
Recall without ranking 27.43% 52.52%
with ranking 28.13% 59.41%
Table
3: Performance of the knowledge-based query processing (in terms of precision and recall)
for Query 4 based on the two different TAHs
in query 4 in our prototype system. The images database consists of
(MR) images (256 x 256 x 8 bits) containing brain tumors. Using the DISC algorithm, the
images are classified into two TAHs: one based on tumor size and the other based on size,
location, and the angle of coverage relative to the lateral ventricle. The relevant answers for
each target instance are determined by exhaustively ranking all the images in the database
by the similarity measurement based on the features selected by the domain expert (e.g.,
radiologists). Using the best-10 retrieving strategy (i.e., the generalization steps continue
until the TAH node covers at least 10 instances) and taking each of the images in the
database as the target image, the average precision and recall values are shown in Table 3.
This illustrates that the number of features used to specify the target image as well the
ranking plays an important role in the performance of the retrieval.
The query response time includes the time for parsing, feature computation (this is
needed only in the case when the features of the target image are not pre-computed), query
processing, image retrieval, and image display. Our testbed uses the GemStone object-oriented
database and VisualWorks as the application development tools running on a
Workstation. The query response time for Query 4 is as follows: parsing
takes less than 1 second, feature computation takes around 12 seconds (for extracting
features of the target image shown on the screen), knowledge-based query processing (i.e.,
selecting TAH nodes to match with features) takes about 1 to 2 seconds, image display
takes about 3 to 5 seconds (depending on the number of returned images). Each relaxation
processing (i.e., generalize and specialize TAH nodes to obtain sufficient number of images)
takes about 0.5 seconds. Thus the time of the knowledge-based query processing is about
2 to 3 seconds which is relatively small compared to the time for feature extraction and
image display.
Conclusions
In this paper, we present a knowledge-based approach for retrieving images by image
features and content. The model supports semantic operators (e.g., JOINED, NEARBY,
FAR AWAY), similar-to operators, and references to conceptual terms (e.g., LARGE, SMALL)
in the image queries.
The proposed KSIM model consists of three layers: the Representation Layer, the Semantic
Layer, and the Knowledge Layer. These layers integrate the image representation
(i.e., image contours) together with the knowledge required to capture image content and
interpret the captured content to provide domain- and user-specific query.
Our model considers shape structure and shape features as well as spatial relationship
features. These features can be automatically or semi-automatically extracted from
the image contours and stored in a feature database. Based on the specified features and
spatial relationships, the knowledge of image semantics and image similarity can be automatically
generated by our conceptual clustering algorithm using the extracted features
in the database. The knowledge is represented in a special knowledge structure, Type
Abstraction Hierarchy (TAH), which is used in the query processing through a generaliza-
tion/specialization process on the TAHs. The value ranges of the finalized TAH node are
used to modify the query conditions for retrieving images. A user model is introduced to
allow users to customize their requirement of query answering. The system also presents
the quality of the answers measured in relaxation error to the user. Since the feature
computation and knowledge acquisition are automated, our proposed technique is scalable.
A prototype image database system, KMeD [11], based on the proposed model has been
implemented at UCLA using the GemStone/VisualWorks platform. Our preliminary result
indicates that such a knowledge-based technique is a feasible and effective approach to
retrieve images by features and content.
9
Acknowledgement
The authors would like to thank John David N. Dionisio for implementation of the graphical
user interface of the query language, Christine Chih for her assistance in image segmen-
tation, Kuorong Chiang and Timothy Plattner for developing the programs for generating
TAHs for images, and Prof. Alfonso Cardenas for his stimulating discussions during the
course of writing this paper.
--R
A visual information management system for the interactive retrieval of faces.
The Object Database Standard: ODMG - 93 (Release 1.2)
An intelligent image database system.
A structured approach for cooperative query answering.
Abstraction of high level concepts from numerical values in databases.
A semantic modeling approach for image retrieval by content.
The design and implementation of Cobase.
A scalable and extensible cooperative information system.
A cooperative geographical information system.
A knowledge-based multimedia medical distributed database system - KMeD
Interactive outlining: An improved approach using active contours.
Reasoning about binary topological relations.
Segmentation and feature extraction for magnetic resonance brain image analysis.
Vision in Man and Machine.
Computer analysis of dynamic scenes containing currilinear figures.
The QBIC project: Querying images by content using color
A model-based vision system for industrial parts
3D bronchial tree model and fractal analysis as tools for performance evaluation of different CT acquisi- tion/reconstruction schemes
An information retrieval approach for image databases.
Validation of an enhanced knowledge-based method for segmentation and quantitative analysis of intrathoracic airway trees from three-dimensional CT images
Multimodality tumor delineation via fuzzy fusion and deformable modelling.
A recurrent coopera- tive/competitive field for segmentation of magnetic resonance brain images
--TR
--CTR
S. Nepal , M. V. Ramakrishna , J. A. Thom, A research prototype image retrieval system, Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval, p.386, August 24-28, 1998, Melbourne, Australia
J. Chamorro-Martnez , J. M. Medina , C. D. Barranco , E. Galn-Perales , J. M. Soto-Hidalgo, Retrieving images in fuzzy object-relational databases using dominant color descriptors, Fuzzy Sets and Systems, v.158 n.3, p.312-324, February, 2007
Kenneth W. Tobin , Thomas P. Karnowski , Lloyd F. Arrowood , Regina K. Ferrell , James S. Goddard , Fred Lakhani, Content-based image retrieval for semiconductor process characterization, EURASIP Journal on Applied Signal Processing, v.2002 n.1, p.704-713, January 2002
Chi-Ren Shyu , Christina Pavlopoulou , Avinash C. Kak , Carla E. Brodley , Lynn S. Broderick, Using human perceptual categories for content-based retrieval from a medical image database, Computer Vision and Image Understanding, v.88 n.3, p.119-151, December 2002
Wasfi Al-Khatib , Y. Francis Day , Arif Ghafoor , P. Bruce Berra, Semantic Modeling and Knowledge Representation in Multimedia Databases, IEEE Transactions on Knowledge and Data Engineering, v.11 n.1, p.64-80, January 1999
Hau-San Wong , Horace H. Ip , Lawrence P. Iu , Kent K. Cheung , Ling Guan, Transformation of Compressed Domain Features for Content-Based Image Indexing and Retrieval, Multimedia Tools and Applications, v.26 n.1, p.5-26, May 2005
Simone Santini , Ramesh Jain, Similarity is a Geometer, Multimedia Tools and Applications, v.5 n.3, p.277-306, November 1997
Y. Alp Aslandogan , Clement T. Yu, Techniques and Systems for Image and Video Retrieval, IEEE Transactions on Knowledge and Data Engineering, v.11 n.1, p.56-63, January 1999
G. Petraglia , M. Sebillo , M. Tucci , G. Tortora, Virtual Images for Similarity Retrieval in Image Databases, IEEE Transactions on Knowledge and Data Engineering, v.13 n.6, p.951-967, November 2001
Mohand-Sad Hacid, Representing and Reasoning on Conceptual QueriesOver Image Databases, Journal of Intelligent Information Systems, v.14 n.2-3, p.131-154, March-June
Wesley W. Chu , Chih-Cheng Hsu , Alfonso F. Crdenas , Ricky K. Taira, Knowledge-Based Image Retrieval with Spatial and Temporal Constructs, IEEE Transactions on Knowledge and Data Engineering, v.10 n.6, p.872-888, November 1998
Atsuo Yoshitaka , Tadao Ichikawa, A Survey on Content-Based Retrieval for Multimedia Databases, IEEE Transactions on Knowledge and Data Engineering, v.11 n.1, p.81-93, January 1999
Elisa Bertino , Ahmed K. Elmagarmid , Mohand-Sad Hacid, A Knowledge-Based Approach to Visual Information, Journal of Intelligent Information Systems, v.19 n.3, p.319-341, November 2002
Arnold W. M. Smeulders , Marcel Worring , Simone Santini , Amarnath Gupta , Ramesh Jain, Content-Based Image Retrieval at the End of the Early Years, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.22 n.12, p.1349-1380, December 2000 | spatial relationship model;knowledge-based spatial image model;cooperative query processing;shape model;retrieve image by feature and content;spatial query processing;medical Image database;knowledge-based query processing |
627767 | Tries for Approximate String Matching. | AbstractTries offer text searches with costs which are independent of the size of the document being searched, and so are important for large documents requiring spelling checkers, case insensitivity, and limited approximate regular secondary storage. Approximate searches, in which the search pattern differs from the document by k substitutions, transpositions, insertions or deletions, have hitherto been carried out only at costs linear in the size of the document. We present a trie-based method whose cost is independent of document size. Our experiments show that this new method significantly outperforms the nearest competitor for are arguably the most important cases. The linear cost (in k) of the other methods begins to catch up, for our small files, only at 2. For larger files, complexity arguments indicate that tries will outperform the linear methods for larger values of k. Trie indexes combine suffixes and so are compact in storage. When the text itself does not need to be stored, as in a spelling checker, we even obtain negative overhead: 50% compression. We discuss a variety of applications and extensions, including best match (for spelling checkers), case insensitivity, and limited approximate regular expression matching. | Introduction
The need to find an approximate match to a string arises in many practical
problems. For example, if an optical character reader interprets a "D" as
an "O", an automatic checker would need to look up the resulting word, say
"eoit" in a dictionary to find that "edit" matches it up to one substitution. Or
a writer may transpose two letters at the keyboard, and the intended word,
worst-case run preproc. time extra space ref.
naive mn
Shift-or O(n) O(m+ j\Sigmaj) O(j\Sigmaj) [4]
Patricia O(m) O(n log n) O(n) [10]
Figure
1: Exact Match Algorithms
say "sent", should be detected instead of the error, "snet". Applications
occur with strings other than text: strings of DNA base pairs, strings of
musical pitch and duration, strings of edge lengths and displacements in a
diagram, and so on. In addition to substitutions and transpositions, as above,
errors can include insertions and deletions.
The approximate match problem in strings is a development of the simpler
problem of exact match: given a text, W n , of n characters from an alphabet
\Sigma, and a string, Pm , of m characters, of P in
W . Baeza-Yates [2] reviews exact match algorithms, and we summarize in
Figure
1.
Here, all algorithms except the naive approach require some preprocess-
ing. The Knuth-Morris-Pratt (KMP), Boyer-Moore (BM), and Shift-or algorithms
all preprocess the search string, P , to save comparisons. The Boyer-Moore
algorithms are sublinear in practice, and better the bigger m is, but
depend on n. The Patricia method builds a trie and is truly sublinear. 1 The
preprocessing is on the text, not the search strings, and although substantially
greater than for the linear algorithms, need be done only once for a
text. Note that tries of size n can be built in RAM in time O(n), but that
on secondary storage, memory differences make it better to use an n log n
method for all practical sizes of trie. So we quote that complexity.
Trie-based methods are best suited for very large texts, which require
secondary storage. We emphasize them in this paper, but will compare our
trie-based method experimentally with the linear methods.
Approximate string matching adds a parameter to the above, k: the
algorithm reports a match where the string differs from the text by not
1 The term "sublinear" in this literature has two meanings, which we distinguish as
sublinear and truly sublinear. Truly sublinear in n means O(f(n)) where f is a sublinear
function, e.g., log n or 1. Sublinear means truly sublinear or O(n) where the multiplicative
constant is less than 1.
more than k changes. A change can be a replacement (or substitution),
an insertion, or a deletion. It can also be a transposition, as illustrated
above. Such operations were formulated by Damerau [8] and the notion
of edit distances was given by Levenshtein [15]. A dynamic programming
(DP) algorithm was shown by Wagner and Fischer [26] with O(mn) worst
case. Ukkonen [24] improved this to O(kn) (and clearly k - m) by finding
a cutoff in the DP. Chang and Lawler [7] have the same worst case, but get
sublinear expected time, O((n=m)k log m)) and only O(m) space, as opposed
to O(m 2 ) or O(n) for earlier methods. This they do by building a suffix tree
[27, 16], which is just a "Patricia" trie (after Morrison [19]), on the pattern
as a method of detecting common substrings. Kim and Shawe-Taylor [12]
propose an O(m log n) algorithm with O(n) preprocessing. They generate n-grams
for the text and represent them as a trie for compactness. Baeza-Yates
and Perlberg [5] propose a counting algorithm which runs in time independent
of k, O(n +R), where R is bounded O(n) and is zero if all characters in Pm
are distinct. Figure 2 summarizes this discussion. Agrep [28] is a package
based on related ideas, which also does limited regular expression matching,
i.e., Pm is a regular expression.
(Regular expression matching and k-approximate string matching solve
worst-case run preproc. time extra space ref.
cutoff O(kn) O(k) O(kn) [24]
suffix tree O(kn) O(m) O(m) [7]
n-gram O(m log n) [12]
Figure
2: k-Approximate Match Algorithms
different problems. The problem areas overlap - e.g., P
# is a one-place wildcard, can be written as a regular expression, but is also
a 3-approximate match - but they do not coincide.)
A recent review of these techniques is in the book by Stephen [23]. Hall
and Dowling [11] give an early survey of approximate match techniques. The
work is all directed to searches in relatively small texts, i.e., those not too
large to fit into RAM. For texts that require secondary storage, O(n) is far
too slow, and we need O(log n) or faster methods, as with conventional files
containing separate records [17]. The price we must pay is to store an index,
which must be built once for the whole text (unless the text changes). If we
are interested in the text as an ordered sequence of characters, we must store
the text as well, and the index represents an additional storage requirement.
If we are interested in the text only for the substrings it contains, as in a
dictionary for spelling check, then we need only store the index, and we can
often achieve compression as well as retrieval speed.
Tries have been used to index very large texts [10, 18] and are the only
known truly sublinear way to do so. Tries are trees in which nodes are empty
but have a potential subtree for each letter of the alphabet, \Sigma, encoding the
data (e.g., 0 and 1 for binary tries). The data is represented not in the nodes
but in the path from root to leaf. Thus all strings sharing a prefix will be
represented by paths branching from a common initial path, and considerable
compression can be achieved. 2 Substring matching just involves finding a
path, and the cost is O(m log n) plus terms in the number of resulting
matches. (The log n component reflects only the number of bits required to
store pointers to the text, and is unimportant.) Regular expression matching
2 Note that this compression is on the index, which may still be larger than the text.
Typically, if we index every character in the text, as we do in Section 4, the index will
be five times the size of the text. If we index only every word, the index is smaller and
compression results.[18] If we do only dictionary searches, as in Section 6, there is great
compression.
simulates the regular expression on the trie, [9] and is also fast O(log m (n) n ff )
where ff!1.
This paper proposes a k-approximate match algorithm using Damerau-
Levenshtein DP on a text represented as a trie. The insight is that the trie
representation of the text drastically shortens the DP. A m \Theta n DP table is
used to match a given Pm with the text, W n . There would have to be a new
table for each suffix in W (of length n; But the trie representation
of W compresses these suffixes into overlapping paths, and the corresponding
column need be evaluated only once. Furthermore, the Ukkonen cutoff can be
used to terminate unsuccessful searches very early, as soon as the differences
exceed k. Chang and Lawler [7] showed Ukkonen's algorithm evaluated O(k)
columns, which implies searching a trie down to depth O(k). If the fanout
of a trie is \Sigma, the trie method needs only to evaluate O(k j\Sigmaj k ) DP table
entries.
We present this method in terms of full-text retrieval, for which both the
index and the text must be stored. In applications such as spelling checkers
[14], the text is a dictionary, a set of words, and need not be stored separately
from the index. These are special cases of what we describe. In such cases,
our method offers negative storage overhead, by virtue of the compression,
in addition to the very fast performance.
We compare our work experimentally with agrep [28], and show that tries
outperform agrep significantly for small k, the number of mismatches. Since
agrep complexity is linear in k, and trie search complexity is exponential in k,
agrep is expected to become better than tries for large k. Our experiments
show that the breakeven occurs beyond the practically important case of
1. Since the authors of agrep compare their work thoroughly with other
approximate search techniques [28], we make no other comparisons here.
This paper is organized as follows. The next section introduces Damerau-
Levenshtein DP for approximate string matches. Section 3 briefly describes
data structures, and gives our new algorithm for approximate search
on text tries. Then we give experimental results comparing approximate trie
methods with agrep. Sections 5 and 6 discuss extensions and advanced applications
of our method, including the important case of dictionary checking,
where we attain both speedup and compression. We conclude and discuss
further possible research.
Programming
be a pattern and a target string
respectively. We use D(Pm distance, the minimum number of
edit operations to change Pm to W ' . Here, an edit operation is either to
or to transpose two adjacent
symbols in Pm . We assume symbols are drawn from a finite alphabet, \Sigma.
Given an example example. We have D(P 7
3 since changing P 7 to W 7 needs to: (1) delete
l. The edit distance,
be recursively defined as follows:
@
A
else
(the null character), and
else
else
To evaluate D(Pm ; W ' ), we need to invoke D four times with both subscripts
decreasing by no more than two. Thus, a brute force evaluation must
take O(2 min(m;') ) calls. However, for D(Pm ; W ' ), there are only (m+1)\Theta('+1)
possible values. DP evaluates D(Pm ; W ' ) by storing each possible D value in
a m\Theta' table. Table 1 shows a 3\Theta4 DP table for P 2 =ab and W 3 =bbc.
Table
1: Dynamic Programming
Furthermore, it is not necessary to evaluate every D values (DP table
entries). Ukkonen [24] proposed an algorithm to reduce the table evalua-
tions. His algorithm works as follows: Let C j be the maximum i such that
for the given j (C j =0 if no such i). Given
and then set C j to the largest i (0-
such that D(P i proved that this algorithm evaluates
expected entries. As shown in Table 2, for P 4 =adfd and W 7 =acdfbdf
of 5\Theta8=40 entries, Ukkonen's algorithm evaluates only 23 entries for k=1.
Ukkonen's algorithm sets D(P 1 at initial
time. It evaluates the first column up to row C 0 +1=2. Since the largest
entry value of this column is at row 2, it sets C 1 =2. Then, it evaluates the
second column up to row C 1 +1=3. Since the largest entry value of this column
is at at row 2, it sets C 2 =2. Similarly, it evaluates the third column
up to row C 2 +1=3 to get C 3 =2, the fourth column to get C 4 =3, and the
fifth column to get C 5 =0, which indicates that it is impossible to change
any prefix of adfd to acdfb in less than one edit operation. Thus, we know
We can stop the evaluation if we do not want to know the
exact value of D(P 4
3 Trie and Approximate Search
We follow Gonnet et al. [9] in using semi-infinite strings, or sistrings. A
sistring is a suffix of the text starting at some position. A text consists
of many sistrings. If we assume sistrings start at word boundaries, the
text, "echo enfold sample enface same example," will have six sistrings
of this kind. Figure 3 shows these sistrings and an index trie constructed
over these sistrings. To make Figure 3 simpler, we truncate sistrings after
the first blank. To index full size sistrings, we simply replace leaf nodes by
sistring locations in the text. To prevent a sistring being a proper suffix of
another, we can append either arbitrary numbers of the null symbol after
the text or a unique end-of-text symbol. The index trie has many distinctive
properties:
ffl When conducting a depth-first traverse, we not only get all sistrings,
but also get them in lexicographical order.
ffl When searching a string, say example, branching decisions at each node
are given by each character of the string being sought. As the trie in
Figure
3, we test the first letter e to get to the left branch, and the
second letter x to get to the right branch. As a result, search time is
proportional only to the length of the pattern string, and independent
of the text size.
echo enfold sample enface same example
Sistrings:
echo enfold sample enface same example
enfold sample enface same example
sample enface same example
enface same example
same example
example
e s
a
f
ho
c
ce ld
ample
x
a
le
Trie:
Figure
3: Text, Sistring and Index Trie
ffl The common prefixes of all sistrings are stored only once in the trie.
This gives substantial data compression, and is important when indexing
very large texts.
Trie methods for text can be found in [10, 18, 22]. Here we describe them
only briefly. When constructing a trie over a large number of and extremely
long sistrings, we have to consider the representation of a huge trie on secondary
storage. Tries could be represented as trees, with pointers to subtrees,
as proposed by Morrison [19], who first came up with the Patricia trie for
text searches. Orenstein [21] has a very compact, pointerless representation,
which uses two bits per node and which he adapted for secondary storage.
Merrett and Shang [18, 22] refined this method and made it workable for
Patricia tries with one bit per node. Essentially, both pointerless representations
would entail sequential searches through the trie, except that the bits
are partitioned into secondary storage blocks, with trie nodes and blocks
each grouped into levels such that any level of nodes is either entirely on or
entirely off a level of blocks. With the addition of two integers per block, the
sequential search is restricted to within the blocks, which may be searched
as a tree. For more details of this representation, see [22].
3.1 Two Observations
Before introducing our approximate search algorithm, we give two observations
which will link the trie method with the DP technique.
Observation I
Each trie path is a prefix shared by all sistrings in the subtrie. When evaluating
DP tables for these sistrings, we will have identical columns up to the
prefix. Therefore, these columns need to be evaluated only once.
Suppose we are searching for string sane in a trie shown in Figure 3. To
calculate distances to each word, we need to evaluate six tables. Table 3
shows three of them. For each table, entries of the ith column depend only
on entries of the j-i th column, or the first i letters of the target word.
Words sample and same have the same prefix sam, and therefore, share the
table entries up to the third column. And so does the first column of words
echo, enface, enfold and example, the first three columns of words enface
and enfold. In general, given a path of length x, all DP entries of words in
the subtrie are identical up to the xth column.
This observation tells us that edit distances to each indexed word (sistring
in general) can be calculated by traversing the trie, and in the meantime,
storing and evaluating one DP table. Sharing of common prefixes in a trie
structure saves us not only index space but also search time.
Observation II
If all entries of a column are ? k, no word with the same prefix can have a
distance - k. Therefore, we can stop searching down the subtrie.
For the last table of Table 3, all entries of the second column are ? 1.
If searching for words with differences, we can stop evaluating strings
in the subtrie because for sure D(sane; en:::) ? 1. For the same reason,
after evaluating the fourth column of table sample, we find all entries of the
are ? 1, and therefore, stop the evaluation.
This observation tells us that it is not necessary to evaluate every sistring
in a trie. Many subtries will be bypassed. In an extreme case, the exact
search, all but one of the subtries are trimmed.
3.2 Search Algorithm
The algorithm of Figure 4 shows two functions: DFSearch( T rieRoot, 1)
traverses an index trie depth-first, and EditDist( j) evaluates the jth column
of the DP table for pattern string P and target string W . For the purpose
of illustration, we start and stop evaluation at the word boundary in the
following explanation.
Essentially, this algorithm is a trie walker with cutoffs (rejects before
reaching leaves). Given a node c, its root-to-c path, w 1 w 2 :::w x , is a prefix
shared by all strings in SubT rie(c). If changing w 1 w 2 :::w x to any possible
prefix of P costs more than k, there will be no string in SubT rie(c) with
:array [\Gamma1::max; \Gamma1::max] of integer; /* [i;
:array [0::max] of integer; /* variables for Ukkonen's cutoff,
:array [0::max] of character; /* pattern and target string, W
number of allowable errors */
Procedure DFSearch( T rieNode :Anode, Level :integer);
begin /* depth-first trie search */
if (T rieNode in a leaf node) then
for each character in the node do /* retrieve characters one by one */
W[Level] := the retrieved character;
find a target word */
output W[1]W[2].W[j-1];
return;
if (EditDist(
return;
Level
else
for each child node do /* retrieve child node one by one */
ChildNode := the retrieved node;
W[Level] := the retrieved character;
find a target word */
output W[1]W[2].W[j-1];
return;
if (EditDist( search subtrie down */
return;
DFSearch( ChildNode, Level+1) /* search down the subtrie */
Function
begin /* evaluate one column of DP table */
for i:=1 to Min( C[j-1]+1, length(p)) do
evaluate one table entry */
r := if (P[i-1]=W[j] and P[i]=W[j-1]) then 1 else
return (if (C[j]=0) then 1 else T[i-1,j]);
Figure
4: Approximate Trie Search Algorithm
mismatches. Hence, there is no need to walk down Subtrie(c). A cutoff
occurs. Each letter w j (1-j-x) on the path will cause a call to EditDist(j).
We use Ukkonen's algorithm to minimize row evaluations.
Suppose we have a misspelled word P=exsample and want all words with
mismatches. Figure 5 shows the index trie and some intermediate results
of the search. After evaluating D(P; ech), we find that entries on the third
column are all -2. According to observation II, no word W with the prefix
ech can have We reject word echo and continue traversing.
After evaluating D(P; enf), we know, once again, no word W with prefix enf
can have and therefore, there is no need to walk down this
subtrie. We cut off the subtrie. Since ech and enf share the same prefix e,
we copy the first column of ech when evaluating enf (observation I). After
evaluating path 3, we find accept the word. The
search stops after cutting at path 4, sa. Figure 5 shows some intermediate
results of the search.
Pattern String:
Search Path 2:
Search Path 3:
Search Path 4:
String Distance Action
exsample
ech
enf
example
sa
reject
cutoff
accept
cutoff
Depth First
e s
a
fho
c
ce ld
ample
x
a
le
Figure
5: Approximate Trie Search Example
4 Experimental Results
We built tries for five texts: (1) The King James' Bible retrieved from ak-
bar.cac.washington.edu, (2) Shakespeare's complete works provided by Oxford
University Press for NeXT Inc., (3) section one of UNIX manual pages
from Solbourne Computer Inc., (4) C source programs selected randomly
from a departmental teaching machine, and (5) randomly selected ftp file
names provided by Bunyip Information System. Sistrings start at any character
except the word boundary, such as blank and tab characters. Table 4
shows the sizes of the five texts and their index tries.
4.1 Search Time
We randomly picked up 5 substrings from each of the five texts, and then
searched for the substrings using both agrep [28] and our trie algorithm. Both
elapsed time and CPU time are measured on two 25MHz NeXT machines,
one with 28MB RAM and the other with 8MB RAM. Table 5 shows measured
times, averaged on the five substrings, in seconds.
The testing results show that our trie search algorithm significantly out-performs
agrep in exact match and approximate match with one error. For
the exact match, trie methods usually give search time proportional only to
the length of the search string. Our measurements show that trie search
times for exact match do not directly relate to the text size. It requires
few data transfers (only one search path), and therefore, is insensitive to the
RAM size.
Let ae(k) be the average trie search depth. It is the average number of
columns to be evaluated before assuring that k. It has been
proven that ae(k) ? k if k is less than the target string length, and
[24, 7]. For a complete trie, the worst case of a text trie, the trie search
algorithm can find all substrings with k mismatches in O(k j\Sigmaj k ) expected
time: there are j\Sigmaj k paths up to depth k, and each column of the DP table
has k rows. The time is independent of the trie size. In fact the trie algorithm
is better than the agrep for small k, but not for large k, because agrep scans
text linearly but the trie grows exponentially. For our measured texts, which
are relatively small, the trie search brings more data into RAM than agrep
When RAM size is larger than data size, measured CPU times are closer
to the elapsed times. Since each query is tested repeatedly, most of data (text
and trie) are cached in RAM, and therefore, the searches are CPU-bound.
However, for a smaller RAM size (or larger text data), the searches have to
wait for data to be transferred from secondary storage. Since agrep scans the
entire text, its search time is linearly proportional to the text size.
File names are different from the other tested texts. File names are all
distinct. Any two substrings resemble each other less, which helps
agrep to stop evaluation more quickly. This does not help the trie search
because it makes the trie shallow (toward a complete trie) and takes more
time to scan the top trie levels.
Extensions
Our trie search algorithm can be extended in various ways. For example,
spelling checkers are more likely to ask for the best matches, rather than
the words with a fixed number of errors. The optical character recognizers
may search for words with substitutions only. When searching for telephone
license numbers, postal codes, etc., users require not only penalties
for certain types of edit operations, but also a combination of the exact search
and the approximate search because they often remember some numbers for
sure. In text searching, patterns are more often expressed in terms of regular
expressions. Extensions described in this section (except Section 5.5) have
been discussed in [28]. We present them here using DP.
5.1 Best Match
In some applications, we do not know the exact number of errors before
a search. We want strings with the minimal number of mismatches, i.e.,
strings with 0-k mismatches and no other string in the text having k 0 !k
mismatches.
To use our algorithm, we define a preset k, which is a small number but
no less than the minimal distance, i.e., there exists a string, s, in the text
such that D(pattern; s) - k. A simple method to set k is to let s be an
arbitrary string in the text, and then set better way
is to search for the pattern using deletions (or insertions, or substitutions)
only. This is to traverse the trie by following the pattern string. Whenever
no subtrie corresponds to a character of the pattern, we skip the character
in the pattern and look for a subtrie for the next character, and so on. The
number of skipped characters will be used as an initial k.
During the traverse, we will have k
s is the path from the root to the leaf node. Whenever we have k ? k 0 , we set
clear the strings that have been found. For best match searching,
decreases monotonically.
5.2 Weighted Costs
The distances evaluated before are assumed to have cost 1 for any edit op-
eration. Sometimes, we may want to have a different cost. For example, to
have substitution costs at least the same as one deletion and one insertion,
or to disallow deletions completely.
To make edit operations cost differently, we need only to modify the
distance function. Let I, D, S and R be the costs of an insertion, a deletion,
a substitution, and a transposition respectively. We assume costs are all
To disallow an operation, say insertions, we set I = 1. As before,
Otherwise, we redefine
@
A
D, and
else
else
Furthermore, we may add a cost, C, for changing the case. For example,
for case insensitive searches, we set case sensitive searches, we
set We may even disallow case changes by setting
be checking the case difference, and let a - b mean that a and
b are of the same case. Now, we define, C ij
C else
, and replace:
else
else
The concept of changing cases can be extended even more generally. For
example, when searching a white page for telephone numbers, we don't want
an apartment number, such as 304B, to be recognized as a telephone number,
i.e., do not replace a character unless it is a digit to a digit. For the same
reason, we may not want to mix letters, digits and punctuation with each
other when searching for license plates, such as RMP-167, or postal codes,
such as H3A 2A7. For those applications, we can use above definitions for
but give a new interpretation of C. We will not elaborate them
here.
5.3 Combining Exact and Approximate Searches
We sometimes know in advance that only certain parts of the pattern may
have errors. For example, many spelling checkers may give no suggestions
for garantee. But suppose we knew the suffix rantee was spelled right. In
this case, we want to search part of the pattern exactly. By following agrep
standards [28], we denote this pattern as ga!rantee?. Characters inside a
!? cannot be edited using any one of the four operations.
To support both exact and approximate searches for the same pattern,
we need only modify I ij be a predicate
that determines whether p i is a member character inside an exact match !?.
Let function ? p i be a predicate that tells whether p i is the last character
inside a !?. The new definitions are:
I else
else
else
else
R P
else
By above definitions, string guarantees also matches ga!rantee? with
two insertions. To disallow insertions at the end of an exact match, we
introduce an anchor symbol, $ (borrowed from Unix standards). Pattern
ga!rantee?$ means that target strings must have the suffix rantee. What
needs to be changed is to set ?p i false when there is a
i.e., a pattern looks like In a similar way, we introduce another
anchor symbol, -, to prevent insertions at the beginning of an exact match.
For example, -!g?a!rantee?$ means that target strings must start with the
letter g and ended with the suffix rantee. This time, we set j p 0 true.
5.4 Approximate Regular Expression Search
The ability to match regular expressions with errors is important in prac-
tice. Regular expression matching and k-approximate string matching solve
different problems. They may overlap but do not coincide. For example, the
regular expression a#c, where # is a one-place wildcard, can be written as a
1-approximate match with substitutions and insertions on the second character
only. Baeza-Yates [5] proposed an search algorithm for the full regular
expression on tries.
In this section, we will extend our trie algorithm to deal with regular
expression operators with errors. However, the extension operators work
only for single characters, i.e., there is no group operator. For example, we
may search for a*b with mismatches, but not (ab)*. Searching tries for the
full regular expression with approximation is an open problem.
5.4.1 Alternative Operator
Suppose we want to find all postal codes, H3A 2A?, where ? is either 1, 3,
or 7. First, we introduce the notation, [137] (once again, borrowed from
Unix standard), to describe either 1, 3, or 7. Formally, operator [] defines a
set of alternative characters. Thus, H3A 2A7 matches pattern H3A 2A[137]
exactly; while H3A 2A4 matches the pattern with one mistake.
Substituting one character with a set of allowable characters can be easily
achieved by redefining the = and ' operators of Section 2 and Section 5.2
respectively. For pattern P 7 =H3A 2A[137], we have
as either 1= w j , or 3= w j , or 7= w j . In other
words, if p i is a set of allowable characters, matches one of
the characters defined by the [] operator. ' is the case insensitive version
of =.
As syntactic sugar (Unix standards), we may denote [a-z] for all lower
case letters, i.e., a range of characters; [-aeiou] for anything but vowels,
i.e., a complement of the listed characters; and . for all characters, i.e., the
wild card.
5.4.2 Kleen Star
The kleen star allows its associated characters to be deleted for free, or to
be replaced by more than one identical character for free. For example, ac,
abc, abbc and abbbc all match pattern ab*c exactly. a[0-9]*c means that
an unbounded number of digits can appear between a and c.
Let function \Lambdap i be a predicate which says there is a Kleen star associated
with the pattern character p i . To support the Kleen star operator, we need
only to change I ij and D ij . Remember, p i means that we can delete p i at
no cost, and insert any number of at no cost. We now give
the new definition as follows:
I else
else
5.5 Counter
Our algorithm can also be extended to provide counters. Unlike a Kleen star,
e.g., ab*c, which means that unbounded number of bs can appear between
a and c, pattern ab?c says that only ac and abc match exactly. If we want
these strings abbc, abbbc, abbbbc and abbbbbc, i.e., two to five bs between a
and c, we can write the pattern as abbb?b?b?c, or abf2,5gc (Unix syntax).
To support counters, we need only to modify D_ij, since p_i? means the character
p_i can be deleted for free. Let us define a function ?p_i which says there is
a counter symbol, ?, associated with the pattern character p_i. The new
definition is:
D_ij = 0 if ?p_i, and D_ij = 1 otherwise.
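
As an illustration of how the *p_i and ?p_i predicates fold into the insertion and deletion costs, here is a self-contained sketch (ours, not the paper's algorithm; the function name edit_distance, the flag encoding, and the plain table-filling formulation are assumptions, since the paper evaluates these costs column by column while walking the trie) for matching one word against a pattern whose positions may carry * or ?.

def edit_distance(pattern, word, match=lambda p, w: p == w):
    """pattern is a list of (item, flags) pairs, e.g. [('a',''),('b','*'),('c','')]."""
    m, n = len(pattern), len(word)
    # C[i][j] = cost of matching the first i pattern items against w_1..w_j.
    C = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        item, flags = pattern[i - 1]
        d = 0 if ('*' in flags or '?' in flags) else 1   # free deletion?
        C[i][0] = C[i - 1][0] + d
    for j in range(1, n + 1):
        C[0][j] = j                                      # insertions before p_1
    for i in range(1, m + 1):
        item, flags = pattern[i - 1]
        for j in range(1, n + 1):
            D_ij = 0 if ('*' in flags or '?' in flags) else 1
            I_ij = 0 if ('*' in flags and match(item, word[j - 1])) else 1
            S_ij = 0 if match(item, word[j - 1]) else 1  # substitution cost
            C[i][j] = min(C[i - 1][j] + D_ij,            # delete p_i
                          C[i][j - 1] + I_ij,            # insert w_j
                          C[i - 1][j - 1] + S_ij)        # match / substitute
    return C[m][n]


ab_star_c = [('a', ''), ('b', '*'), ('c', '')]
assert edit_distance(ab_star_c, 'ac') == 0       # b* deleted for free
assert edit_distance(ab_star_c, 'abbbc') == 0    # extra b's inserted for free
assert edit_distance(ab_star_c, 'adc') == 1      # one substitution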
6 Dictionary Search
By a dictionary, we mean a text file which contains keywords only, i.e., a
set of strings that are pairwise distinguishable. For dictionary searches, we
are only interested in those keywords that relate to the pattern by some
measurements (in our case, the edit distance). The orders (or locations) of
those keywords are not important to us. For such applications, the text file
can be stored entirely in a trie structure. The trie in Figure 3 is a dictionary
trie. Experimental results in [22] show that dictionary trie sizes are about
50% of the file sizes for English words. In other words, we are providing
not only an algorithm for both exact and approximate searches, but also a
data structure for compressing the data up to 50%. Searches are done on the
structure without decompression operations.
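
A minimal in-memory sketch of a dictionary trie with exact lookup may help fix ideas; it is ours, not the paper's paged, pointer-free, two-bits-per-node representation for secondary storage, and the class and method names are assumptions. It only shows how a set of keywords is shared along common prefixes and how an exact search stops as soon as an irrelevant subtrie is reached.

class DictionaryTrie:
    _END = '$'                       # end-of-word marker

    def __init__(self):
        self.root = {}

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})
        node[self._END] = True       # mark a complete keyword

    def contains(self, word):
        node = self.root
        for ch in word:
            if ch not in node:
                return False         # irrelevant subtrie: search stops here
            node = node[ch]
        return self._END in node


trie = DictionaryTrie()
for w in ('guarantee', 'guaranty', 'grant'):
    trie.insert(w)
assert trie.contains('grant') and not trie.contains('gran')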
Searching soundex codes [20] is an example of the dictionary search. By
replacing English words with their soundex codes and storing the codes in
the dictionary trie, we are able not only to search any given soundex code
efficiently (exact trie search) but also to reduce the soundex code size by half.
Searching an inverted file is another example of dictionary search. An
inverted file is a sorted list of keywords in a text. The trie structure keeps
the order of its keys. By storing keywords in the dictionary trie, we can either
search for the keywords or for their location. Furthermore, our trie algorithm
provides search methods for various patterns with or without mismatches.
7 Conclusion
Tries have been used to search for exact matches for a long time. In this
paper, we have expanded trie methods to solve the k approximate string
matching problem. Our approximate search algorithm finds candidate words
with k differences in a very large set of n words in O(k|Σ|^k) expected worst
time. The search time is independent of n. No other algorithm which achieves
this time complexity is known.
Our algorithm searches a trie depth first with shortcuts. The smaller k
is, the more subtries will be cut off. When k = 0, all irrelevant subtries are
cut off, and this gives the exact string search in time proportional only to the
length of the string being sought. The algorithm can also be used to search
full regular expressions [3].
We have proposed a trie structure which uses two bits per node and
has no pointers. Our trie structure is designed for storing very large sets
of word strings on secondary storage. The trie is partitioned by pages and
neighboring nodes, such as parents, children and siblings, are clustered in
terms of pages. Pages are organized in a tree-like structure and are searched
in time logarithmic in the file size.
Our trie method outperforms agrep, as our results show, by an order of
magnitude for k=0, and by a factor of 4 for k=1. Only when k ≥ 2 does the
linear worst case performance of agrep begin to beat the trie method for the
moderately large documents measured.
8 Future Work
Spelling checkers based on minimal edit distance search perform well
for typographic errors and for some phonetic errors. For example,
exsample to example has one difference, but sinary to scenery has three
differences. To deal with phonetic misspellings, we may follow Veronis's work
[25] by giving weights to edit operations based on phonetic similarity, or using
non-integer distances to obtain finer grained scores on both typographic
and phonetic similarities. Another solution is to follow the convention which
assumes no mistakes in the first two letters, or gives higher penalty for the
first few mistakes. Excluding the first few errors allows us to bypass many
subtries near the trie root. This not only gives quicker search time, but also
reduces the number of possible candidates. With a small set of candidate
words, we can impose a linear phonetic check.
Even with one difference, a short word, say of 2 letters, matches many
English words. There are more short words than long words. This type of
error is difficult to correct out of context.
Acknowledgments
This work was supported by the Canadian Networks of Centres of Excellence
(NCE) through the Institute of Robotics and Intelligent Systems (IRIS) under
projects B-3 and IC-2, and by the Natural Sciences and Engineering
Research Council of Canada under grant NSERC OGP0004365.
--R
The myriad virtues of suffix trees.
String searching algorithms.
Efficient text searching of regular expressions.
A new approach to text searching.
Fast and practical approximate string matching.
A fast string searching algorithm.
Approximate string matching in sublinear expected time
A technique for computer detection and correction of spelling errors.
Efficient searching of text and pictures.
New indices for text: PAT trees and PAT arrays.
Approximate string matching.
An approximate string-matching algorithm
Fast pattern matching in strings.
Techniques for automatically correcting words in text.
Binary codes capable of correcting deletions
A space economical suffix tree construction algorithm.
Relational Information Systems.
Trie methods for representing text.
Patent Numbers
Multidimensional tries used for associative searching.
Trie Methods for Text and Spatial Data on Secondary Storage
String Searching Algorithms.
Finding approximate patterns in strings.
Computerized correction of phonographic errors.
The string-to-string correction problem
Linear pattern matching algorithms.
Fast text searching.
--TR
--CTR
Johan Rnnblom, High-error approximate dictionary search using estimate hash comparisons, SoftwarePractice & Experience, v.37 n.10, p.1047-1059, August 2007
Eike Schallehn , Kai-Uwe Sattler , Gunter Saake, Advanced grouping and aggregation for data integration, Proceedings of the tenth international conference on Information and knowledge management, October 05-10, 2001, Atlanta, Georgia, USA
R. W. P. Luk, Time-Space Trade-Off Analysis of Morphic Trie Images, IEEE Transactions on Knowledge and Data Engineering, v.13 n.6, p.1028-1032, November 2001
Kimmo Fredriksson, On-line Approximate String Matching in Natural Language, Fundamenta Informaticae, v.72 n.4, p.453-466, December 2006
Sreenivas Gollapudi , Rina Panigrahy, A dictionary for approximate string search and longest prefix search, Proceedings of the 15th ACM international conference on Information and knowledge management, November 06-11, 2006, Arlington, Virginia, USA
Liang Jin , Chen Li , Nick Koudas , Anthony K. H. Tung, Indexing mixed types for approximate retrieval, Proceedings of the 31st international conference on Very large data bases, August 30-September 02, 2005, Trondheim, Norway
Eike Schallehn , Kai-Uwe Sattler , Gunter Saake, Efficient similarity-based operations for data integration, Data & Knowledge Engineering, v.48 n.3, p.361-387, March 2004
Gonzalo Navarro , Ricardo Baeza-Yates , Joo Marcelo Azevedo Arcoverde, Matchsimile: a flexible approximate matching tool for searching proper names, Journal of the American Society for Information Science and Technology, v.54 n.1, p.3-15, January
Jung-Im Won , Sanghyun Park , Jee-Hee Yoon , Sang-Wook Kim, An efficient approach for sequence matching in large DNA databases, Journal of Information Science, v.32 n.1, p.88-104, February 2006 | approximate matching;text search |
627771 | The Starburst Active Database Rule System. | Abstract: This paper describes our development of the Starburst Rule System, an active database rules facility integrated into the Starburst extensible relational database system at the IBM Almaden Research Center. The Starburst rule language is based on arbitrary database state transitions rather than tuple- or statement-level changes, yielding a clear and flexible execution semantics. The rule system has been implemented completely. Its rapid implementation was facilitated by the extensibility features of Starburst, and rule management and rule processing are integrated into all aspects of database processing. | 1 Introduction
Active database systems allow users to create rules. Rules specify data manipulation operations
to be executed automatically whenever certain events occur or conditions are met.
Active database rules provide a general and powerful mechanism for traditional database features
such as integrity constraint enforcement, view maintenance, and authorization checking;
active database rules also support non-traditional database features such as version
management and workflow control. Because active database rules are similar to the forward-chaining
production rules used by Artificial Intelligence applications, active database systems
also provide a convenient and efficient platform for large and efficient knowledge bases and
expert systems.
In this paper we describe our development of the Starburst Rule System, an active
database extension to the Starburst prototype extensible relational database system at the
IBM Almaden Research Center. We cover both our design of the rule language and our
implementation of rule processing as an extension to Starburst. The Starburst rule language
This work was performed while the author was at the IBM Almaden Research Center, San Jose, CA.
differs from most other active database rule languages in that it is based on arbitrary database
state transitions rather than tuple- or statement-level changes, permitting an execution semantics
that is both cleanly-defined and flexible. The implementation of the Starburst Rule
System was completed rapidly and relies heavily on the extensibility features of Starburst.
The Starburst rule processor differs from most other active database rule systems in that
it is completely implemented, and it is fully integrated into all aspects of database process-
ing, including query and transaction processing, concurrency control, rollback recovery, error
handling, and authorization.
The paper proceeds as follows. In Section 2 we survey other active database rule systems.
In Section 3 we describe the syntax of the Starburst rule language and in Section 4 we specify
the semantics of rule execution; examples are given in Section 5. The architecture of the
rule system implementation is described in Section 6. Section 7 covers several implementation
features in more detail, including transition information maintenance, concurrency
control, authorization, and error handling. Section 8 concludes and provides a retrospective
discussion of the Starburst Rule System, highlighting what we feel are the successes and the
failures of our language design and implementation. Finally, in Section 9 we mention several
applications of the Starburst Rule System, and we discuss future directions of this work.
2 Related Work
Numerous other active database systems have been designed and some have been imple-
mented. The three systems closest to the Starburst Rule System are Ariel [31], the second
version of the POSTGRES Rule System [42], and Chimera [12,14]. The Ariel system has a
rule language and execution semantics based closely on OPS5 [9], a production rule language
originally designed for expert systems. The Ariel project has focused on the design of an
OPS5-like rule language for the database setting, and on methods for highly efficient rule
condition testing using variations on the Rete and TREAT algorithms designed for OPS5
[44]. The Ariel rule language is fully implemented using the Exodus database toolkit [31].
The POSTGRES Rule System, sometimes referred to as PRS2 to distinguish it from an earlier
proposal [41], focuses in both its language and its implementation on providing several
different classes of rules, each appropriate for a particular suite of applications. There are
two implementations of the POSTGRES Rule System, one based on run-time marking of
tuples affected by rules, the other based on compile-time rewriting of queries to incorporate
the effects of rules [42]. The Chimera system combines object-oriented, deductive, and active
database technology. Its active rule language is based on Starburst's, with extensions for
object-orientation and for "configurable" rule semantics (see Section 4). A first prototype of
Chimera has been implemented, employing some techniques adapted from Starburst [12].
There are several other relational active database systems, not as closely related to Starburst
as the systems described above. Two projects, DATEX [8] and DIPS [38], implement
the OPS5 rule language using an underlying database system and special indexing techniques
to support efficient processing of large rule and data sets. The PARADISER project also
uses a database system for efficient processing of expert system rules, but in PARADISER
the focus is on distributed and parallel rule processing [23]. RPL (for Relational Production
Language) was an early project in relational active database systems; RPL includes
an OPS5-like rule language based on relational queries and a prototype implementation in
which rule processing is loosely coupled to a commercial relational DBMS [22]. A-RDL is an
extension to the RDL deductive database system that supports active rules [39]. The Alert
project explores how active rules can be supported on top of a passive database system with
minimal extensions [37]. Finally, Heraclitus is a relational database programming language
with delta relations as first-class objects; a primary goal of the Heraclitus language is to
simulate and support active rule processing [29].
One early project and numerous recent efforts (including Chimera) consider active object-oriented
database systems. Although some issues in active database systems are common
to both relational and object-oriented environments, there are many significant differences;
furthermore, to date most object-oriented active database systems do not have implementations
that are as advanced as their relational counterparts. HiPAC was a pioneering project
in the area of active database systems; HiPAC includes a very powerful rule language for an
object-oriented data model, a flexible execution semantics, and several main-memory experimental
prototypes [20]. Recently there has been an explosion of projects in object-oriented
active database systems; many of these projects are still preliminary; see e.g. [3,4,6,7,10,11].
Several previous papers have described language, implementation, or application development
issues related to the Starburst Rule System. An initial proposal for the Starburst
rule language appears in [49]. [48] describes how the extensibility features of the Starburst
prototype are used in implementing the rule system. Details of Starburst's rule priority system
are given in [1]. A series of papers describe how rules in the Starburst language can be
generated automatically from specifications for particular applications: integrity constraints
are considered in [15], view maintenance in [16], deductive databases in [19], and heterogeneity
management in [18]. A denotational semantics for the Starburst rule language is given
in [45], while [2] describes methods for static analysis of Starburst rules. Finally, [17] discusses
how the Starburst Rule System can be extended for parallel and distributed database
environments. Except for a short overview in [46] and an unpublished user's guide [47], this
is the first paper to provide a complete description of the final, operational, Starburst Rule
System.
3 Syntax of Rule Language
The syntax of the Starburst rule language is based on the extended version of SQL supported
by the Starburst database system [30]. The Starburst rule language includes five commands
for defining and manipulating rules: create rule, alter rule, deactivate rule, activate
rule, and drop rule. In addition, rules may be grouped into rule sets, which are defined
and manipulated by the commands create ruleset, alter ruleset, and drop ruleset. We
describe each of these eight commands below. The Starburst Rule System also includes some
simple user commands for querying and displaying rules, which we omit from this paper (see
[47] for details), and commands for user or application initiation of rule processing, which
we describe in Section 4.
3.1 Rule Creation
Rules are defined using the create rule command. The syntax of this command is:
create rule name on table
when triggering-operations
[ if condition ]
then action-list
[ precedes rule-list ]
[ follows rule-list ]
The name names the rule, and each rule is defined on a table. Square brackets indicate
clauses that are optional.
The when clause specifies what causes the rule to be triggered. Rules can be triggered by
any of the three relational data modification operations: inserted, deleted, and updated.
The updated triggering operation may include a list of columns; specifying updated without
a column list indicates that the rule is triggered by updates to any column. Each rule
specifies one or more triggering operations in its when clause; any of the specified operations
on the rule's table will trigger the rule.
The if clause specifies a condition to be evaluated once the rule is triggered. A rule condition
is expressed as an unrestricted select statement in Starburst's SQL. The condition is
true if and only if the select statement produces at least one tuple. The if clause may be
omitted, in which case the rule's condition is always true. Note that using an unrestricted
SQL select statement as the condition part of a rule is equivalent to using an unrestricted
SQL predicate: any SQL predicate can be transformed into an equivalent SQL select statement
(on a dummy table), while any SQL select statement can be transformed into an
equivalent SQL predicate (using exists).
The then clause specifies a list of actions to be executed when the rule is triggered and its
condition is true. Each action may be any database operation, including data manipulation
commands expressed using Starburst's SQL (select, insert, delete, update), data definition
commands (e.g. create table, drop rule), and rollback. The actions are executed
sequentially in the order listed.
The optional precedes and follows clauses are used to specify priority orderings between
rules. When a rule R1 specifies a rule R2 in its precedes list, this indicates that if both
rules are triggered at the same time, then R1 will be considered first, i.e. R1 precedes R2. If
R1 specifies R2 in its follows list, this indicates that if both rules are triggered at the same
time, then R2 will be considered first. Cycles in priority ordering are not permitted.
Rule conditions and actions may refer to arbitrary database tables; they also may refer
to special transition tables. There are four transition tables: inserted, deleted, new-
updated, and old-updated. If a rule on a table T specifies inserted as a triggering
operation, then transition table inserted is a logical table containing the tuples that were
inserted into T causing the rule to be triggered; similarly for deleted. Transition table new-
updated contains the current values of updated tuples; old-updated contains the original
values of those tuples. 1 A transition table may be referenced in a rule only if it corresponds
to one of the rule's triggering operations.
Note that Starburst rules do not include a for each row option, or a before option for
triggering operations. (Readers may be familiar with these options from commercial SQL-based
trigger systems [32,34].) Neither option is appropriate in the context of rules that are
evaluated over arbitrary transitions; this issue is addressed further in Section 8.
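
Purely as an illustration of the information a create rule statement carries, here is a small sketch (ours, not Starburst code; the class and field names are assumptions, and the SQL strings are hypothetical, loosely inspired by the examples of Section 5) of a rule held as a data object.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Rule:
    name: str
    table: str
    triggering_ops: List[str]                         # subset of {'inserted','deleted','updated'}
    updated_columns: Optional[List[str]] = None       # None = updates to any column
    condition_sql: Optional[str] = None               # omitted if-clause means "always true"
    actions_sql: List[str] = field(default_factory=list)   # executed in the order listed
    precedes: List[str] = field(default_factory=list)
    follows: List[str] = field(default_factory=list)


sal_watch = Rule(
    name='sal-watch', table='emp',
    triggering_ops=['inserted', 'updated'], updated_columns=['salary'],
    condition_sql='select * from emp where salary > 100',   # true iff it yields a tuple
    actions_sql=['delete from emp where salary > 100'],
    follows=['cascade'])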
3.2 Other Rule Commands
The components of a rule can be changed after the rule has been defined; this is done using
the alter rule command. The syntax of this command is:
alter rule name on table
[ if condition ]
[ then action-list ]
[ precedes rule-list ]
[ follows rule-list ]
[ nopriority rule-list ]
1 If a rule is triggered by updated on any column, then transition tables new-updated and old-updated
contain tuples for which any column was updated. If a rule is triggered by updated on particular columns,
then transition tables new-updated and old-updated contain the entire tuples for which at least one of
the specified columns was updated.
The if, then, precedes, and follows clauses in this command use the same syntax as
the corresponding clauses in the create rule command. The if clause specifies a new rule
condition that replaces the existing one. Similarly, the then clause specifies a new list of
actions that replaces the existing list. The precedes and follows clauses specify rules to
be added to the existing precedes and follows lists, while the nopriority clause is used
to remove priority orderings. Notice that the when clause of a rule may not be altered; to
change triggering operations, a rule must be dropped and then re-created (this restriction is
due to implementation details).
An existing rule can be deleted by issuing the drop rule command:
drop rule name on table
Sometimes it is useful to temporarily deactivate rules (particularly for debugging pur-
poses). When a rule is deactivated, it will not be triggered and its actions will not be
executed, even if its triggering operations occur. A deactivated rule behaves as if the rule
were dropped, except it remains in the system and can easily be reactivated. A rule is
deactivated by issuing the command:
deactivate rule name on table
To reactivate a rule that has been deactivated, the following command is issued:
activate rule name on table
3.3 Rule Sets
We have provided a basic facility in the Starburst Rule System for grouping rules into sets.
Rule sets can be used to structure rule applications in conjunction with the process ruleset
command, described in Section 4.3. 2 A rule set is defined using the create ruleset command:
create ruleset name
Rules are added to and deleted from a rule set using the alter ruleset command:
alter ruleset name
[ addrules rule-list ]
[ delrules rule-list ]
Each rule may be in any number of rule sets (including none), and each set may contain any
number of rules. A rule set is deleted by issuing the command:
drop ruleset name
2 Rule sets might also be used to group rules for the purposes of shared priorities, activation/deactivation
of multiple rules, or inheriting common components, but such features are not provided in the current
Starburst Rule System.
4 Semantics of Rule Execution
In this section we explain the semantics of rule execution in Starburst, including the relationship
of rule processing to query and transaction processing. 3 For the descriptions of rule
behavior in this section, we assume that some number of rules already have been created,
and we assume that these rules are not altered, deactivated, activated, or dropped. (The
subtle interactions between transactions in which rules are changed and other concurrently
executing transactions are discussed in Section 7.3.)
Rules are processed automatically at the end of each transaction that triggers at least one
rule. In addition, rules may be processed within a transaction when special user commands
are issued. The semantics of rule execution is closely tied to the notion of database state
transitions. Hence, we begin by describing transitions, then we describe end-of-transaction
rule processing, and finally we describe command-initiated rule processing.
4.1 Transitions
When we determine whether a rule is triggered, and when we evaluate a rule's transition
tables, this is based on a precise notion of database state transition. A transition is the
transformation from one database state to another that results from the execution of a
sequence of SQL data manipulation operations. Since rule processing always occurs within
a transaction and is defined with respect to the operations performed in that transaction
only, we need not consider issues such as concurrent transactions and failures in defining
rule semantics. Furthermore, since rules are triggered by data modification only, and not by
data retrieval, execution of SQL select statements also need not be considered.
Suppose a sequence of SQL data modification operations (insert, delete, and/or update)
is executed, transforming the database from a state S0 to a state S1. We depict the
resulting transition Δ as:

S0 ---Δ---> S1

Rather than considering the individual operations creating a transition, rules consider
the net effect of transitions. The net effect of a transition consists of a set of inserted tuples,
a set of deleted tuples, and a set of updated tuples. Considering transition Δ above, we
associate with each inserted tuple its value in state S1, with each deleted tuple its value in
state S0, and with each updated tuple its (old) value in S0 and its (new) value in S1. If a
tuple is modified more than once during a transition, it still appears in at most one set in
the net effect of the transition. Specifically:
3 More detailed and formal treatments of Starburst's rule execution semantics can be found in [45,49].
ffl If a tuple is inserted and then updated, we consider this as an insertion of the updated
tuple.
ffl If a tuple is updated and then deleted, we consider this as a deletion of the original
tuple.
ffl If a tuple is updated more than once, we consider this as an update from the original
value to the newest value.
ffl If a tuple is inserted and then deleted, we do not consider it in the net effect at all.
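
The following runnable sketch (ours, not the Starburst implementation) applies the four rules above to a time-ordered list of tuple-level operations; the tuple representation (op, tuple_id, old_value, new_value) is an assumption.

def net_effect(operations):
    inserted, deleted, updated = {}, {}, {}   # tuple_id -> value(s)
    for op, tid, old, new in operations:
        if op == 'insert':
            inserted[tid] = new
        elif op == 'update':
            if tid in inserted:
                inserted[tid] = new                    # insert + update => insert new value
            elif tid in updated:
                updated[tid] = (updated[tid][0], new)  # keep the original old value
            else:
                updated[tid] = (old, new)
        elif op == 'delete':
            if tid in inserted:
                del inserted[tid]                      # insert + delete => nothing
            elif tid in updated:
                deleted[tid] = updated.pop(tid)[0]     # delete the original tuple
            else:
                deleted[tid] = old
    return inserted, deleted, updated


ops = [('insert', 1, None, 'a'), ('update', 1, 'a', 'b'),   # => insert 'b'
       ('update', 2, 'x', 'y'), ('delete', 2, 'y', None),   # => delete 'x'
       ('insert', 3, None, 'c'), ('delete', 3, 'c', None)]  # => nothing
assert net_effect(ops) == ({1: 'b'}, {2: 'x'}, {})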
For clarity, we use dashed arrows to denote transitions that result from user- or application-
generated data manipulation operations, while we use solid arrows to denote transitions that
result from rule-generated operations. For example, the following depicts a user-generated
transition followed by three rule-generated transitions:

S0 - -Δ1- -> S1 ---Δ2---> S2 ---Δ3---> S3 ---Δ4---> S4

Rules often consider composite transitions. For example, a rule might be triggered by a
composite transition Δ that is the net effect of transitions Δ1, Δ2, and Δ3. We depict this as:

S0 ---Δ1---> S1 ---Δ2---> S2 ---Δ3---> S3      (Δ = the composite transition from S0 to S3)
4.2 End-of-Transaction Rule Processing
Suppose a transaction X is executed and suppose that the net effect of the data modification
operations performed by X includes at least one operation that triggers at least one rule;
then rule processing is invoked automatically at the end of transaction X, before X commits.
Transaction X itself creates the initial triggering transition. As rules are executed, they
create additional transitions that may trigger additional rules or may trigger the same rules
again. If a rule action executes rollback, then the entire transaction aborts. Otherwise, the
entire transaction commits when rule processing terminates.
Rule processing itself consists of an iterative loop. In each iteration:
1. A triggered rule R is selected for consideration such that no other triggered rule has
priority over R (details of rule selection are discussed in Section 4.4 below).
2. R's condition is evaluated.
3. If R's condition is true, R's actions are executed.
For step 1, a rule is triggered if one or more of its triggering operations occurred in the
composite transition since the last time the rule was considered, or since the start of the
transaction if the rule has not yet been considered.
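
To make the loop concrete, here is a compact runnable sketch (ours; SketchRule, the callable condition/actions, and the crude "considered" marker are assumptions -- in Starburst, triggering is determined from the composite transition since the rule's last consideration, and conditions and actions are SQL executed by the query processor).

class SketchRule:
    def __init__(self, name, order, is_triggered, condition, actions):
        self.name, self.order = name, order
        self.is_triggered = is_triggered   # fn(db) -> bool, "triggered since last considered"
        self.condition = condition         # fn(db) -> bool
        self.actions = actions             # fn(db) -> None, may trigger further rules

def process_rules(rules, db, max_steps=100):
    for _ in range(max_steps):                          # guard against infinite triggering
        candidates = [r for r in rules if r.is_triggered(db)]
        if not candidates:
            return
        rule = min(candidates, key=lambda r: r.order)   # step 1: highest priority first
        db['considered'].add(rule.name)                 # the rule now "sees" this transition
        if rule.condition(db):                          # step 2
            rule.actions(db)                            # step 3
    raise RuntimeError('rule processing did not terminate')


db = {'emp_count': 3, 'considered': set()}
cap = SketchRule('cap-emps', order=1,
                 is_triggered=lambda d: d['emp_count'] > 2 and 'cap-emps' not in d['considered'],
                 condition=lambda d: True,
                 actions=lambda d: d.update(emp_count=2))
process_rules([cap], db)
assert db['emp_count'] == 2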
As illustration, suppose a user transaction creates transition Δ1. Suppose a rule R is
triggered by transition Δ1, it is selected for consideration, its condition is true, and its actions
are executed, creating transition Δ2:

S0 - -Δ1- -> S1 ---Δ2---> S2

At this point, any rule that was not considered in state S1 is triggered if one or more of its
triggering operations occurred in the composite transition consisting of Δ1 followed by Δ2;
R is triggered (again) if one or more of its triggering operations occurred in transition Δ2.
We have chosen this particular semantics for rule execution in part because it has the
useful property that every rule considers every change exactly once. 4 This property is illustrated
by the following example, which shows the (composite) transitions considered by a
rule R during several steps of rule processing:
[Diagram: states S0 through S6; the lower arrows show the individual transitions, labeled by
the rule executions that produced them, and the upper arrows show the composite transitions
considered by rule R at states S2, S3, and S6.]
The first time rule R is considered, at state S 2 , R uses the changes since initial state S 0 ,
i.e. the changes made by the initial user transaction and subsequent execution of a rule R 0 .
In its second consideration, at state S3, R uses the changes since S2. If R is considered
a third time, at state S 6 , it uses the changes since state S 3 . The upper arrows depict the
(composite) transitions used by rule R each time it is considered, illustrating clearly that R
considers every change exactly once.
Finally, note that during condition evaluation and action execution, the contents of a
rule's transition tables always reflect the rule's triggering transition.
4 Certainly there are many other possible choices for the semantics of rule execution. Our choice seems
appropriate for many applications; however, it is our belief that for every choice of semantics it is possible to
concoct a reasonable example for which that semantics is inconvenient or inappropriate. The recent Chimera
active rule system addresses this issue by allowing its users to choose between a number of alternative
semantics [14].
4.3 Rule Processing Commands
While end-of-transaction rule processing is sufficient for many applications, we have found
that in some cases it is useful for rules to be processed within a transaction (for example,
to verify consistency after some operations have been executed but before the transaction is
complete). For this, the Starburst Rule System provides three commands:
process rules
process ruleset set-name
process rule rule-name
Execution of the process rules command invokes rule processing with all rules eligible to
be considered and executed. The behavior of rule processing in response to a process rules
command is identical to end-of-transaction rule processing. In particular, recall from the
previous section that a rule is triggered if one or more of its triggering operations occurred
in the composite transition since the last time the rule was considered, or since the start of
the transaction if the rule has not yet been considered. This behavior is valid even if rules are
processed multiple times within a transaction as well as at the end of the transaction, and
this behavior retains the semantic property that every rule considers every change exactly
once.
Execution of the process ruleset command invokes rule processing with only those
rules in the specified set eligible to be considered and executed. Again, the behavior of rule
processing is identical to end-of-transaction rule processing, except in this case any rules that
are not in the specified set will not be considered for execution during rule processing, even
if they are triggered. (Such rules eventually will be considered for execution, however, at
end-of-transaction rule processing if not sooner.) The process ruleset command is useful,
for example, when rules are used to maintain integrity constraints [15] or materialized views
[16]. In this case, the rules associated with a particular constraint or view are grouped into
one set S. Whenever the constraint should be checked or the view refreshed (before the end
of a transaction), a process ruleset command is issued for set S.
Execution of the process rule command invokes rule processing with only the specified
rule eligible to be considered and executed. Once again, the behavior of rule processing is
identical to end-of-transaction rule processing, except in this case any rules other than the
specified rule will not be considered for execution. Note that although only one rule is eligible
to be considered and executed, rule processing still may involve several rule executions if the
rule triggers itself.
Since process rules, process ruleset, and process rule are executable Starburst
commands, these commands may be used in rule actions. Execution of such rule actions
results in "nested" invocations of rule processing. This behavior is acceptable and well-
defined, and it may be useful in certain scenarios; however, we have found that it can be
difficult to understand and frequently it results in infinite rule triggering.
4.4 Rule Selection
The precedes and follows clauses in rules allow them to be ordered in any way, as long as a
cycle is not produced. During rule processing, these user-specified priorities influence which
rule is selected for consideration when more than one rule is triggered (recall step 1 of the rule
processing algorithm in Section 4.2). Since the user-specified ordering on rules may be only
a partial ordering (indeed, no ordering is required), it still may be necessary for the system
to choose between multiple triggered rules. This selection is performed deterministically
by using an algorithm that induces a total ordering on all currently defined rules. The
total ordering is consistent with the user-specified partial ordering, and consequently also is
consistent with any ordering transitively implied by the user-specified ordering. (That is, if
rule R 1 is specified to precede rule R 2 , and rule R 2 is specified to precede rule R 3 , then R 1
will precede R 3
.) As a "tie-breaker", rules that have no user-specified or transitively implied
ordering are ordered based on rule creation time (i.e. R1 is ordered before R2 if and only
if R1 was created before R2), unless this ordering is impossible given the user-specified and
transitively implied orderings. Details and a formalization of this deterministic rule ordering
strategy can be found in [1].
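
A minimal sketch (ours; the actual deterministic algorithm is formalized in [1]) of one way to obtain such a total order: a topological sort of the precedes constraints in which ties among currently unconstrained rules are broken by creation time.

import heapq

def total_order(rules, precedes):
    """rules: dict name -> creation_time; precedes: set of (before, after) pairs (acyclic)."""
    succ = {r: set() for r in rules}
    indeg = {r: 0 for r in rules}
    for a, b in precedes:
        if b not in succ[a]:
            succ[a].add(b)
            indeg[b] += 1
    ready = [(rules[r], r) for r in rules if indeg[r] == 0]
    heapq.heapify(ready)                      # earliest-created rule first among the ready ones
    order = []
    while ready:
        _, r = heapq.heappop(ready)
        order.append(r)
        for s in succ[r]:
            indeg[s] -= 1
            if indeg[s] == 0:
                heapq.heappush(ready, (rules[s], s))
    return order


rules = {'cascade': 1, 'sal-control': 2, 'audit': 3}          # name -> creation time
assert total_order(rules, {('sal-control', 'cascade')}) == \
       ['sal-control', 'cascade', 'audit']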
5 Examples
We now provide examples to illustrate the syntax of rule creation and the semantics of rule
execution. Our examples are relatively simple and contrived, but they serve to compactly
illustrate the salient features of the Starburst rule language syntax and semantics. For more
comprehensive examples making up a full rule application, the reader is referred to [15].
We use the following generic employee-department relational database schema:
emp(emp-no, name, salary, dept-no)
dept(dept-no, mgr-no)
Our first example rule, cascade, implements a variation on the cascaded delete method
of enforcing referential integrity constraints. The rule is triggered whenever managers are
deleted; its action deletes all employees in departments managed by deleted employees, then
deletes the departments themselves. We assume a hierarchical structure of employees and
departments, and we assume that employee numbers are not immediately reused; that is, a
single transaction will not delete an employee and then insert a new employee with the same
employee number.
create rule cascade on emp
when deleted
then delete from emp
where dept-no in
(select dept-no from dept
where mgr-no in (select emp-no from deleted));
delete from dept
where mgr-no in (select emp-no from deleted)
Notice in particular that this rule has no condition (i.e. its condition is always true), it has
two actions to be executed in order, and it references transition table deleted. As will be
shown below, the self-triggering property of this rule under the semantics specified in Section 4
correctly reflects the rule's recursive nature.
Our second example rule, sal-control, controls employee salaries: Whenever employees
are inserted or salaries are updated, the rule checks the average salary. If the average salary
exceeds 50, then the rule deletes all inserted or updated employees whose salary exceeds 80.
create rule sal-control on emp
when inserted, updated(salary)
if (select avg(salary) from emp) > 50
then delete from emp
where emp-no in (select emp-no from inserted
union select emp-no from new-updated)
and salary > 80
precedes cascade
Notice in particular that this rule has two triggering operations (either of which will trigger
the rule), it has a condition, it references transition tables inserted and new-updated, and
it is specified to have priority over rule cascade.
Now consider rule processing when both of these rules are defined. Let the initial state
of the database include six employees (Jane, Mary, Jim, Bill, Sam, and Sue) with the
following management structure:

              Jane
             /    \
          Mary    Jim
           |     /    \
         Bill  Sam    Sue
Δ1: deletes Jane, sets Mary's salary > 80 (average salary > 50)
Δ2: deletes Mary
Δ3: deletes Bill and Jim
Δ4: deletes Sam and Sue
Δ5: nothing
Figure 1: Transitions for example rules

Refer to Figure 1. Suppose the initial user transaction Δ1 deletes employee Jane, and the
same transaction updates Mary's salary to exceed 80 so that the average salary exceeds 50.
Both rules cascade and sal-control are triggered in state S1; note that cascade is triggered
with respect to set {Jane} of deleted employees. Since rule sal-control has priority over
rule cascade, sal-control is chosen for consideration. Its condition is true, so it executes
its action, deleting employee Mary and creating transition Δ2; sal-control is not triggered
again. Now, in state S2, rule cascade is triggered by the composite transition since the initial
state (transitions Δ1 and Δ2), so its set of deleted employees is {Jane, Mary}. Rule cascade
executes its actions, deleting all employees and departments whose manager is either Jane
or Mary. Employees Bill and Jim are deleted, creating transition Δ3, and rule cascade is
triggered a second time. Now, in state S3, the rule considers only the most recent transition
Δ3, so the set of deleted employees is {Bill, Jim}. The rule's actions delete all employees and
departments managed by either Bill or Jim: employees Sam and Sue are deleted. Finally,
cascade executes a third time for transition Δ4, with deleted employees {Sam, Sue}, but no
additional employees are deleted.
6 System Architecture
The Starburst rule language as described in Sections 3 and 4 is fully implemented, with all
aspects of rule definition and execution integrated into normal database processing. The
implementation took about one woman-year to complete; it consists of about 28,000 lines
of C and C++ code including comments and blank lines (about 10,000 semicolons). Along
with the core capabilities of rule management and rule processing, we also have included
considerable infrastructure for program tracing, debugging, and user interaction.
The implementation relies heavily on three extensibility features of the Starburst database
system: attachments, table functions, and event queues. We describe these extensibility
features here only in enough detail to understand how they are used by the rule system;
further details on these and other extensibility features of Starburst can be
found in [30].
ffl The attachment feature is designed for extensions that require procedures to be called
after each tuple-level database operation on certain tables. An extension creates a new
attachment type by registering a set of procedures: a procedure to be invoked when an
attachment instance is created on a table, a procedure to be invoked when an instance
is dropped, a procedure to be invoked when an instance is altered, and procedures to
be invoked after each tuple-level insert, delete, or update operation on a table with one
or more attachment instances. Once an attachment type is established by registering
these procedures, instances of that type may be created, dropped, and altered on any
table. When an attachment instance is created on a table T , the procedure registered
for creation may build an attachment descriptor. This data structure is stored by the
system and provided to the extension whenever subsequent attachment procedures are
invoked for T .
ffl A table function is a virtual table whose contents are generated at run time by a
host language procedure, rather than stored in the database. A new table function is
created by registering a name along with a procedure for producing the tuples of the
table. The procedure may perform any computations as long as it generates tuples
of the appropriate schema. Any table listed in the from clause of a Starburst select
operation may be a table function. When a query referencing a table function is
processed, the table function's registered procedure is called to produce the contents
of the table.
ffl The event queue feature is designed for deferred execution of procedures. Once an
event queue has been declared, arbitrary procedures can be placed on the queue at any
time, to be executed the next time that queue is invoked. The rule system uses two
built-in event queues: one for procedures to be executed during the prepare-to-commit
phase of each transaction, and one for procedures to be executed in the case of rollback.
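
As a small illustration of the third feature, here is a sketch (ours, not Starburst's interface; class and method names are assumptions) of the deferred-execution idea behind an event queue.

class EventQueue:
    def __init__(self):
        self._procs = []

    def enqueue(self, proc):
        self._procs.append(proc)

    def invoke(self):
        procs, self._procs = self._procs, []
        for proc in procs:              # run everything queued for this point
            proc()


prepare_to_commit = EventQueue()
log = []
# The first relevant tuple operation queues rule processing exactly once:
prepare_to_commit.enqueue(lambda: log.append('rule processing'))
prepare_to_commit.invoke()              # at prepare-to-commit time
assert log == ['rule processing']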
Figure
2 illustrates the general architecture of the rule system, showing most of the
execution modules and data structures, how they fit together, and how they interact with
Starburst itself. In the diagram, Starburst, its query processor, and its data repository
appear on the left. The ovals in the center column indicate execution modules of the rule
system. The rectangles on the right represent memory-resident data structures maintained
Figure
2: Architecture of the Starburst Rule System
by the rule system. An arrow from an execution module to data indicates that the execution
module creates the data, while the reverse arrow indicates that the execution module uses
the data. A (double-headed) arrow from one execution module to another indicates that the
first module calls the second. When these arrows pass through or originate from a star, this
indicates that the call is made through an extensibility feature of Starburst. The invocation
arrows are labeled by the event causing a call to occur:
(a) Tuple-level insert, delete, or update on a table with one or more rules
(b) Reference to a transition table (transition tables are implemented as table functions)
(c) Evaluation of a rule condition or execution of a rule action
(d) Prepare-to-commit (event queue) or execution of a process rules, process ruleset,
or process rule command
(e) Execution of a rule definition command (create rule, alter rule, drop rule, etc.)
The data maintained by the rule system is divided into:
ffl Rule Catalog: The Rule Catalog resides in the database; it stores all information about
the currently defined rules and rule sets.
ffl Global Rule Information: For efficiency, some information regarding rules and rule
sets also is stored in main memory. This information is shared by all user processes,
and includes facts such as each rule's triggering operations, the sets rules belong to,
priorities between rules, and whether rules have been deactivated.
ffl Transition Log: This is a highly structured log of those operations occurring within a
transaction that are relevant to the currently defined rules. It is stored in main memory
and is called a transition log since, during rule processing, information about triggering
transitions is extracted from the log. The log also is used to produce transition tables.
This data structure is local, i.e. one Transition Log is maintained for each user process. 5
Further details on the Transition Log are given in Section 7.1.
ffl Rule Processing Information: This also is local to each process. It includes all information
pertinent to executing rules within a given transaction, including which rules
have been considered and when, and which rules are potentially triggered at a given
point in time.
In addition, we have registered an attachment type Rule in Starburst. A table has one
instance of this attachment type if and only if at least one rule is defined on the table. The
attachment descriptor for an instance contains an indicator of what information needs to
be written to the Transition Log when operations occur on the table (see Section 7.1 for
details).
The execution modules depicted in Figure 2 are:
ffl Rule Definition Module: This component processes all eight rule definition commands
described in Section 3. (Here we use "rule definition" generically to mean any command
that manipulates rules or rule sets.) The Rule Definition Module is responsible for
maintaining the Rule Catalog and updating the Global Rule Information. It also
creates, deletes, and modifies rule attachment instances and descriptors as appropriate.
ffl Rule Attachment Procedures: This set of procedures writes to the Transition Log whenever
relevant table modifications occur. A rule attachment procedure is called automatically
whenever an insert, delete, or update operation occurs on a table with at
least one rule.
5 In Starburst, each user or application corresponds to one process, and each such process is comprised of
a sequence of transactions.
ffl Transition Table Procedures: This set of procedures produces transition tables at run
time when they are referenced in rule conditions and actions. Transition tables are implemented
as table functions, so we have registered procedures for inserted, deleted,
new-updated, and old-updated with Starburst; these four procedures produce transition
tables by extracting appropriate tuples from the Transition Log.
ffl Rule Execution Module: This component is responsible for selecting and executing
triggered rules. It is invoked automatically at the commit point of every transaction
for which a rule may have been triggered; it also is invoked whenever the query processor
encounters a process rules, process ruleset, or process rule command. To
determine which rules are triggered, the Transition Log, the Global Rule Information,
and the local Rule Processing Information are examined to see which operations have
occurred and which rules are triggered by these operations. In the case of process
ruleset and process rule commands, the Rule Execution Module considers only the
specified subset of rules. Rule conditions are checked and actions are executed by
calling the Starburst query processor. Further details on rule execution are given in
Section 7.2.
The rule system also contains several components not illustrated in Figure 2:
ffl System Start-Up: Whenever Starburst is started or restarted, the rule system initializes
the Global Rule Information from the Rule Catalog. Rule attachments are initialized
automatically by Starburst.
ffl Process Start-Up and Transaction Clean-Up: At process start-up, the rule system
allocates its local data structures-the Transition Log and the Rule Processing Infor-
mation. Initially, these structures are empty. They are used during the course of each
transaction, then reset after end-of-transaction rule processing.
ffl Rollback Handler: The rule system must correctly handle a partial or complete rollback
at any time. The Rule Catalog and attachment information are rolled back automatically
by Starburst. However, the rule system must ensure that all memory-resident
data structures are modified to undo any changes made during the portion of the
transaction being rolled back. This is achieved by having each modification place an
appropriate undo operation on the rollback event queue.
7 Implementation Features
In the previous section we described the general architecture of the Starburst Rule System; in
this section we cover five specific and important features of the implementation in more detail:
transition information management, rule execution, concurrency control, authorization, and
error handling. Efficient transition information management and rule execution are crucial
for system performance, while concurrency control, authorization, and error handling are
necessary for full integration with database processing. 6
7.1 Transition Information
The attachment procedures that write to the Transition Log save information during query
processing so that the Rule Execution Module can determine which rules are triggered and
so the transition table references in rule conditions and actions can be evaluated. Since
the effect of rule action execution also is considered by rules, the Transition Log must be
maintained during rule processing as well; this happens automatically since rule actions are
executed by the Starburst query processor (recall Figure 2).
The semantics of rule execution dictates that, at any given time, different rules may
need to be considered with respect to different transitions. To do this, we include a (logical)
time-stamp with each entry in the Transition Log. We also include with the Rule Processing
Information the most recent time at which each rule has been considered; the transition for
a given rule is then computed based on entries in the Transition Log occurring after that
time.
The triggering operations and transition table references in rules determine which operations
and what information must be written to the Transition Log. As an example, suppose a
rule R is triggered by inserted on a table T , but does not reference the inserted transition
table. It is necessary to log the times at which insertions occur on T ; it also is necessary to
log the times at which deletions occur for tuples in T that were previously inserted, since
the net effect of an insert followed by a delete is empty. Now suppose R does reference the
inserted transition table. In this case, the values of the inserted tuples must be logged.
In addition, the new values of updated tuples must be logged for those tuples that were
previously inserted, since the inserted transition table must contain current values for its
tuples. Finally, suppose R also is triggered by updated, and suppose it references transition
table new-updated but not old-updated. Now, the new values of all updated tuples
must be logged; the old values need not be logged since transition table old-updated is
not referenced. Clearly there are many cases to consider, and we do not enumerate them
6 Readers satisfied with the implementation overview provided in Section 6 may skip this section without
sacrificing the flow of the paper.
here. From the set of rules on each table, the composite set of triggering operations and
transition table references is computed. Based on this set, an information code is stored
in the table's rule attachment descriptor. When attachment procedures are invoked, they
use this code to determine what information should be written to the Transition Log. This
approach guarantees that all and only the necessary information is saved in the Transition
Log.
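
A simplified sketch (ours) of how such an information code might be derived from the rules on one table; it deliberately over-approximates (for example, it asks for all delete time-stamps whenever some rule is triggered by inserted), whereas the real case analysis in the implementation is finer-grained.

def logging_requirements(rules_on_table):
    """Each rule: {'triggers': set of operations, 'refs': set of transition tables}."""
    need_ops, need_values = set(), set()
    for rule in rules_on_table:
        for op in rule['triggers']:
            need_ops.add(op)                      # time-stamps for triggering operations
            if op == 'inserted':
                need_ops.add('deleted')           # an insert later deleted cancels out
        if 'inserted' in rule['refs']:
            need_values |= {'inserted-values', 'update-new-values'}
        if 'deleted' in rule['refs']:
            need_values.add('deleted-values')
        if 'new-updated' in rule['refs']:
            need_values.add('update-new-values')
        if 'old-updated' in rule['refs']:
            need_values.add('update-old-values')
    return need_ops, need_values


r = {'triggers': {'inserted'}, 'refs': {'inserted'}}
assert logging_requirements([r]) == ({'inserted', 'deleted'},
                                     {'inserted-values', 'update-new-values'})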
The data structure we use for the Transition Log is a "double hash table" storing lists of
records. Each record represents one tuple-level operation and contains the tuple identifier,
operation, time-stamp and, when necessary, new and/or old values for the tuple. Often it is
necessary to access all records representing a certain operation on a certain table occurring
after a certain time (e.g. all tuples inserted into T since a rule was last considered). For
this, a hash is performed on the operation and table to obtain a linked list of the relevant
records in descending order of time-stamp. It sometimes is necessary to consider the history
of a given tuple to form the net effect of a transition (e.g. to merge updates, or to detect
if a deleted tuple was previously inserted). For this, records with the same tuple identifier
also are linked in descending order; these lists can be traversed from a given record or can
be obtained for a particular tuple by hashing on the tuple identifier. We have developed a
number of efficient algorithms for maintaining and traversing the Transition Log structure.
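
A toy in-memory sketch (ours; the real structure has its own record layout and list-maintenance algorithms) of the double hash table idea: each record is reachable both by (table, operation) and by tuple identifier, with each list kept newest first.

from collections import defaultdict

class TransitionLog:
    def __init__(self):
        self._clock = 0
        self.by_table_op = defaultdict(list)   # (table, op) -> records, newest first
        self.by_tuple = defaultdict(list)      # tuple id    -> records, newest first

    def append(self, table, op, tid, old=None, new=None):
        self._clock += 1
        rec = {'table': table, 'op': op, 'tid': tid,
               'ts': self._clock, 'old': old, 'new': new}
        self.by_table_op[(table, op)].insert(0, rec)
        self.by_tuple[tid].insert(0, rec)
        return rec

    def since(self, table, op, ts):
        """All records for (table, op) newer than time-stamp ts."""
        out = []
        for rec in self.by_table_op[(table, op)]:
            if rec['ts'] <= ts:
                break                           # only older entries follow
            out.append(rec)
        return out


log = TransitionLog()
rec = log.append('emp', 'insert', tid=7, new={'name': 'Sam'})
log.append('emp', 'insert', tid=8, new={'name': 'Sue'})
assert [r['tid'] for r in log.since('emp', 'insert', rec['ts'])] == [8]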
7.2 Rule Execution
The Rule Execution Module is invoked by the query processor whenever a process rules,
process ruleset, or process rule command is encountered. The Rule Execution Module
also must be invoked at the commit point of every transaction for which rules may have been
triggered. For end-of-transaction rule processing, the first time a rule attachment procedure
is called during a transaction-indicating that a relevant operation has occurred-the attachment
procedure places the Rule Execution Module on the prepare-to-commit event queue.
Then, when the transaction is ready to commit, rule execution is invoked automatically. 7
An important advantage of this approach (over the straightforward approach of invoking the
Rule Execution Module at the end of every transaction) is that no overhead is incurred by
transactions for which no rules are triggered.
During rule processing, we maintain a data structure called Potential-Rules as part of
the local Rule Processing Information; this data structure contains references to those rules
potentially triggered at each point in time. The rules in this structure are only "potentially"
7 If other procedures placed on the prepare-to-commit event queue may modify the database, then it is
important for the Rule Execution Module to be invoked after such procedures during queue processing.
Currently in Starburst no other prepare-to-commit procedures modify data, so the execution order of the
Rule Execution Module relative to other queued procedures is unimportant.
triggered because they are a conservative estimate-every triggered rule is in the set, but
there may be rules in the set that actually are not triggered: At the end of each transition,
all rules triggered by operations that occurred during the transition are added to Potential-
Rules without considering the net effect of the transition. Hence, for example, if tuples
were inserted into table T during the transition, then all rules triggered by inserted on T
are added to Potential-Rules, regardless of whether the inserted tuples subsequently were
deleted.
In practice, it is rare for operations in a transition to be "undone" in the net effect, so
Potential-Rules usually is not overly conservative. However, before processing a rule from
Potential-Rules, the net effect must be computed to verify that the rule is indeed triggered.
Note that by maintaining the potentially triggered rules, rather than the actually triggered
rules, we compute the net effect for only one rule in each "cycle" of rule execution, rather
than for all triggered rules.
When a rule is fetched from Potential-Rules for consideration, it must be chosen such
that no other rule with higher priority also may be triggered. This is achieved by maintaining
Potential-Rules as a sort structure based on the total ordering of rules described in Section
4.4.
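
A small sketch (ours) of the selection step: rules are popped from the priority-sorted Potential-Rules structure, and the relatively expensive net-effect check is applied only to the popped candidate.

import heapq

def next_rule_to_consider(potential, actually_triggered):
    """potential: heap of (order, rule_name); actually_triggered: fn(name) -> bool,
    computed from the net effect of the rule's triggering transition."""
    while potential:
        order, name = heapq.heappop(potential)
        if actually_triggered(name):       # verify against the net effect
            return name
        # otherwise: a false positive (e.g. inserts later deleted); discard it
    return None


potential = [(2, 'cascade'), (1, 'sal-control')]
heapq.heapify(potential)
assert next_rule_to_consider(potential, lambda n: n != 'sal-control') == 'cascade'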
7.3 Concurrency Control
Since Starburst is a multi-user database system, we must ensure that all aspects of the rule
system behave correctly in the presence of concurrently executing transactions. 8 For most
transactions, including those with triggered rules, concurrency control is handled automatically
by the database system since rule conditions and actions are executed through the
Starburst query processor. However, since rules themselves may be manipulated on-line, the
rule system must enforce concurrency control for transactions that perform rule definition
(i.e. transactions that create, delete, or modify rules or rule sets).
As examples of consistency issues involving rule definition, consider the following scenarios:
ffl Suppose a transaction X modifies a table T while a concurrent transaction deactivates
rule R on T . Should R be triggered by X?
ffl Suppose a transaction X triggers rules R 1 and R 2 while a concurrent transaction alters
the relative priority of R 1
and R 2
. Which ordering should be used by X?
8 Note, however, that Starburst is not a distributed database system, so issues of distributed access to
shared data and main memory structures are not relevant.
ffl Suppose a transaction X executes "process ruleset S" while a concurrent transaction
adds rule R to set S. Should R be triggered by X?
We address these issues in the Starburst Rule System by ensuring that transactions are
serializable not only with respect to data but also with respect to rules (including rule
triggering and rule sets). Furthermore, we ensure that the equivalent serial transaction
schedule with respect to rules is the same as the equivalent serial schedule with respect to
data.
Let X1 and X2 be transactions such that X1 precedes X2 in the serial schedule induced
by Starburst's concurrency control mechanism for data. Serializability of X 1 and X 2 with
respect to rules is guaranteed by enforcing the following three consistency requirements:
(1) Triggering consistency: If X 1
performs rule definition on a table T (i.e. X 1
in some way
modifies rules pertaining to T ), and X 2 modifies data in T , then X 2 's rule processing
sees the effect of X 1
's rule definition. If X 2
performs rule definition on a table modified
by
's rule processing does not see the effect of X 2
's rule definition.
(2) Rule set consistency: If X 1 modifies a rule set S and X 2 includes "process ruleset S",
then processing sees the effect of X 1 's rule set modification. If X 2 modifies
includes "process ruleset S", then X 1
's rule processing does not see the
effect of X 2 's rule set modification.
(3) Update consistency: If X 1 and X 2 both modify the same rule or rule set, then X 2 sees
the effect of X 1
's modification and X 1
does not see the effect of X 2
's modification.
In addition, the Starburst Rule System ensures consistency within a transaction by enforcing
the following two requirements:
(4) Intra-transaction triggering consistency: If a transaction X modifies a table T then X
cannot subsequently perform rule definition on T .
(5) Intra-transaction rule set consistency: If a transaction X executes a "process ruleset
S" operation then X cannot subsequently modify rule set S.
Lastly, the Starburst Rule System ensures consistency of rule ordering:
(6) Ordering consistency: If X is a transaction that triggers rules R1 and R2, then the
ordering between R1 and R2 does not change during X from the first time this ordering
is used in rule selection.
All six consistency requirements are ensured by protocols that check and/or set locks on data,
rules, or rule sets. In Starburst, locks are acquired throughout a transaction as needed and
are held until the transaction commits or rolls back. Hence, the equivalent serial schedule of
transactions with respect to data is based on commit time.
We enforce consistency requirements (1) and (4) as follows. When a transaction X
executes a rule definition command on table T , X first checks to see if it has modified T
(by checking if it holds any exclusive locks on data in T ). If so, then the rule definition
command is rejected. Otherwise, X obtains a table-level shared lock on T . This forces X
to wait until all transactions currently modifying T have committed, and it disallows future
modifications to T by other transactions until X commits.
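As a rough sketch of this check only (the lock-manager interface used here, holds_exclusive_lock and acquire_shared_lock, is hypothetical and not Starburst's actual API):

class ToyLockManager:
    def __init__(self):
        self.exclusive = set()          # (transaction, table) pairs
        self.shared = set()
    def holds_exclusive_lock(self, txn, table):
        return (txn, table) in self.exclusive
    def acquire_shared_lock(self, txn, table):
        # A real lock manager would block here until conflicting writers commit.
        self.shared.add((txn, table))

def begin_rule_definition(txn, table, locks):
    # Requirement (4): a transaction that already modified T may not go on to
    # perform rule definition on T.
    if locks.holds_exclusive_lock(txn, table):
        raise RuntimeError(f"{txn} already modified {table}; rule definition rejected")
    # Requirement (1): wait behind current writers of T and keep future writers
    # out until this transaction commits (table-level shared lock, held to commit).
    locks.acquire_shared_lock(txn, table)

locks = ToyLockManager()
locks.exclusive.add(("X1", "emp"))
begin_rule_definition("X2", "emp", locks)       # allowed: X2 has not written emp
# begin_rule_definition("X1", "emp", locks)     # would be rejected under requirement (4)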
Consistency requirement (2) is enforced by locking rule sets. Before modifying (creating,
altering, or dropping) rule set S, a transaction must obtain an exclusive lock on S. Before
processing rule set S, a transaction must obtain a shared lock on S. To enforce consistency
requirement (5), shared rule set locks cannot be upgraded to exclusive rule set locks.
Consistency requirement (6) is enforced by locking rules. When a rule R is added to data
structure Potential-Rules (recall Section 7.2), a shared lock is obtained on R. When a rule
definition command that affects rule ordering is executed (create rule, alter rule, or drop
rule), an exclusive lock is obtained on every rule whose ordering relative to other rules is
affected by the command. Note that even the ordering between unchanged rules may be
reversed, since transitive relationships may be introduced or dropped. To prevent ordering
relationships from changing within a transaction, shared rule locks cannot be upgraded to
exclusive rule locks.
Consistency requirement (3) is enforced automatically since rule and rule set modifications
are reflected in the Rule Catalog, and the Rule Catalog is subject to Starburst's
concurrency control mechanisms for data.
Further details of these locking protocols and proofs of their correctness appear in [21].
7.4 Authorization
In the authorization component of the Starburst Rule System we address a number of distinct
issues, including authorization to create rules on a given table, authorization to create rules
with given conditions and actions, authorization to alter or drop given rules, authorization
for rule sets, and authorization at rule execution time. In Starburst, lattices of privilege types
can be defined for arbitrary database objects, with higher types subsuming the privileges of
lower types. For example, for database tables the highest privilege is control; below this are
privileges write, alter, and attach; below write are privileges update, delete, and insert; below
update and delete is privilege read. When a table is created, its creator automatically obtains
control privilege on the table, which includes the ability to grant and revoke privileges on it.
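The table-privilege lattice just described can be pictured with a small illustrative encoding of ours (only to show how higher privilege types subsume lower ones; this is not Starburst code):

SUBSUMES = {
    "control": {"write", "alter", "attach"},
    "write":   {"update", "delete", "insert"},
    "update":  {"read"},
    "delete":  {"read"},
    "alter": set(), "attach": set(), "insert": set(), "read": set(),
}

def implies(held, needed):
    # True if holding privilege `held` also grants privilege `needed`.
    if held == needed:
        return True
    return any(implies(lower, needed) for lower in SUBSUMES[held])

print(implies("control", "read"))   # True: control subsumes everything
print(implies("attach", "read"))    # False: attach does not grant read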
For rules we have defined a simple linear lattice of privilege types: the highest privilege
is control, below this is alter, and privilege deactivate/activate is lowest. As with tables, a
rule's creator automatically obtains control privilege on the rule and may grant and revoke
privileges on it. To create a rule R on table T , R's creator must have both attach and
read privileges on T . 9 During rule creation, R's condition and actions are checked using
the creator's privileges. If the condition or actions contain commands the creator is not
authorized to execute, then the create rule command is rejected. To drop a rule R on table
T , we require either control privilege on T or attach privilege on T with control privilege
on R. To alter a rule, privilege alter is required; to deactivate or activate a rule, privilege
deactivate/activate is required. During rule processing, each rule's condition and actions
are executed using the privileges of the rule's creator (not the privileges of the transaction
triggering the rule).
We have defined two privilege types for rule sets, control and alter, with control subsuming
alter. A rule set's creator obtains control privilege on the rule set and may grant and revoke
privileges on it. Privilege control is needed to drop a rule set; privilege alter is needed to
add or delete rules from a rule set. No privileges on rules are needed to add or delete them
from rule sets, and no privileges are needed to execute process rules, process ruleset, or
process rule statements.
The Starburst Rule System currently does not enforce any authorization requirements
when users examine the rules or rule sets in the system: all rules may be queried and
inspected by any Starburst user. It is clear, however, that authorization requirements for
reading rules should ultimately be included in any complete active database system.
7.5 Error Handling
If an error occurs during the execution of a Starburst rule definition command (due to,
e.g., the creation of cyclic priorities, the inclusion of an action the creator is not authorized
to execute, or a syntactic flaw), then the rule definition command is rejected. During rule
processing, two types of errors can occur: an error may be generated during the evaluation of
a rule's condition or execution of a rule's action, or rules may trigger each other or themselves
indefinitely. In the first case, if an error is generated by the query processor when it executes
a rule condition or action, then the rule system terminates rule processing and aborts the
current transaction. For the second case, the rule system includes a "timeout" mechanism:
Once more than some number n of triggered rules have been considered, rule processing
terminates and the transaction is aborted; limit n is established by a system administrator.
9 In general, attach privilege on a table indicates that the user is permitted to alter the performance of
that table. We require read privilege on table T since rule R can implicitly read the contents of T through
transition tables without accessing T directly.
8 Conclusions and Retrospective
The Starburst Rule System is a fully implemented extension to the Starburst prototype relational
database system at the IBM Almaden Research Center. We have designed a rule language
that is flexible and general, with a well-defined semantics based on arbitrary database
state transitions. In addition to the usual commands for manipulating rules, our language
includes a basic rule set facility for application structuring, and it includes commands for
processing rules within transactions in addition to the automatic rule processing that occurs
at the end of each transaction. Rule processing in Starburst is completely integrated with
database query and transaction processing, including concurrency control, authorization,
rollback recovery, and error handling.
We have learned a number of interesting lessons from our careful development of the
Starburst rule language, from its thorough implementation, and from our experiments with
the running system on a variety of rule applications. With respect to our design of the
Starburst rule language, we make the following observations:
• Basing the semantics on arbitrary transitions offers considerable flexibility, and it generally
provides a clean execution behavior. Although users may feel initially that they
better understand tuple- or statement-level rule triggering, there can be surprising
anomalies in such behavior that do not arise with the Starburst semantics. On the
other hand, for very simple rule processing tasks, tuple-level or statement-level rule
processing usually does behave as the user expects, and it can be both more natural
and more efficient than the Starburst approach. 10 Note also that Starburst's transition-
oriented semantics prohibits a natural before option for rule triggering [34]. However,
again, specifying before may result in surprising rule interactions, where such behavior
is avoided with Starburst's rule semantics.
• Rule processing based on an iterative loop, as in Starburst, is intuitive, it seems to
be sufficient for most applications, and it is relatively easy to implement. Hence,
we believe that the more complex recursive rule processing algorithms used in, e.g.,
POSTGRES [42] or HiPAC [20], probably are not worthwhile.
• Complex conflict resolution policies, such as those used in OPS5 [9] and Ariel [31],
do not seem appropriate for most active rule applications. Simple relative priorities
appear to be sufficient, and they can be implemented easily and efficiently.
• A significant drawback in the Starburst rule language, as opposed to a number of other
active rule languages, is the lack of a language facility for "passing data" from a rule's
10 Consider, e.g., a rule that performs a simple modification to each inserted tuple and doesn't trigger any
other rules.
condition to its action. Note that the data associated with triggering operations is
available implicitly through transition tables. However, the data satisfying a rule's
condition is not directly available in the rule's action. In practice, users often write
Starburst rules that explicitly repeat the condition as a subquery in the action, or
that omit the condition altogether and place it in the action. A language feature for
referencing, in the action, the data satisfying the condition (as suggested in [15]) would
have been very useful.
• A convenient extension to the rule language would have been to allow rules that are
triggered by operations on multiple tables. In fact, this feature has no effect on the
semantics of the rule language [49], but was omitted due to the additional implementation
effort. Another useful extension would have been to allow rule actions that invoke
arbitrary host language procedures. Currently, this behavior can be simulated through
Starburst's foreign function feature in SQL [30], but host language procedures cannot
be called directly from rules.
• Rules in Starburst cannot be triggered by select operations. Although the reason for
this is partly implementation-dependent (the attachment extensibility feature is not
available for select operations), there are a number of semantic issues that would also
need to be addressed to add select as a triggering operation, such as whether rules are
triggered by nested select expressions.
With respect to our implementation of the Starburst Rule System, we make the following
observations:
• The extensibility features of Starburst offered a dramatic "head start" in implementing
the rule system. All three extensibility features that we used-attachments, event
queues, and table functions-were used heavily. Significant additional coding would
have been required had these features not been available.
• A number of main-memory data structures are maintained by the rule system (recall
Section 6). Because much of the work associated with rule processing involves manipulating
these structures, rule processing itself is very fast. However, each structure
needed recovery procedures coded for each of its operations (in case of a complete
or partial rollback), and certain important aspects of rule processing-such as the
number of rules, or the number of tuple-level operations relevant to rules within a
given transaction-are limited by the fact that these structures reside in memory. The
system would have been easier to implement and it would be more scalable if these
main-memory structures were implemented as persistent, recoverable database objects.
Unfortunately, one of the few things Starburst did not offer was a flexible facility for
such objects with the performance we desired for rule operations.
• Integrating rule processing directly into the database system, as opposed to a loosely
coupled approach, offers important advantages for both performance and functional-
ity. With a loosely coupled approach it would have been impossible to fully address
issues such as concurrency control, authorization, and recovery. In addition, significant
overhead would have been incurred by the need to intercept user commands and/or
database results at the client level rather than within the database system. Although
it may be unappealing (and, sometimes, impossible) to modify or extend the core code
of a database system, this appears to be a necessity if one wishes to build a fully
integrated active rule system with acceptable performance.
• Initial performance measurements have revealed that the vast majority of time spent in
rule processing is in fetching, compiling, and executing rule conditions and actions. In
Starburst we were unable to store precompiled queries, so rule conditions and actions
needed to be compiled each time they were executed. Once conditions and actions
are stored in compiled form, the main cost of rule processing will be in condition
evaluation and action execution (rather than other aspects of rule processing, such as
finding triggered rules, selecting the highest priority rule, etc.). This leads us to believe
that performance improvements will be made not by streamlining rule management or
the rule processing algorithm itself, but rather by finding ways to optimize condition
evaluation and action execution.
9 Applications and Future Work
The Starburst Rule System has been used as a platform for developing a number of applications
and for investigating various issues in active database systems. We have used
Starburst rules for enforcing integrity constraints [15], for maintaining materialized views
[16], and for implementing deductive databases [19], as well as for several other (more ad-
hoc) applications. We have studied how the Starburst Rule System can be supported in a
tightly-coupled distributed database environment with full distribution transparency [17];
we also have studied how the Starburst Rule System can be used to manage semantic heterogeneity
across loosely-coupled databases [18]. Because predicting and understanding the
behavior of active database rules is an important facet of application development, we have
developed methods for statically analyzing sets of Starburst rules; these analysis methods
determine (conservatively) whether a set of rules is guaranteed to terminate, and whether
the rules are guaranteed to produce a unique final state [2]. Other researchers have used the
Starburst Rule System as a basis for studying and implementing secure active databases [40],
dynamic integrity constraints [28,43], and automatically-generated compensating actions for
static constraints [13].
Although we do consider the Starburst Rule System to be complete at this time, there
are several directions in which it may be exercised, improved, and extended:
• Currently we have obtained only initial cursory performance results. We would like
to elaborate these results; this requires developing a mechanism for accurate measurements
and deriving a sufficient suite of test applications.
• As explained in Section 6, a rule's condition is evaluated by executing a query over the
database. We do incorporate one important optimization, namely that a rule condition
is understood to be true as soon as the first tuple in the query is found. However, we
do not support incremental condition monitoring methods such as those used in Ariel
[44] and in OPS5 [9,36]. We have explored incremental condition evaluation in the
context of Starburst [5], and we plan to explore other run-time optimization methods
as well. We are interested also in compile-time optimization methods, such as static
combination of multiple rules that have related conditions and/or actions.
• Statement-level rule processing can be achieved in the Starburst Rule System by issuing
a "process rules" command after each statement; it would be useful to provide a
more convenient mechanism for this. For example, we could predefine a system rule
set called Statement. Users would then add rules to this set, and the system would
automatically execute "process ruleset Statement" after each statement. A similar
mechanism could be provided for tuple-level rule processing.
• Currently, the Starburst Rule System includes only basic facilities for rule tracing and
for interaction between rule processing and application programs. The areas of debugging
and application interfaces offer considerable opportunities for useful extensions.
In addition to these Starburst-specific areas of future work, we hope and expect that the
Starburst Rule System will continue to be used as a basis for further research in active
database systems.
Acknowledgments
Thanks go to Stefano Ceri, Bobbie Cochrane, Shel Finkelstein, and Bruce Lindsay,
all of whom made important contributions to one aspect or another of the Starburst Rule
System.
--R
On maintaining priorities in a production rule system.
Behavior of database production rules: Termination
A rule based language for deductive OODBS.
A new perspective on rule support for object-oriented databases
Using delta relations to optimize condition evaluation in active databases.
A model for active object oriented database.
On developing reactive object-oriented databases
Index support for rule activation.
Programming Expert Systems in OPS5: An Introduction to Rule-Based Programming
Integrating object-oriented data modeling with a rule-based programming paradigm
Active rule management in Chimera.
Automatic generation of production rules for integrity maintenance.
Consolidated specification of Chimera
Deriving production rules for constraint maintenance.
Deriving production rules for incremental view maintenance.
Production rules in parallel and distributed database environments.
Managing semantic heterogeneity with production rules and persistent queues.
Deriving incremental production rules for deductive data.
A research project in active
Issues in Integrating Active Rules into Database Systems.
The Relational Production Language: A production language for relational databases.
Incremental database rule processing in PARADISER.
Rule management in object-oriented databases: A uniform approach
A DOOD RANCH at ASU: Integrating active
Integrating active concepts into an object-oriented database system
Ode as an active database: Constraints and triggers.
Deriving integrity maintaining triggers from transition graphs.
On implementing a language for specifying active database execution models.
Starburst mid-flight: As the dust clears
Rule condition testing and action execution in Ariel.
Database language SQL3 (X3H2/94/080 and SOU/003)
Supporting semantic rules by a generalized event/trigger mechanism.
Understanding the New SQL: a Complete Guide.
Active rules based on object-oriented queries
Advances in RETE pattern matching
An architecture for transforming a passive DBMS into an active DBMS.
Implementing large production systems in a DBMS environment: Concepts and algorithms.
Implementing high level active rules on top of a relational DBMS.
Multilevel secure rules: Integrating the multilevel and active data models.
The POSTGRES rule manager.
On rules
Implementing temporal integrity constraints using an active DBMS.
A performance comparison of the Rete and TREAT algorithms for testing database rule conditions.
A denotational semantics for the Starburst production rule language.
The Starburst Rule System: Language design
Starburst Rule System user's guide.
Implementing set-oriented production rules as an extension to Starburst
--TR
--CTR
Stefano Ceri , Florian Daniel , Federico M. Facca, Modeling web applications reacting to user behaviors, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.50 n.10, p.1533-1546, 14 July 2006
David Botzer , Opher Etzion, Self-Tuning of the Relationships among Rules' Components in Active Databases Systems, IEEE Transactions on Knowledge and Data Engineering, v.16 n.3, p.375-379, March 2004
Angela Bonifati , Stefano Ceri , Stefano Paraboschi, Active rules for XML: A new paradigm for E-services, The VLDB Journal The International Journal on Very Large Data Bases, v.10 n.1, p.39-47, August 2001
Goce Trajcevski , Peter Scheuermann , Herv Brnnimann , Agns Voisard, Dynamic topological predicates and notifications in moving objects databases, Proceedings of the 6th international conference on Mobile data management, May 09-13, 2005, Ayia Napa, Cyprus
Goce Trajcevski , Peter Scheuermann, Reactive maintenance of continuous queries, ACM SIGMOBILE Mobile Computing and Communications Review, v.8 n.3, July 2004
Alan Abrahams , David Eyers , Jean Bacon, An asynchronous rule-based approach for business process automation using obligations, Proceedings of the 2002 ACM SIGPLAN workshop on Rule-based programming, p.93-103, October 05, 2002, Pittsburgh, Pennsylvania
Barbara Catania , Elisa Bertino, Static Analysis of Logical Languages with Deferred Update Semantics, IEEE Transactions on Knowledge and Data Engineering, v.15 n.2, p.386-404, February | expert database systems;extensible database systems;database production rules;active database systems |
627784 | Computation of Stable Models and Its Integration with Logical Query Processing. | AbstractThe well-founded semantics and the stable model semantics capture intuitions of the skeptical and credulous semantics in nonmonotonic reasoning, respectively. They represent the two dominant proposals for the declarative semantics of deductive databases and logic programs. However, neither semantics seems to be suitable for all applications. We have developed an efficient implementation of goal-oriented effective query evaluation under the well-founded semantics. It produces a residual program for subgoals that are relevant to a query, which contains facts for true instances and clauses with body literals for undefined instances. This paper presents a simple method of stable model computation that can be applied to the residual program of a query to derive answers with respect to stable models. The method incorporates both forward and backward chaining to propagate the assumed truth values of ground atoms, and derives multiple stable models through backtracking. Users are able to request that only stable models satisfying certain conditions be computed. A prototype has been developed that provides integrated query evaluation under the well-founded semantics, the stable models, and ordinary Prolog execution. We describe the user interface of the prototype and present some experimental results. | Introduction
Significant progress has been made in understanding the declarative semantics of deductive
databases and logic programs with negation. Two dominant proposals are the well-founded
semantics [31] and the stable model semantics [13]. For a normal logic program, where the body
of each rule is a conjunction of literals, its well-founded semantics is characterized by a unique
three-valued model, called the well-founded partial model. It is well defined for all normal logic
programs. However, the well-founded semantics is inadequate in dealing with reasoning by cases
or multiple alternative situations.
Example 1.1 Consider the following program:
teach(john, cse5381) :- -teach(mary, cse5381).
teach(mary, cse5381) :- -teach(john, cse5381).
covered(Course) :- teach(Faculty, Course).
Its well-founded partial model is such that every ground atom is undefined, thus providing no
useful information about the scenario being described. 2
The well-founded semantics of normal logic programs has been extended by Van Gelder [30]
to general logic programs, where the body of each rule may be an arbitrary first-order formula.
The resulting semantics is called the alternating fixpoint logic [30].
The notion of stable models [13] originated from the work on autoepistemic logic [12]. Each
stable model represents a set of beliefs that can be derived from itself. In Example 1.1, there are
two stable models, one in which John teaches CSE 5381 and the other in which Mary teaches
CSE 5381. In either case, CSE 5381 is covered. Unlike the well-founded partial model, stable
models may not exist for a given program, e.g., p :- -p, and even if they exist, they may not be
unique.
Recent research shows that the well-founded partial model and (two-valued) stable models
are two extreme cases of three-valued stable models [9, 22, 26]. The well-founded partial model
coincides with the smallest three-valued stable model. It corresponds to the skeptical semantics
that includes only beliefs that are true in all possible situations. On the other hand, the notion
of stable models captures the credulous semantics that concludes as many beliefs as possible from
a normal logic program.
Separate techniques have been developed for query evaluation under the well-founded semantics
and for computing stable models. The well-founded semantics has a constructive definition
based upon a least fixpoint construction. For function-free programs, it has a polynomial time
data complexity [31]. In addition to direct extensions of SLDNF resolution [21, 24], various
mechanisms of positive and negative loop handling have been incorporated for effective query
evaluation under the well-founded semantics [2, 3, 6, 7, 23, 28]. However, not all of them can be
extended directly for stable model computation.
The definition of stable models requires guessing an interpretation and then verifying if it is
a stable model. In fact, the problem of the existence of a stable model of a logic program is NP-complete
[17]. There have been several proposals for stable model computation [10, 19, 20, 26].
Two aspects are common. One is that only two-valued stable models are computed. This is
not surprising. Two-valued stable models represent the credulous semantics that does not allow
any incomplete information, and they have a smaller search space from a computational point
of view. The other common aspect is that only ground programs are processed. This, however,
is a severe restriction in practice since almost all rules have variables.
We have developed a prototype system, called SLG, for logical query answering. SLG supports
goal-oriented query evaluation under the well-founded semantics of normal logic programs, or
more generally, the alternating fixpoint logic of general logic programs. The latter is an important
extension since a standard translation of general logic programs into normal logic programs does
not always preserve the semantics. In either case, SLG has a polynomial time data complexity
for function-free programs. If a query has undefined instances, SLG produces a residual program
besides true and false instances of the query. The residual program can be further processed to
compute its (two-valued) stable models. SLG is available by anonymous ftp from seas.smu.edu
or cs.sunysb.edu.
By applying stable model computation to only the residual program of a query, SLG has two
advantages. First, answers of a query that are true in the well-founded semantics can always
be derived within polynomial time in the size of a database. They can be computed even more
efficiently if a program satisfies certain properties such as stratification. More importantly, non-ground
programs and queries can be handled. Second, the residual program of a query is often
much smaller than the original program. The approach in SLG restricts the potentially expensive
(two-valued) stable model computation to a small portion of the entire program. Furthermore,
three-valued stable models are partially supported since the original program may not have a
(two-valued) stable model even though the portion of the program that is relevant to a query
has (two-valued) stable models.
The main contributions of this paper are threefold. First, we describe a simple assume-and-
reduce algorithm for computing (two-valued) stable models of a finite ground program. It assumes
the truth values of only those ground atoms whose negative counterparts occur in a program.
The search space of stable models is further reduced by forward and backward propagation of
the assumed truth values of ground atoms and by reduction of the program. Second, we show
how to integrate query evaluation under the well-founded semantics with the computation of
stable models. Two aspects are noteworthy. One is that handling negative loops by delaying
not only avoids redundant derivations, but also leads to the residual program needed for stable
model computation later. The other is that the forward chaining network set up for simplifying
delayed literals in the derivation of the well-founded semantics is directly useful for stable model
computation. Finally, due to the multitude of stable models, it is not clear what answers should
be computed for a query. We introduce a versatile user interface for query evaluation with respect
to stable models.
Section 2 describes a method of stable model computation. Section 3 presents its integration
with query evaluation of the well-founded semantics. Section 4 contains some examples and
performance analysis. Section 5 compares with related work.
2 Computation of Stable Models
This section reviews the terminologies of logic programs [16] and the notion of (two-valued)
stable models by Gelfond and Lifschitz [13]. An assume-and-reduce algorithm is described for
computing (two-valued) stable models of finite ground programs.
2.1 Definition of Stable Models
An atom is of the form p(t1, ..., tn), where p is an n-ary predicate symbol and t1, ..., tn
are terms.
For an atom A, A is a positive literal and -A is a negative literal, and they are complements of
each other. A clause is of the form
A :- L1, ..., Ln,
where A, the head of the clause, is an atom, and L1, ..., Ln, the body of the clause, are
literals. A definite clause is a clause that has no negative literals in its body. A (definite) program
is a set of (definite) clauses. A ground atom (literal, clause, program) is one that is variable-free.
The Herbrand universe of a program P is the set of all ground terms that may be constructed
from the constants and function symbols appearing in P . An arbitrary constant is added if no
constant occurs in P . The Herbrand base of P , denoted by B P , is the set of all ground atoms with
predicates occurring in P whose arguments are in the Herbrand universe of P . The Herbrand
instantiation of P is the (possibly infinite) set of all ground clauses obtained by substituting
terms in the Herbrand universe for variables in clauses in P .
A set I of ground literals is consistent if for no ground atom A, both A and -A are in I.
We denote by P os(I) the set of positive literals in I and Neg(I) the set of ground atoms whose
complements are in I. A partial interpretation (or just interpretation) I is a consistent set of
ground literals. A total interpretation is an interpretation I such that Pos(I) ∪ Neg(I) = B P . A
ground literal L is true in an interpretation I if and only if L ∈ I.
Definition 2.1 ([13]) Let P be a program and I be an interpretation. The Gelfond-Lifschitz
transformation of P with respect to I, denoted by P I , is the program obtained from the Herbrand
instantiation of P by deleting
• each clause that has a negative literal -B in its body with B ∈ I, and
• all negative literals -B in the bodies of the remaining clauses with -B ∈ I.
If I is a total interpretation, the resulting program P I is a definite program. According to [1],
every definite program P has a unique minimal model, which we will denote by M(P ).
Definition 2.2 ([13]) Let I be a total interpretation, and P be a logic program. I is a (two-
valued) stable model of P if and only if I coincides with M(P I ).
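To make Definitions 2.1 and 2.2 concrete, here is a small illustrative Python sketch (our own encoding, not part of SLG): a ground program is a list of clauses (head, body), a body literal is ('+', atom) or ('-', atom), and a total interpretation is given by its set of true atoms. The demo uses the ground program of Example 2.1 below.

def gl_transform(program, true_atoms):
    # Gelfond-Lifschitz transformation with respect to a total interpretation.
    reduced = []
    for head, body in program:
        if any(sign == '-' and atom in true_atoms for sign, atom in body):
            continue                                   # drop clause: a negative literal is false
        reduced.append((head, [atom for sign, atom in body if sign == '+']))
    return reduced                                     # a definite ground program

def minimal_model(definite_program):
    # Least model of a definite ground program, by naive fixpoint iteration.
    model, changed = set(), True
    while changed:
        changed = False
        for head, body in definite_program:
            if head not in model and all(atom in model for atom in body):
                model.add(head)
                changed = True
    return model

def is_stable(program, true_atoms):
    return minimal_model(gl_transform(program, true_atoms)) == set(true_atoms)

# The ground program of Example 2.1:
P = [('teach(mary,cse5381)', [('-', 'teach(john,cse5381)')]),
     ('teach(john,cse5381)', [('-', 'teach(mary,cse5381)')]),
     ('covered(cse5381)',    [('+', 'teach(mary,cse5381)')]),
     ('covered(cse5381)',    [('+', 'teach(john,cse5381)')])]
print(is_stable(P, {'teach(mary,cse5381)', 'covered(cse5381)'}))                          # True
print(is_stable(P, {'teach(mary,cse5381)', 'teach(john,cse5381)', 'covered(cse5381)'}))   # False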
2.2 Derivation of Stable Models
For the derivation of stable models, we consider finite ground programs. The application for
goal-oriented query evaluation of non-ground logic programs will be discussed in Section 3.
Let P be a finite ground program. We can restrict interpretations to ground atoms that occur
in P as every ground atom that does not occur in P is definitely false. According to Definition
2.2, stable models of P can be computed by enumerating every possible total interpretation I
and checking if I coincides with the unique minimal model M(P I ) of P I . The number of possible
total interpretations is obviously exponential in the size of the Herbrand base. Fortunately there
are often mutual dependencies among ground atoms in a program, which can be used to reduce
the search space and to speed up the computation of stable models substantially.
2.2.1 Assuming Negative Literals Only
Our first observation is that we have to guess only the truth values of ground atoms A such that
-A occurs in P .
Example 2.1 Consider the following ground program:
teach(mary,cse5381) :- -teach(john,cse5381).
teach(john,cse5381) :- -teach(mary,cse5381).
covered(cse5381) :- teach(mary,cse5381).
covered(cse5381) :- teach(john,cse5381).
Two negative literals, -teach(mary,cse5381) and -teach(john,cse5381), occur in the program.
As soon as their truth values are determined, the truth value of covered(cse5381) can be derived.
There is no need to make assumptions about the truth value of covered(cse5381). This reduces
the search space for stable models. 2
Let N (P ) denote the set of ground atoms A such that -A occurs in P .
Lemma 2.1 Let P be a ground logic program, and I be a total interpretation. Then I is a
stable model of P if and only if, for some interpretation J where Pos(J) ∪ Neg(J) = N(P) and J ⊆ I,
I coincides with M(P J ).
Proof: Let I be a total interpretation and let J be the restriction of I to N(P). Then
P I = P J , and the lemma follows from the
definition of stable models. 2
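A naive generate-and-test procedure built directly on Lemma 2.1 might look as follows; this sketch is illustrative only and reuses the gl_transform and minimal_model helpers from the sketch given after Definition 2.2.

from itertools import product

def negated_atoms(program):
    # N(P): the atoms that occur negatively somewhere in P.
    return {atom for _, body in program for sign, atom in body if sign == '-'}

def naive_stable_models(program):
    n_p = sorted(negated_atoms(program))
    for bits in product([False, True], repeat=len(n_p)):
        guess = {a for a, b in zip(n_p, bits) if b}        # Pos(J) for a guess J on N(P)
        candidate = minimal_model(gl_transform(program, guess))
        # By Lemma 2.1, candidate = M(P^J) is a stable model exactly when it
        # agrees with the guess on N(P).
        if {a for a in n_p if a in candidate} == guess:
            yield candidate

Even restricted to N(P), this enumeration is exponential in the worst case; the rest of this section shows how propagation and reduction prune the search.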
2.2.2 Propagation of Assumed Truth Values
The truth values of ground atoms in N (P ) are not independent of each other either. In Example
2.1, if we assume that teach(mary,cse5381) is true, the clause for teach(john,cse5381) can be
deleted since its body is false according to the assumption. Hence teach(john,cse5381) must be
false. Similarly, if teach(mary,cse5381) is assumed to be false, teach(john,cse5381) can be derived
to be true. Therefore it is not necessary to enumerate all the four possible truth assignments for
teach(john,cse5381) and teach(mary,cse5381).
Our second observation is that the assumed truth value of a ground atom should be used to
simplify the program being considered in order to reduce the search space for truth assignments
of ground atoms in N (P ) that lead to stable models.
Let P be a finite ground program, and -A be a ground literal that occurs in P . There are
two possible choices: either A is true or -A is true. We derive two programs from P , namely PA
where P is simplified based upon the assumption that A is true and P-A where P is simplified
based upon the assumption that -A is true. The objective is to derive stable models of P from
those of PA and P-A .
The simplification of a program P based upon the assumed truth value of a literal should be
done in such a way that avoids the generation of models that are supported but not stable.
Example 2.2 Let P be the following program:
p :- p.
q :- -p.
The only stable model for P , which is also the perfect model in this case, is that q is true and p
is false. Suppose that p is assumed to be true. Based upon the assumption that p is true, we can
delete the clause for q since -p is false according to the assumption. However, the assumption
that p is true cannot be used to simplify away the positive occurrence of p in the body of the
clause for p. Otherwise, we would derive a model, fp; -qg, that is not a stable model. Therefore
the simplified program, P p , should contain one clause, p :- p. In this particular case, assuming
that p is true does not lead to any stable model since p is false in the stable model of P p , which
is inconsistent with the assumption.
On the other hand, suppose that -p is assumed to be true. The simplified program, P-p ,
should contain one fact, q. That is, the assumption that -p is true can be used to delete the
clause for p since its body is false according to the assumption. 2
Example 2.2 indicates that the assumed truth value of a ground atom can be used to delete
every clause whose body is false and every negative body literal that is true according to the
assumption, but it cannot be used to delete a positive body literal that is assumed to be true.
Definition 2.3 Let P be a ground program, and L be a ground literal. Then PL is the program
obtained from P by deleting
• every clause in P whose body contains the complement of L, and
• every occurrence of L in P if L is a negative literal.
Lemma 2.2 Let P be a ground program, A be a ground atom, and I be a total interpretation.
Then I is a stable model of P if and only if either A is true in I and I is a stable model of PA ,
or -A is true in I and I is a stable model of P-A .
Proof: Suppose that A is true in I. Then P I = (PA ) I . Therefore I is a stable model of P if and
only if I is a stable model of PA .
Now assume that -A is true in I. Compared with Gelfond-Lifschitz transformation, the
simplification of P to P-A also deletes all clauses that have a positive literal A in the body.
Therefore P I is identical to (P-A ) I , except that P I may contain some additional clauses with
positive literal A in the body. By definition, I is a stable model of P if and only if I coincides
with M(P I ). Since -A is true in I, A must be false in M(P I ). Therefore M(P I ) = M((P-A ) I ),
and so I is a stable model of P if and only if I is a stable model of P-A . 2
2.2.3 Reduction of a Program
Our third observation is that the simplification carried out by the construction of PL may determine
the truth values of other ground atoms, which should be propagated as much as possible to
reduce the program for which stable models are being sought. The propagation allows us to avoid
choice points for guessing truth values of ground atoms whose values are already determined by
previous assumptions.
Example 2.3 Let P be the following program on the left:
q :-s. q :-s.
q :-u. q.
r :-t. r :-t.
Suppose that -u is assumed to be true. The program, P-u , obtained from P , is shown above
on the right. Further propagation of known truth values of ground atoms leads to a partial
interpretation, namely {p, q, -s, -u, v}, and a much simpler program:
r :-t.
In this case, the derived truth value of u is consistent with the assumption that -u is true. 2
Propagation of previously known or assumed truth values is essentially a process of forward
chaining. The result is a partial interpretation and a reduced program.
Definition 2.4 Let P be a ground program, and U be a set of ground atoms that contains all
those occurring in P . We define (P, U) --I--> (P', U') if and
only if
• I is a non-empty interpretation, where Pos(I) is the set of all ground atoms A such that
P contains a fact A, and Neg(I) is the set of all ground atoms A in U such that there is
no clause in P with A in the head, and
• P' is obtained from P by deleting
- every clause that has a literal L in the body that is false in I, and
- every literal L in the bodies of remaining clauses such that L is true in I, and then
- every clause with A in the head where A ∈ Pos(I),
and U' = U - (Pos(I) ∪ Neg(I)).
(P, U) is reduced if there exists no interpretation I such that (P, U) --I--> (P', U').
Notice that if (P, U) is not reduced, there exist a unique non-empty interpretation I and a unique
pair (P', U') such that (P, U) --I--> (P', U'). If (P, U)
is reduced, then U must be exactly the set of all
atoms occurring in P .
In Example 2.3, let U 0
be fp; q;
is fp; q; vg,
and
contains the following two clauses:
r :-t.
and U 1 is fr; s; t; ug. With another step of reduction, we have
I reduction is possible since
reduced.
Definition 2.5 Let P be a ground program, and U(P ) be the set of all ground atoms occurring
in P . P is reduced to an interpretation I and a program P' if and only if for some n ≥ 0,
(P0, U0) --I1--> (P1, U1) --I2--> ... --In--> (Pn, Un),
where P0 = P, U0 = U(P), (Pn, Un) is reduced, Pn = P', and I = I1 ∪ ... ∪ In.
Notice that every finite ground program can be reduced to an interpretation and a (possibly
simpler) program. The following lemma shows that reduction preserves stable models.
Lemma 2.3 Let P be a finite ground program, and P be reduced to an interpretation I0 and a
program P0. Then every stable model I of P is equal to I0 ∪ J for some stable model J of P0,
and vice versa. Furthermore, if P is a definite program, then Pos(I0) = M(P).
Proof: We show that every step of reduction preserves stable models, and the lemma follows by
a simple induction.
Suppose that (Pk, Uk) --Ik+1--> (Pk+1, Uk+1). Then Pos(Ik+1) consists of all ground atoms
A where A is a fact in Pk, and Neg(Ik+1) consists of all ground atoms A ∈ Uk where there is no
clause in Pk with A in the head.
Let I be any interpretation such that Pos(I) ∪ Neg(I) = Uk, and let J be the
interpretation where Pos(J) = Pos(I) - Pos(Ik+1) and Neg(J) = Neg(I) - Neg(Ik+1).
For any interpretation I such that Pos(I) ∪ Neg(I) = Uk, I is a stable model of Pk if and
only if Ik+1 ⊆ I
and J is a stable model of Pk+1. 2
The reduction of a program with respect to a partial interpretation differs from the simplification
of a program according to the assumed truth value of a ground atom. Recall that if a
ground atom A is assumed to be true, this assumption cannot be used to delete any occurrence
of A in a program as a clause body literal. On the other hand, the reduction of a program P
with respect to a partial interpretation I is similar to the bottom-up computation embodied in
the transformation T P (I) [1].
Reduction, however, does not attempt to compute the well-founded semantics. It derives only
literals that are true or false with respect to Clark's completion of a program. For instance, the
following program
p :- q.
q :- p.
cannot be reduced further. In our framework, reduction is used in stable model computation,
which is carried out after a query is evaluated under the well-founded semantics. There is no
interleaving of computations of stable models and the well-founded semantics, in the sense that
our algorithm of stable model computation does not call any general procedure for computing
the well-founded semantics.
2.3 Assume-and-Reduce Algorithm
Figure 1 shows the assume-and-reduce algorithm for computing stable models of a finite ground
program. The algorithm is non-deterministic in the sense that certain choices have to be made
at some point. However, all stable models can be enumerated through backtracking.
Input: a finite ground program P
Output: a stable model or failure
begin
(1)    Let P be reduced to an interpretation I0 and a program P0;
(2)    DI := I0; Pgm := P0;
(3)    AI := ∅; N := N(Pgm);
(4)    while N ≠ ∅ do begin
(5)        Delete an arbitrary element, A, from N;
(6)        if A ∉ DI and -A ∉ DI then begin
(7)            Choose L to be A or -A;    /* choice point: L can be either A or -A */
(8)            AI := AI ∪ {L};
(9)            Let PgmL be reduced to an interpretation I and a program P';
(10)           DI := DI ∪ I; Pgm := P';
(11)           if AI ∪ DI is inconsistent then
(12)               return failure
(13)       end
(14)   end;
(15)   if A ∈ AI and A ∉ DI for some ground atom A then
(16)       return failure
(17)   else begin
(18)       for every A that occurs in Pgm, add -A to DI;
(19)       return AI ∪ DI as a stable model of P
(20)   end
end
Figure 1: The assume-and-reduce algorithm for computing stable models
Theorem 2.4 Let P be a finite ground program. Then an interpretation I is a stable model of
P if and only if I is returned by an execution of the assume-and-reduce algorithm.
Proof: In the algorithm, DI represents the set of ground literals that have been derived to
be true (possibly from previous assumptions), and AI represents the set of ground literals that
are assumed to be true. The algorithm explores a tree of search space for stable models in a
non-deterministic and backtracking manner, where each node can be represented by a triple
(AI; DI;P gm). It terminates for finite ground programs.
We prove that the search space explored by the assume-and-reduce algorithm is complete.
Initially, the root node of the search tree is (AI, DI, Pgm) = (∅, I0, P0), where P is reduced to
I0 and P0. By Lemma 2.3, I is a stable model of P if and only if I = I0 ∪ J and J is a stable
model of P0, i.e., I = AI ∪ DI ∪ J, where J is a stable model of Pgm.
Given a node v represented by (AI, DI, Pgm), let A be any ground atom in N(P), i.e., -A
occurs in P, such that neither A nor -A is in AI ∪ DI. Then -A is still in Pgm. There are two
choices, either A is true or -A is true. By Lemma 2.2, an interpretation J is a stable model of
Pgm if and only if either A is true in J and J is a stable model of PgmA, or -A is true in J and
J is a stable model of Pgm-A. Let PgmA (Pgm-A) be reduced to an interpretation IA (I-A)
and a program PA (P-A). Again, by Lemma 2.3, J is a stable model of PgmL, where L can be
either A or -A, if and only if J = IL ∪ JL, where JL is a stable model of PL. Then v has two
child nodes. One is (AI ∪ {A}, DI ∪ IA, PA) and the other is (AI ∪ {-A}, DI ∪ I-A, P-A). By
the arguments above, for any interpretation I, I = AI ∪ DI ∪ J for some stable model J of Pgm
if and only if, for L that can be either A or -A, L ∈ J and J = IL ∪ JL for some stable model
JL of PL, i.e., I = AI ∪ {L} ∪ DI ∪ IL ∪ JL for some stable model JL of PL.
For a leaf node v represented by (AI; DI;P gm) in the search tree for stable models, either
AI [DI is inconsistent, or for every A 2 N (P ), A or -A is in AI [DI. In the latter case, P gm
does not contain any negative literals. Furthermore P gm has no facts since it can be further
reduced otherwise. Therefore the only stable model of P gm, which is also the unique minimal
model M(P gm), is that every ground atom occurring in P gm is false. If A 2 AI and A 62 DI
for some ground atom A, -A must be true either in DI or in M(P gm), which is inconsistent
with the assumption in AI that A is true. 2
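The following self-contained Python sketch follows the assume-and-reduce strategy of Figure 1; it is our own illustrative encoding, not the SLG implementation, and all helper names are ours. A clause is (head, body) with body literals ('+', atom) or ('-', atom); interpretations are dicts mapping atoms to True/False.

def atoms_of(program):
    return {h for h, _ in program} | {a for _, body in program for _, a in body}

def simplify(program, sign, atom):
    # P_L of Definition 2.3 for the literal L = (sign, atom).
    comp = ('-', atom) if sign == '+' else ('+', atom)
    out = []
    for head, body in program:
        if comp in body:
            continue                              # body contains the complement of L
        if sign == '-':
            body = [lit for lit in body if lit != ('-', atom)]
        out.append((head, body))
    return out

def reduce_program(program, universe):
    # Repeated reduction steps of Definitions 2.4/2.5: returns (I, P', U').
    derived = {}
    while True:
        facts = {h for h, body in program if not body}
        headed = {h for h, _ in program}
        step = {a: True for a in facts}
        step.update({a: False for a in universe if a not in headed})
        if not step:
            return derived, program, universe
        derived.update(step)
        universe = universe - set(step)
        new_program = []
        for head, body in program:
            if head in step:
                continue                          # head already derived true (a fact)
            if any((s == '+') != step[a] for s, a in body if a in step):
                continue                          # some body literal is false
            new_program.append((head, [(s, a) for s, a in body if a not in step]))
        program = new_program

def stable_models(program):
    # Enumerate the (two-valued) stable models of a finite ground program.
    di, pgm, _ = reduce_program(program, atoms_of(program))
    yield from _search({}, di, pgm)

def _search(assumed, derived, pgm):
    undecided = [a for _, body in pgm for s, a in body if s == '-']
    if not undecided:
        # pgm is definite and fact-free: every atom still occurring in it is false.
        final = dict(derived)
        for a in atoms_of(pgm):
            final.setdefault(a, False)
        if all(final.get(a, False) == v for a, v in assumed.items()):
            yield {a for a, v in final.items() if v}
        return
    atom = undecided[0]
    for value in (True, False):                   # the choice point; backtracking via the loop
        sign = '+' if value else '-'
        new_assumed = dict(assumed); new_assumed[atom] = value
        simplified = simplify(pgm, sign, atom)
        i, new_pgm, _ = reduce_program(simplified, atoms_of(simplified))
        new_derived = dict(derived); new_derived.update(i)
        if all(new_derived.get(a, v) == v for a, v in new_assumed.items()):
            yield from _search(new_assumed, new_derived, new_pgm)

# A two-clause program with two stable models:
residual = [('p', [('-', 'q')]), ('q', [('-', 'p')])]
print(list(stable_models(residual)))              # e.g. [{'q'}, {'p'}]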
2.4 Backward Propagation of Assumed Truth Values
According to the definition of stable models, the assumed truth values of ground atoms should
coincide with the derived truth values. The assume-and-reduce algorithm uses forward chaining
to propagate the assumed truth values of ground atoms. This propagation may derive the truth
values of more ground atoms so that there is no need to lay down a choice point for guessing
their truth values.
SLG incorporates backward propagation of assumed truth values of ground atoms under certain
conditions. Unlike forward propagation, which computes the derived truth values of ground
atoms, backward propagation of assumptions may lead to more assumptions, thus reducing the
search space for stable models.
Example 2.4 One application of stable models is to provide a semantics for programs with
choice constructs [26]. Suppose that three students are taking the AI class.
take(sean, ai). take(irene, ai). take(chris, ai).
The following ground program chooses exactly one student taking the AI class:
choose(sean,ai) :- -diff(sean,ai).
diff(sean,ai) :- choose(irene,ai).
diff(sean,ai) :- choose(chris,ai).
choose(irene,ai) :- -diff(irene,ai).
diff(irene,ai) :- choose(sean, ai).
diff(irene,ai) :- choose(chris,ai).
choose(chris,ai) :- -diff(chris,ai).
diff(chris,ai) :- choose(sean,ai).
diff(chris,ai) :- choose(irene,ai).
There are three ground negative literals in the program. Suppose that diff(sean,ai) is assumed to
be false. By backward propagation, we can infer that both choose(irene,ai) and choose(chris,ai)
must be assumed to be false too. All three assumptions can be used to simplify the program to
the following:
choose(sean,ai).
diff(irene,ai) :- choose(sean,ai).
diff(chris,ai) :- choose(sean,ai).
A reduction of the program derives that diff(irene,ai) and diff(chris,ai) are true. 2
Let P be a finite ground program. SLG supports backward propagation under two situations:
• If a ground atom A is assumed to be true, and P contains exactly one clause with A in the
head, of the form
A :- L1, ..., Ln,
then every Li (1 ≤ i ≤ n) is assumed to be true;
• If a ground atom A is assumed to be false, then for every clause in P with A in the head
and a single literal L in the body, L is assumed to be false.
The backward propagation may be repeated several times. The correctness of backward propagation
is obvious according to the definition of stable models. The assume-and-reduce algorithm
can be modified to include backward propagation, the details of which are omitted.
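An illustrative Python sketch of these two backward-propagation rules (again our own encoding, with clauses as (head, body) pairs, body literals ('+', atom) / ('-', atom), and assumptions mapping atoms to an assumed truth value):

def backward_propagate(program, assumptions):
    # Repeatedly apply the two rules above until nothing new is assumed.
    assumed = dict(assumptions)
    changed = True
    while changed:
        changed = False
        for atom, value in list(assumed.items()):
            bodies = [body for head, body in program if head == atom]
            if value and len(bodies) == 1:
                # Rule 1: A assumed true and defined by exactly one clause,
                # so every body literal of that clause is assumed true.
                new = {a: (s == '+') for s, a in bodies[0]}
            elif not value:
                # Rule 2: A assumed false, so in every clause for A with a
                # single body literal, that literal is assumed false.
                new = {a: (s == '-') for body in bodies if len(body) == 1
                       for s, a in body}
            else:
                new = {}
            for a, v in new.items():
                if a not in assumed:    # a conflict here would mean no stable model
                    assumed[a] = v
                    changed = True
    return assumed

# In Example 2.4, assuming diff(sean,ai) false forces choose(irene,ai) and
# choose(chris,ai) to be assumed false as well.  A tiny demonstration:
P = [('p', [('-', 'q')]), ('q', [('+', 'r'), ('-', 's')])]
print(backward_propagate(P, {'q': True}))     # {'q': True, 'r': True, 's': False}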
3 Integration with Computation of the Well-Founded Semantics
The assume-and-reduce algorithm deals with only finite ground programs and computes (two-
valued) stable models. This section shows how to integrate computations of the well-founded
semantics and stable models to provide query evaluation of non-ground programs for practical
applications.
It is known that for logic programs without loops through negation, e.g., modularly stratified
programs [25], the well-founded partial model is total and coincides with the unique stable
model of the program. In that case, computation of the well-founded semantics is sufficient.
For programs with literals involved in loops through negation, the well-founded partial model
is in general three-valued. We discuss how negative loops should be handled in order to facilitate
computation of stable models and to ensure the polynomial time data complexity of query
evaluation under the well-founded semantics at the same time.
3.1 Handling Negative Loops
Negative loops occur due to recursion through negation. There are two main issues, namely how
to detect negative loops and how to treat literals that are involved in negative loops so that
query evaluation can proceed.
Example 3.1 Consider the following program [31]:
win(X) :- move(X,Y), -win(Y).
move(a,b). move(b,a). move(b,c). move(c,d).
Figure 2 shows a portion of the SLDNF tree for the query win(a), which contains an infinite
branch through negation. 2
?- win(a).
?- move(a,X), ~win(X).
?- ~win(b).
?- win(b).
?- move(b,Y), ~win(Y).
?- ~win(a). ?- ~win(c).
?- win(a). fail
Figure 2: SLDNF tree for win(a)
A simple mechanism for negative loop detection is to associate with each call a negative
context. This approach has been adopted in Well! [2] and in XOLDTNF resolution [6]. Consider
a branch through negation in an SLDNF tree. The negative context of a call on the branch
is the set of ground negative literals encountered along the path from the root to the call. In
Figure 2, the initial call win(a) has an empty negative context. The negative context for win(b)
is {-win(b)}, and the negative context for the second call of win(a) is {-win(b), -win(a)}.
In the tree for the second call win(a), when -win(b) is selected, it is in the negative context
of win(a), indicating that there is a negative loop. The approach in XOLDTNF resolution [6] is
to treat the selected ground negative literal -win(b) as undefined. In general, an answer consists
of not only an instance of a query atom, but also a truth value indicating whether the answer
is true or undefined. Figure 3 shows a portion of the XOLDTNF forest for query win(a). Each
node is labeled by a pseudo-clause. The head captures bindings of relevant variables that have
been accumulated and the truth value, and the body contains literals that are yet to be solved.
call: ({~win(b),~win(c)}, win(c))
(win(c),t) :- win(c).
(win(c),t) :- move(c,Z), ~win(Z).
(win(c),t) :- ~win(d).
(win(c),t).
(win(a),t) :- win(a).
(win(a),t) :- move(a,X), ~win(X).
call: ({}, win(a))
(win(a),t) :- ~win(b).
(win(b),t) :- win(b).
(win(b),t) :- move(b,Y), ~win(Y).
call: ({~win(b)}, win(b))
(win(b),t) :- ~win(a).
(win(a),u).
(win(b),t) :- ~win(c).
fail
(win(b),u).
call: ({~win(b),~win(a)}, win(a))
(win(a),t) :- win(a).
(win(a),t) :- move(a,X), ~win(X).
(win(a),t) :- ~win(b).
(win(a),u)
Figure 3: XOLDTNF forest for win(a)
The detection of negative loops using negative contexts is easy to implement in goal-oriented
query evaluation. However, associating with each call a negative context prevents the full sharing
of answers of a call across different negative contexts. Examples can be constructed in which a
subgoal may be evaluated in an exponential number of negative contexts [7], even though the
well-founded semantics is known to have a polynomial time data complexity.
Treating negative literals involved in negative loops as undefined is appropriate for query
evaluation under the well-founded semantics. But it destroys the mutual dependencies among
the negative literals. If a query turns out to be undefined in the well-founded semantics, there
is little information that can be used for computation of stable models.
In [7], we developed a framework called SLG resolution. It detects potential negative loops
by maintaining dependency information among calls incrementally. Each call (up to renaming
of variable) is evaluated at most once, allowing the full sharing of answers. When a potential
negative loop is detected, negative literals that are involved are delayed so that other literals in
the body of a clause can be evaluated. These delayed literals may be simplified later if they
become known to be true or false, or they may be returned as part of a conditional answer
otherwise.
Figure 4 shows a portion of the SLG forest for query win(a). A vertical bar (|) separates
delayed literals on the left and the remaining literals on the right in the body of a clause. Notice
win(a) :- win(a).
win(a) :- move(a,X), ~win(X).
win(a) :- ~win(b).
win(a) :- ~win(b) | .
fail
Figure 4: SLG forest for win(a)
that there is a conditional answer for win(a), namely win(a) :- ~win(b), and similarly for
win(b), win(b) :- ~win(a). These conditional answers constitute a residual program, to which
the assume-and-reduce algorithm can be applied to derive stable models and the answers of the
original query in each stable model.
3.2 Simplification of Delayed Literals
Given an arbitrary but fixed computation rule, there are programs in which ground negative
literals must be delayed before their truth or falsity is known.
Example 3.2 Assume that a left-most computation rule is used and that s is to be solved with
respect to the following program:
s :- -p, -q, -r.
p :- -s, -r, q.
q :- -s, -p, r.
r :- -s, -q, p.
The first negative loop involves s and p, which is processed by delaying -p and -s. In the clause
of s, the next body literal -q is then selected, which leads to the second negative loop involving
s and q. Delaying is applied again so that query evaluation can proceed. The computation rule
selects the next body literal, namely -r, in the clause of s, whose evaluation results in the third
negative loop involving s and r. The literal -r in the clause of s and the literal -s in the clause
of r are delayed. At this point, the clause of s does not have any body literals that are not
delayed. Thus we derive a conditional answer for s, namely s :- -p, -q, -r. The evaluation of p,
q, and r continues, leading to a negative loop involving p, q, and r. The corresponding negative
literals are delayed. Computation continues, and a positive loop is detected among p, q, and r.
They become completely evaluated without any answers, and so they are false. The falsity of p,
q, and r is then propagated to derive a true answer for s. 2
To facilitate the simplification of delayed literals, SLG sets up forward chaining links among
calls when a conditional answer is derived. When the truth value of a ground atom A becomes
known, all conditional answers with delayed literals A or -A are simplified.
Not all delayed literals can be simplified as the well-founded semantics is in general three-
valued. If a query has undefined instances under the well-founded semantics, its evaluation
produces a residual program consisting of all relevant conditional answers, as well as forward
chaining links for simplification of delayed literals.
For the program and query win(a) in Example 3.1, Figure 5 shows the residual program for
win(a) that consists of two conditional answers and the corresponding forward chaining links. A
link from win(b) to win(a) indicates that if win(b) is true or false, some conditional answer of
win(a) can be simplified. These forward chaining links are used directly in SLG for computation
of stable models, for propagation of assumed truth values of ground atoms and for reduction of
the residual program.
Figure 5: Residual program and forward chaining links for win(a)
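One way to picture these forward chaining links is as an index from each atom to the conditional answers whose delayed literals mention it; the sketch below (our own data structures, not SLG's internals) shows how a newly decided truth value is propagated through such links.

from collections import defaultdict

# Conditional answers of the residual program: answer atom -> delayed bodies.
conditional = {'win(a)': [[('-', 'win(b)')]],
               'win(b)': [[('-', 'win(a)')]]}

# Forward chaining links: atom -> answers to revisit once its value is known.
links = defaultdict(set)
for answer, bodies in conditional.items():
    for body in bodies:
        for _, atom in body:
            links[atom].add(answer)

def propagate(atom, value, derived_facts):
    # Simplify conditional answers once `atom` is known to be true or false.
    for answer in links[atom]:
        remaining = []
        for body in conditional[answer]:
            if any(a == atom and (s == '+') != value for s, a in body):
                continue                              # a delayed literal became false
            body = [(s, a) for s, a in body if a != atom]
            if body:
                remaining.append(body)
            else:
                derived_facts.add(answer)             # all delayed literals resolved
        conditional[answer] = remaining

facts = set()
propagate('win(b)', False, facts)     # if win(b) were found false ...
print(facts)                          # ... win(a) would become an unconditional answer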
3.3 General Logic Programs
Van Gelder has generalized the well-founded semantics of normal logic programs to the alternating
fixpoint logic of general logic programs [30]. The SLG resolution developed in [7] has been
extended for goal-oriented query evaluation of general logic programs [4]. Similarly it produces
a residual program and corresponding forward chaining links for queries that have undefined
instances under the alternating fixpoint logic.
Example 3.3 The following program describes a coloring of nodes in a graph in such a way
that two adjacent nodes cannot be both colored.
color(X) :- ∀ Y. (-edge(X,Y) ∨ -color(Y)).
edge(a,b). edge(b,a). edge(b,c). edge(c,d).
Consider the query color(a). By resolving (color(a) :- color(a)) with the program clause, we
obtain a new clause for color(a):
color(a) :- ∀ Y. (-edge(a,Y) ∨ -color(Y)).
The literal, -edge(a,Y), is selected. The corresponding positive subgoal, edge(a,Y), is evaluated
completely and has one answer, namely edge(a,b). Hence -edge(a,Y) is true for all Y that is
distinct from b. We return the answer of edge(a,Y) to the clause for color(a). By resolving the
universal disjunction
∀ Y. (-edge(a,Y) ; -color(Y))
with the answer edge(a,b), we derive -color(b). The clause for color(a) is replaced
by the following clause:
color(a) :- -color(b).
The literal, -color(b), is then selected. A new subgoal, color(b), is created and evaluated. Figure
6 shows the SLG forest resulting from the evaluation of color(a), where ∀ is represented by all
and disjunction is represented by ';'. Notice that color(d) is derived to be true and color(c)
is derived to be false, while both color(a) and color(b) are undefined. The residual program
consisting of
color(a) :- -color(b).
can be further processed for stable model computation. 2
color(a) :- ~color(b) | .
color(c) :- color(c).
color(c) :- all(Y).(~edge(c,Y); ~color(Y)).
color(c) :- ~color(d).
fail
color(d) :- color(d).
color(d) :- all(Y).(~edge(d,Y); ~color(Y)).
Figure
6: SLG forest for color(a)
3.4 Stable Models: Two-Valued versus Three-Valued
Let P be a program and Q be a query. SLG first evaluates Q with respect to the well-founded
semantics of P . The result includes a set of true and false instances of Q, and in general,
a residual program P und(Q) for undefined instances of Q. (Two-valued) stable models for the
residual program, P und(Q) , can be derived by using the assume-and-reduce algorithm. However, a
(two-valued) stable model of P und(Q) may or may not be extended to a (two-valued) stable model
of P .
Example 3.4 Consider the following simple program
a :- -b.
b :- -a.
and query b. The residual program for b contains two clauses:
b :- -a.
a :- -b.
There are two stable models for the residual program, one in which b is true and the other in
which a is true. However, the one in which b is true cannot be extended to a stable model of
the original program, even though there is a three-valued stable model of the original program
in which b is true. 2
In general, answers of a query computed by SLG are answers with respect to three-valued
stable models of a given program P . SLG does not enumerate all possible three-valued stable
models of P .
To compute (two-valued) stable models of a program P in SLG, one may define a new
predicate that calls all predicates in P with distinct variables as arguments. By evaluating the
new predicate (with distinct variables as arguments), SLG derives a residual program P und for
all undefined atoms of the original program. All (two-valued) stable models of P can be derived
by applying the assume-and-reduce algorithm to P und .
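What a (two-valued) stable model of such a small ground residual program is can be illustrated with a brute-force Gelfond-Lifschitz check in Python. This is only an illustration of the definition, not the assume-and-reduce algorithm or SLG's data structures; the clause representation (head, positive body, negative body) is invented for the sketch, and the sample program is the residual program of Example 3.4.

from itertools import chain, combinations

# Ground normal program as (head, positive_body, negative_body) triples.
# Here: the residual program  b :- -a.   a :- -b.
program = [('b', [], ['a']),
           ('a', [], ['b'])]

atoms = sorted({h for h, _, _ in program}
               | {a for _, p, n in program for a in chain(p, n)})

def minimal_model(definite):
    # Least model of a definite (negation-free) program, by iteration.
    model = set()
    changed = True
    while changed:
        changed = False
        for head, pos in definite:
            if head not in model and all(a in model for a in pos):
                model.add(head)
                changed = True
    return model

def is_stable(candidate):
    # Gelfond-Lifschitz check: reduce w.r.t. the candidate, compare least models.
    reduct = [(h, p) for h, p, n in program
              if all(a not in candidate for a in n)]
    return minimal_model(reduct) == candidate

subsets = chain.from_iterable(combinations(atoms, k) for k in range(len(atoms) + 1))
stable = [set(s) for s in subsets if is_stable(set(s))]
print(stable)    # two stable models: [{'a'}, {'b'}]

Enumerating candidate interpretations this way is exponential; the point of assume-and-reduce in SLG is to restrict assumptions to atoms whose complements occur in the residual program and to simplify the program after each assumption.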
4 Integration with Prolog
All normal logic programs are obviously syntactically correct Prolog programs, even though
their execution under Prolog's strategy may not terminate. One of the objectives of the SLG
system is to integrate query evaluation with ordinary Prolog execution so that existing Prolog
environments can be readily used for knowledge-based applications. This section describes the
interface of SLG from a user's point of view.
4.1 Syntax
The syntax of Prolog is used for input programs, with additional directives for predicate declara-
tions. Predicates can be declared as tabled or prolog. Tabled predicates are evaluated using SLG
resolution. Prolog predicates are solved by calling Prolog directly. Calls to tabled predicates are
remembered in a table with their corresponding answers. Future calls to tabled predicates that
are renaming variants of previous calls are not re-evaluated, but will be satisfied using answers
that are computed as a result of the previous calls.
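The effect of tabling on repeated calls can be pictured with a simple memo table. The Python snippet below is only an analogy, not the SLG implementation; the "edges" relation and the move() function are made up for illustration.

# Illustrative memo table: repeated (variant) calls reuse stored answers
# instead of being re-evaluated.

table = {}          # call -> list of answers
evaluations = 0     # counts how often a call is actually evaluated

edges = {'a': ['a', 'b'], 'b': ['a', 'c']}

def move(x):
    # Tabled "predicate": all y with move(x, y).
    global evaluations
    if x in table:                 # variant of a previous call
        return table[x]
    evaluations += 1
    table[x] = list(edges.get(x, []))
    return table[x]

print(move('a'), move('a'), move('b'))
print('evaluations:', evaluations)   # 2, not 3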
It is actually legal for a tabled predicate to call a Prolog predicate which in turn calls a tabled
predicate. However, the two invocations of tabled predicates will not share the same table, and
Prolog's infinite loops will not be terminated.
There are also certain constraints on the form of clauses that can be used to define tabled
predicates. In particular, the body of a clause for a tabled predicate should be a conjunction of
literals. Cuts are allowed in the body before any occurrence of a tabled predicate. Common uses
of cuts for selection of clauses according to certain guard conditions are supported for tabled
predicates.
Clauses with a universal disjunction of literals in the body are allowed. They are indicated
by an operator /\Gamma. For the program in Example 3.3, it will be written as follows:
where all variables that occur only in the body are universally quantified in the body, and
disjunction is indicated by ';'. Standard safety conditions are assumed [32]. To determine if
the body of a clause is safe to evaluate, we convert the universal quantification into existential
quantification:
The notion of safety requires that all free variables in the body must be bound when the negation
in the body is evaluated. For the conjunction inside the existential quantification, all variables
in a negative literal must also occur in a positive literal. Accordingly we require that for a
clause with a universal disjunction of literals in the body, the head must be ground when the
clause is used, and all variables that occur in positive literals in the body also occur in the head
or in negative literals in the body.
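The stated variable conditions can be written down as a small check over one clause. The representation below (argument lists for the head and for the positive and negative literals of the universal disjunction) and the function name are invented for illustration only.

# Check the stated safety conditions for a clause whose body is a
# universal disjunction of literals.  Variables are strings starting
# with an upper-case letter.

def variables(term_args):
    return {a for a in term_args if a[:1].isupper()}

def safe_universal_clause(head_args, pos_body, neg_body):
    # head_args: arguments of the head (must be ground when used);
    # pos_body / neg_body: argument lists of the positive and negative
    # literals in the universal disjunction.
    head_vars = variables(head_args)
    pos_vars = set().union(*[variables(a) for a in pos_body]) if pos_body else set()
    neg_vars = set().union(*[variables(a) for a in neg_body]) if neg_body else set()
    head_ground = not head_vars
    # every variable of a positive body literal occurs in the head or in a
    # negative literal of the body
    covered = pos_vars <= (head_vars | neg_vars)
    return head_ground and covered

# color(a) :- all Y. (-edge(a,Y) ; -color(Y)): the head is ground and the
# body has only negative literals, so the clause is safe to evaluate.
print(safe_universal_clause(['a'], [], [['a', 'Y'], ['Y']]))   # True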
4.2 Query Interface
Tabled predicates are evaluated with respect to the well-founded semantics by default. Both true
and conditional answers can be returned.
Example 4.1 The following is the winning program in Prolog syntax with a tabling directive
of SLG:
tabled win/1.
move(a,a). move(a,b). move(b,a). move(b,c).
By the initial default, all predicates are Prolog predicates unless declared otherwise. The default
can be changed to tabled by users if needed. The Prolog interface is also used for queries. In the
following, the first query asks for true answers under the well-founded semantics, and the second
returns also conditional answers, where each condition is a list of delayed literals.
By applying the assume-and-reduce algorithm to the residual program produced by the computation
of the well-founded semantics, SLG derives answers of queries under (two-valued) stable
models of the residual program (or three-valued stable models of the original program). (See
Section 3.4 for more discussions.) In general, there may be multiple stable models of the residual
program, and answers of queries have to be qualified by the corresponding stable model. SLG
provides a versatile interface for query evaluation under stable models of the residual program.
It includes the following predicates:
- st(Call,PSM) (or stnot(Call,PSM)): It succeeds if Call is a ground atom and there is a
stable model PSM in which Call is true (or false);
- stall(Call,Anss,PSM): It computes a stable model PSM and collects all answers of Call
in a list Anss;
- stselect(Call,PSM0,Anss,PSM): It is similar to stall/3, except that it computes only
those stable models in which all ground literals in PSM0 are true. This allows the user to
select only those stable models that satisfy certain conditions.
Alternative stable models of the residual program and the corresponding answers of Call are
returned upon backtracking.
Example 4.2 The following is a program that selects exactly one student for each course [26]:
tabled choose/2, diff/2.
take(sean,ai). take(irene,ai). take(chris,ai).
take(brad,db). take(irene,db). take(jenny,db).
same(X,X).
The query below selects those stable models in which choose(sean,ai) and \+choose(irene,db)
are true:
stselect(choose(_,_),[choose(sean,ai),\+choose(irene,db)],Anss,PSM).
5 Related Work and Experimental Results
SLG seems to be the first work that provides integrated query evaluation under various semantics,
including the well-founded semantics and stable models of normal logic program, the alternating
fixpoint logic of general logic programs, and SLDNF resolution in Prolog execution. The combination
of Prolog's programming environment and SLG's query processing capabilities makes it
easier to develop knowledge-based applications.
The delaying mechanism for handling negative loops in SLG has two important implications.
First, it avoids redundant derivations in the computation of the well-founded semantics as delayed
literals are simplified away as needed using forward chaining links. Second, it allows SLG to
produce a residual program for undefined instances of a query, which can be used directly for
stable model computation. Most existing techniques for query evaluation under the well-founded
semantics replace looping negative literals with an undefined truth value [2, 3, 6], or use
the alternating fixpoint method to compute possibly true or false facts [28]. In either case, little
information is saved for later computation of stable models.
Goal-oriented query evaluation with respect to stable models was studied by Dung in an
abductive framework [9]. It is a refinement of Eshghi and Kowalski's abductive procedure [11].
A ground negative literal can be assumed to be true if it does not lead to any inconsistency. It
is not clear how to specialize the abductive procedure to compute only answers that are valid in
the well-founded semantics. Pereira et al. [19] developed derivation procedures for goal-oriented
evaluation of ground programs under the well-founded semantics or stable models.
A bottom-up procedure, called backtracking fixpoint, was developed by Sacca and Zaniolo [26],
which non-deterministically constructs a stable model if one exists. In [15], stable models are
characterized by a transformation of normal logic programs into semantically equivalent positive
disjunctive programs, with integrity constraints in the denial form requiring, for each atom B, that B and the
new atom introduced for the negation of B do not both hold. Stable models are constructed using the model
generation theorem prover (MGTP), which is a bottom-up forward chaining system. Starting with
the set containing the empty interpretation, MGTP either expands an interpretation according to
a disjunctive clause or discards an interpretation if it violates some integrity constraints. Other
methods that construct all stable models simultaneously include [10, 20, 29].
The work most closely related to ours is the branch-and-bound method by Subrahmanian
et al. [29]. Their approach first computes Fitting's Kripke-Kleene semantics and at the same
time "compacts" the program by deleting parts of the program. The program is then processed
and further compacted by an alternating fixpoint procedure that computes the well-founded
semantics. The resulting program is used for computation of stable models using a branch-and-
bound method.
The branch-and-bound method and SLG are similar in the sense that both assume the truth
values of some atoms and compact or simplify the program as computation proceeds. However,
there are several major differences. First, the branch-and-bound method in [29] computes and
stores all stable models simultaneously. As the number of stable models can be exponential,
storing all stable models at the same time may require a substantial amount of memory. SLG,
on the other hand, computes alternative stable models through backtracking. Second, the branch-
and-bound method interleaves the assumption of the truth value of an atom with the computation
of the well-founded semantics. After the truth value of an atom is assumed, the resulting program
is processed with respect to the well-founded semantics. SLG only attempts to reduce the
program in such a way that ground atoms that are true or false in Clark's completion are
derived, which is simpler than a full-fledged computation of the well-founded semantics. Third,
the branch-and-bound method is intelligent in choosing the atom whose truth value to assume.
It selects an atom in a leaf strongly connected component according
to the dependency graph. SLG uses a very simple criterion, namely only those atoms whose
complements occur in a program can be assumed. Finally, SLG integrates query evaluation with
ordinary Prolog execution and accepts programs with variables, while the method in [29] assumes
a finite ground program.
To get a rough idea how SLG performs, we took two benchmark programs reported in [29]
together with their timing information, and ran them using SLG. It should be pointed out that
a systematic study of benchmark programs has to be conducted before a clear picture of the
relative performance of the various systems can be obtained. The prototype compiler in [29]
was written in C running under the Unix environment on a Decstation 2100. SLG was written
in Prolog running under SICStus Prolog in the Unix environment on a Decstation 2100. The
timing information of SLG was obtained by Prolog builtin predicate statistics. All timing data
are in milliseconds.
The first program consists of the following rules:
An additional unary predicate y( ) is used to introduce constants in the program. To test the
program in SLG, we added the following rules:
The query m(X) is then evaluated by calling stall(m(X),Anss,PSM). A failure loop is used to
get all answers of the call. Table 7 shows the timing of SLG and the intelligent branch-and-bound
in [29]. The relative rate of increase in execution time in SLG seems closer to the rate of increase
of the number of stable models. The execution time of SLG falls below that of the intelligent
Number of constants 1 2 3 4 5
Number of stable models 4
Branch and bound 43 262 1413 9431 95766
Figure
7: SLG and branch-and-bound for enumerating all stable models
branch and bound when the number of constants reaches 5, probably due to the large number
of stable models that have to be stored in the latter.
The second program, also taken from [29], is as follows:
It is augmented by a unary predicate y( ) whose sole purpose is to introduce constant symbols
into the program. For SLG, we added y(X) at the beginning of the body of each rule for s(X).
The query s(X) is then evaluated by calling stall(s(X),Anss,PSM), and a failure loop is used
to check all possibilities. Table 8 shows the timing information of SLG versus the intelligent
branch and bound in [29]. In this case, there is no stable model for the program, which can be
detected as soon as the truth value of p(X), q(X), or r(X) for some X is assumed. Thus most of
the time is spent on the computation of the well-founded semantics.
Number of
Branch and bound 54 117 198 303 431 586 755 972 1203 1475
Figure
8: SLG and branch-and-bound for checking non-existence of stable models
The notion of stable models provides a declarative semantics for the choice construct in LDL
[18]. It has been shown by Greco et al. [14] that for certain classes of programs with choice,
the data complexity for computing a stable model is polynomial time. The choice construct has
been used to model a variety of applications where only one stable model is needed [14].
We tested SLG on a classical choice program [14]:
choose(X,Y) :- base(X,Y), choice(X,Y).
It is translated into a normal logic program:
choose(X,Y) :- base(X,Y), -diffchoice(X,Y).
diffchoice(X,Y) :- choose(X,Z), -same(Y,Z).
same(X,X).
The base relation contains a set of facts of the form:
base(i,a). base(i, b). base(i, c). base(i, d).
where i ranges from 1 to N, and N is used as a parameter. The query choose(X,Y) is evaluated
by calling stall(choose(X,Y),Anss,PSM). We measured the time (on a Decstation 2100) for
computing the first solution for programs of different sizes by varying N from 2 to 10. Table 9
shows the timing of SLG for different values of N. The execution time of SLG seems polynomial
in the size of the base relation.
Figure
9: SLG for computing the first stable model
SLG is currently implemented as a Prolog meta interpreter [8], and therefore carries significant
overhead. A compiler implementation of SLG by extending the Warren Abstract Machine is being
carried out in the XSB project led by the second author [27]. XSB currently handles modularly
stratified programs. A preliminary performance analysis shows that XSB is over an order of
magnitude faster than SLG [5].
6 Conclusion
We have presented an assume-and-reduce algorithm for computing stable models and its integration
in SLG with goal-oriented query evaluation under the well-founded semantics, or more
generally the alternating fixpoint logic of general logic programs. The synergism exemplified
by SLG between Prolog on the one hand and deductive query processing and nonmonotonic
reasoning on the other offers an ideal environment for developing knowledge-based applications.
--R
Contributions to the theory of logic programming.
Tabulated resolution for well founded semantics.
Query evaluation of alternating fixpoint logic.
Efficient top-down computation of queries under the well-founded semantics
Query evaluation under the well founded semantics.
The SLG System
Negation as hypotheses: An abductive foundation for logic programming.
Computing stable models by using the ATMS.
Abduction compared with negation by failure.
On stratified autoepistemic theories.
The stable model semantics for logic programming.
Greedy by choice.
Transforming abductive logic programs to disjunctive programs.
Foundations of Logic Programming.
Autoepistemic logic.
A Logical Language for Data and Knowledge Bases.
Derivation procedures for extended stable models.
A truth maintenance system based on stable models.
Every logic program has a natural stratification and an iterated least fixed point model.
The well-founded semantics coincides with the three-valued stable semantics
Controlling the search in bottom-up evaluation
A procedural semantics for well-founded negation in logic programs
The Semantics of Deductive Databases.
Stable models and non-determinism for logic programs with negation
The XSB Programmers Manual
The alternating fixpoint of logic programs with negation.
The well-founded semantics for general logic programs
Safety and translation of relational calculus queries.
--TR
--CTR
Chris Giannella , John Schlipf, An empirical study of the 4-valued Kripke-Kleene and 4-valued well-founded semantics in random propositional logic programs, Annals of Mathematics and Artificial Intelligence, v.25 n.3-4, p.275-309, 1999
Patrik Simons , Ilkka Niemelä , Timo Soininen, Extending and implementing the stable model semantics, Artificial Intelligence, v.138 n.1-2, p.181-234, June 2002
Ilkka Niemelä, Logic programs with stable model semantics as a constraint programming paradigm, Annals of Mathematics and Artificial Intelligence, v.25 n.3-4, p.241-273, 1999
Thomas Eiter , Wolfgang Faber , Nicola Leone , Gerald Pfeifer, Declarative problem-solving using the DLV system, Logic-based artificial intelligence, Kluwer Academic Publishers, Norwell, MA, 2000
P. A. Bonatti, Resolution for Skeptical Stable Model Semantics, Journal of Automated Reasoning, v.27 n.4, p.391-421, November 2001
Francesco Calimeri , Wolfgang Faber , Gerald Pfeifer , Nicola Leone, Pruning Operators for Disjunctive Logic Programming Systems, Fundamenta Informaticae, v.71 n.2,3, p.183-214, August 2006
Weidong Chen , David S. Warren, Tabled evaluation with delaying for general logic programs, Journal of the ACM (JACM), v.43 n.1, p.20-74, Jan. 1996
Francesco Calimeri , Giovambattista Ianni, Template programs for Disjunctive Logic Programming: An operational semantics, AI Communications, v.19 n.3, p.193-206, August 2006
Marcello Balduccini , Enrico Pontelli , Omar Elkhatib , Hung Le, Issues in parallel execution of non-monotonic reasoning systems, Parallel Computing, v.31 n.6, p.608-647, June 2005
François Bry , Adnan Yahya, Positive Unit Hyperresolution Tableaux and Their Application to Minimal Model Generation, Journal of Automated Reasoning, v.25 n.1, p.35-82, July 2000
V. s. Subrahmanian, Nonmonotonic Logic Programming, IEEE Transactions on Knowledge and Data Engineering, v.11 n.1, p.143-152, January 1999
Nicola Leone , Gerald Pfeifer , Wolfgang Faber , Thomas Eiter , Georg Gottlob , Simona Perri , Francesco Scarcello, The DLV system for knowledge representation and reasoning, ACM Transactions on Computational Logic (TOCL), v.7 n.3, p.499-562, July 2006
Jrgen Dix , Ulrich Furbach , Ilkka Niemel, Nonmonotonic reasoning: towards efficient calculi and implementations, Handbook of automated reasoning, Elsevier Science Publishers B. V., Amsterdam, The Netherlands, 2001 | stable model semantics;alternating fixpoint logic;deductive database;logic programming;logical query evaluation;nonmonotonic reasoning;well-founded semantics |
627792 | Storage Allocation Policies for Time-Dependent Multimedia Data. | AbstractMultimedia computing requires support for heterogeneous data types with differing storage, communication, and delivery requirements. Continuous media data types such as audio and video impose delivery requirements that are not satisfied by conventional physical storage organizations. In this paper, we describe a physical organization for multimedia data based on the need to support the delivery of multiple playout sessions from a single rotating-disk storage device. Our model relates disk characteristics to the different media recording and playback rates and derives their storage pattern. This storage organization guarantees that as long as a multimedia delivery process is running, starvation will never occur. Furthermore, we derive bandwidth and buffer constraints for disk access and present an approach to minimize latencies for non-continuous media stored on the same device. The analysis and numerical results indicate the feasibility of using conventional rotating magnetic disk storage devices to support multiple sessions for on-demand video applications. | Introduction
Files comprised of multimedia data are different from conventional data files in many re-
spects. As shown in Table 1, multimedia data, and hence files, consume enormous space
and bandwidth relative to program files or "text" documents. For example, a single feature-length
JPEG-compressed movie can require over 2 Gbytes of memory for digital storage.
Multimedia data can also be sensitive to timing during delivery. When a user plays-out or
records a time-dependent multimedia data object, the system must consume or produce data
at a constant, gap-free rate. This means that the file system must ensure the availability of
sufficient data buffer space for the playback or recording process. For example, to maintain
a continuous NTSC-quality video playback, a file system must deliver data at a rate of 30
frames/s. Moreover, the delivery mechanism must also satisfy the intermedia synchronization
requirement among related media (e.g., the lip synchronization between audio, video,
and subtitles).
Table
1: Properties of Multimedia Data
Data Type                                                             Buffer/Bandwidth
Single text document (HTML)                                           - 80 Kb/document
Voice-quality audio (8 bits @ 8 KHz)                                  64 Kb/s
CD quality audio (stereo @ 44.1 KHz)                                  1.4 Mb/s
NTSC-quality video (uncompressed @ 512 x 480 pixels, 24 bits/pixel)   5.9 Mb/frame (177 Mb/s)
JPEG-compressed NTSC video                                            - 7 Mb/s - 3.5 Mb/s
MPEG-I-compressed NTSC video                                          - 1.5 Mb/s
MPEG-II-compressed NTSC
HDTV-quality video (uncompressed @ 1248 x 960 pixels, 24 bits/pixel)  28.7 Mb/frame (863 Mb/s)
A multimedia file system must reconcile the deficiencies of conventional storage subsys-
tems. A typical storage subsystem accesses data by positioning its read heads at the desired
location for a data block. A random allocation approach, regardless of the time-dependency
for multimedia data, increases the head and seek switching frequencies and resultant access
latency. In addition, the electro-mechanical nature of secondary-storage devices requires the
use of scheduling disciplines modified to meet the throughput and real-time requirements
of multimedia data delivery. When a multimedia file system transfers data from a disk,
it must guarantee that multimedia data arrive at the consuming device on time. It must
also meet the timing requirements of the multimedia object; however, this task is difficult
due to the unpredictability of disk seek latencies. Furthermore, in a multitasking system,
more than one user can request multimedia or non-real-time services, thereby requiring the
management of multiple sessions. In contrast, the data allocation and scheduling strategies
for conventional file systems are only concerned with the throughput, latency, and storage
utilization for random access to files. Therefore, we seek to provide real-time behavior for a
set of multimedia sessions originating from a single storage system; typically a conventional
rotating-disk magnetic storage device. Note that we constrain ourselves to cases in which
the aggregate bandwidth of sessions is less than or equal to the capacity provided by a single
device; we do not consider RAID or other data distribution approaches in this context.
A number of related works exist in this area. The problem of satisfying timing requirements
for multimedia data has been studied as a conceptual database problem [11], as an
operating system delivery problem [1, 12, 13, 22], as a physical disk modeling problem [6, 9,
10, 18], and as a physical data organization and performance problem [5, 7, 8, 14, 21, 23, 24].
Rangan et al. [16] propose a model for storing real-time multimedia data in file systems.
The model defines an interleaved storage organization for multimedia data that permits the
merging of time-dependent multimedia objects for efficient disk space utilization. In a related
work, Rangan et al. [15] develop an admission control algorithm for determining when
a new concurrent access request can be accepted without violating the real-time constraints
of existing sessions. Polimenis [14] shows that the hard requirement for the acceptance of
a set of real-time sessions is the availability of disk bandwidth and buffer space. Gemmell
and Christodoulakis [8] establish some fundamental principles for retrieval and storage of
time-dependent data. A theoretical framework is developed for the real-time requirements
of multimedia object playback. Storage placement strategies for multichannel synchronized
data are also examined. P. Yu, Chen, and Kandlur [24] present an access scheme called the
grouped sweeping scheme (GSS) for disk scheduling to support multimedia applications by
reducing buffer space requirements. C. Yu et al. [21, 23] describe approaches to interleaving
time-dependent data to support constant playout rates. Tobagi et al. [20] develop a Streaming
RAID approach to handle video traffic on a disk array. Chiueh and Katz [4] propose
a multi-resolution video representation scheme based on Gaussian and Laplacian Pyramids,
which allows the parallel disk array to deliver only the absolute minimum amount of data
necessary.
In this paper, we propose a physical data organization and file system for multimedia
data. We interleave different media objects within a block so as to maintain temporal
relationships among those objects during retrieval (Fig. 1). We also define an allocation
policy based on the contiguous approach to prevent frequent head movement that can cause
significant seek latencies and to support editing on multimedia files. The behavior of a
conventional magnetic rotating-disk storage device is analyzed with respect to the mean and
variance of the seek latency.
Figure
1: Physical Storage Organization for a Rotating Disk Device (a disk track holding interleaved video, audio, text, and reserved blocks)
A round-robin scheduling discipline is chosen for the service of multimedia sessions as in
other work [12, 14, 17], permitting the disk to switch alternately between multimedia tasks
and other non-real-time tasks. The file system achieves a high disk bandwidth utilization
by assigning long disk reads or writes and thus sharing the seek and latency delays among a
large number of bits read or written, resulting in a small overhead per transferred unit. We
introduce a disk access schedule which is a refined model based on the work of Polimenis
[14]. We show the constraints which must be satisfied to permit the acceptance of a set of
multimedia sessions including bandwidth and buffer considerations. This work differs from
other approaches in that we establish a probabilistic model for our disk access schedule to
accept a set of sessions rather than using a guarantee of a worst case for the frequency of
starvation.
The remainder of this paper is organized as follows. In Section 2 we describe the storage
organization and allocation policy for multimedia objects to facilitate disk bandwidth uti-
lization. In Section 3 we analyze the probabilistic behavior of disk seek latency. In Section
4 we show an access schedule for the disk and present a periodic service discipline for multimedia
objects based on the probabilistic model. In Section 5 we describe how this schedule
reduces the required buffering and increases the number of supported multimedia sessions.
Section 6 concludes the paper.
Storage Organization for Multimedia Objects
Most existing storage server architectures employ random allocation of blocks on a disk.
This type of organization is not sufficient to meet the real time requirements of multimedia
applications because the disk latency between blocks of a media object is unpredictable [17].
The file system cannot guarantee satisfaction of the deadline for the retrieval of multimedia
data.
We view a multimedia object as an entity comprised of mixed-type data components.
Without loss of generality, we model a typical multimedia object as being comprised of
audio, video and text. These three components can be viewed as distinct even though they
might be recorded at the same time [17]. During retrieval, these three streams are sent to
three output queues for playout and ultimately are experienced by the user. From a timing
perspective, the data streams can arrive at the file system with specific implied timing (e.g.,
live audio) or can arrive at the file system arbitrarily. For example, live video and audio can
be recorded at the same time while subtitles are recorded later.
This leads us to the issue of data interleaving for maintaining intermedia synchronization.
The advantage of interleaving multiple data streams into a single layout is the preservation
of timing between related streams. The penalty with this scheme is the overhead associated
with data combination and redistribution. These layouts are also called homogeneous (non-
interleaved) and heterogeneous (interleaved) layouts [17]. The homogeneous layout stipulates
storage of single medium data in blocks without interleaving. However, timing relationships
among media are stored as part of the interrelated media.
In the homogeneous approach, each medium requests a session in a round-robin schedule.
When retrieving a multimedia object, the file system must switch between sessions which
can consume additional disk bandwidth and degrade throughput. There is no such problem
in the heterogeneous approach. We merge different media data within a block based on their
temporal relationships and can treat the aggregation of data as a single media object. There-
fore, there is only one session for each multimedia object for the heterogeneous approach.
For this reason we use the heterogeneous layout approach in this work. In our approach,
multiple media streams being recorded are stored within the same block and the length of
each object is proportional to its consumption rate.
In terms of intra-media timing, interleaving of data becomes important to maintain
smooth, gap-free playout. In the extreme case, contiguous space allocation yields the highest
effective bandwidth from a disk, but with a penalty for costly reorganization during data
insertions and updates:
1. With the interleaved policy, multimedia data are stored on disk in an interleaved fashion
[16, 17, 21, 23]. This approach can guarantee continuous retrieval and smooth the
speed gap between disk and multimedia devices. Therefore, it can reduce the buffer
requirement significantly. Usually, it can be applied on optical disks or in a single user
environment.
2. With the contiguous policy, multimedia data are stored on a disk contiguously. This
policy can also provide continuous retrieval, but entails enormous copying overhead
during insertions and deletions [16]. However, it is the most efficient way for bandwidth
utilization [14]. This approach can be used for data that is seldom modified such as
read-only digital entertainment video.
In our approach, we refine the contiguous scheme using a two-tiered structure. On the
first level, we propose a doubly-linked list which is created based on the temporal relations
for a multimedia object [11]. Each item in the list contains a pointer which points to the
disk address of a media block. The reason for the doubly-linked list structure is to support
reverse playback of multimedia objects. On the second level, we store the multimedia data
that are indicated in the first level, permitting the reversal of a multimedia presentation
at any moment. Multimedia objects are stored sequentially on the disk. Subsequent media
blocks are put on adjacent, unoccupied blocks. If a disk track or cylinder becomes full (or the
next block is occupied) this policy places the multimedia data in the next nearest available
block.
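The two-tiered idea can be sketched compactly: a doubly-linked index whose entries hold the disk addresses of the interleaved media blocks, so that a presentation can be traversed forward or backward. The class and the block addresses below are placeholders, not the paper's on-disk format.

# Two-tiered organization sketch: a doubly-linked index (first level)
# whose entries point to interleaved media blocks (second level).

class IndexEntry:
    def __init__(self, block_address):
        self.block_address = block_address   # where the media block lives
        self.prev = None
        self.next = None

def build_index(block_addresses):
    head = tail = None
    for addr in block_addresses:
        entry = IndexEntry(addr)
        if tail is None:
            head = tail = entry
        else:
            tail.next, entry.prev = entry, tail
            tail = entry
    return head, tail

def play(entry, direction='forward'):
    while entry is not None:
        yield entry.block_address            # fetch/deliver this block
        entry = entry.next if direction == 'forward' else entry.prev

head, tail = build_index([1040, 1041, 1042, 1050])  # nearest-available blocks
print(list(play(head)))                  # forward playout
print(list(play(tail, 'backward')))      # reverse playout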
3 Disk Latency and Bandwidth
To support multimedia data requires the manipulation of large files and the support for
large data consumption rates. It is the responsibility of the file system to organize the
data for efficient storage and delivery within space and I/O bandwidth limitations. In most
disk drive subsystems, the dominant inhibitor to achieving maximum disk I/O bandwidth
is seek latency. However, seek latency can be reduced through contiguous writes or reads of
time-dependent multimedia data. When these data become fragmented and discontinuous,
effective disk bandwidth diminishes due to additional seek and rotational latencies involved
in each discontinuity.
Table
2: Disk Parameters
Symbol Identification Value Units
S dt Size of a single track 54,900 bytes
N head Number of tracks in a cylinder (number of disk heads) 15 tracks
T hh Time to change head to another surface 2,000 µs
T tt Time to cross a track 21 µs
T rot Rotation time for a disk 16,700 µs
R t Data transfer rate within a track 3.29 Mbyte/s
c Number of cylinders per disk 2,107 cylinders
In our modeling approach, we consider latencies attributed to data fragmentation as
well as session switching latencies. In the proposed scheduling approach, the disk is cycled
through a set of independent multimedia sessions. Because sessions exist for many cycles
and their access is unpredictable due to user interaction (e.g., start, stop, reverse), there are
significant session switching latencies. In this section, we determine these disk latencies and
their distributions through analysis for a typical hard disk storage unit suitable for a Unix
workstation [19]. Parameters characterizing such a device are summarized in Table 2 using
symbols adopted and extended from Kiessling [10].
3.1 Delay Latency
When a user edits the multimedia file or the file system schedules another process to access
the disk, the next block to be retrieved can be arbitrarily located anywhere on the device. The
disk head must start up, cross a number of tracks, switch to a recording (writing) surface and
rotate to the indicated block. Assuming that the location of the desired block is uniformly
distributed on the whole disk, then the total latency is T latency = T cross + T switch + T rotate ,
where T cross is the arm positioning time for the disk head move to the correct track, T switch
is the delay to switch the head to the other surface, and T rotate is the delay for disk rotation.
We have derived various statistical disk performance behaviors from these base parameters,
and summarize them in Table 3.
Table
3: Derived Statistical Disk Behavior
Symbol Value Units
E(T cross ) ms
oe 2 cross ms 2
oe cross 10.4 ms
E(T switch ) ms
oe 2 switch ms 2
oe switch ms
E(T rotate ) ms
oe 2 rotate 92.96 ms 2
oe rotate 9.64 ms
E(T latency ) ms
oe 2 latency 201.6 ms 2
3.2 Disk Bandwidth Normalization
In an ideal disk storage organization, data can be accessed without latencies, and the data
transfer rate (or bandwidth) is dependent only on the disk rotational speed. In a real disk,
latencies are introduced due to track and platter switching, and disk rotation. These latencies
are determined by the layout of data on the disk and the scheduling policy for their access.
We can normalize the data transfer rate based on a complete disk scan policy as follows:
once the head reaches and retrieves the first block of an object, it retrieves the adjacent block
in the same track. If the whole track has been retrieved, it switches to the next surface but
remains on the same cylinder. If the whole cylinder has been retrieved, the disk arm crosses
to the next track. We normalize by considering each of these head motions in the complete
scan.
We define the size of a block as M . The frequency P switch for switching the head to the other
surface while reading a block is P switch = M / S dt .
The size of a cylinder is S dt \Theta N head . Thus, the frequency P cross for the arm to cross to
the next track is P cross = M / (S dt \Theta N head ). Let T M be the time to transfer a block from disk in the
optimal case. Then
Figure
2: Layout Model (a period T period divided into latency and session intervals for sessions 1-3, followed by leftover time; sessions may be playback or recording)
T M = M / R t + P switch \Theta T hh + P cross \Theta T tt
T represents the minimum transfer time to transfer a single byte from the disk:
T = T M / M = 1 / R t + T hh / S dt + T tt / (S dt \Theta N head )
Let 1/T be the maximum transfer rate onto the disk. We normalize the disk bandwidth
R as:
R = 1 / T (1)
Therefore, we can use this derived value as the maximum effective bandwidth for data
transfer from the disk.
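Assuming the per-byte cost model T = 1/R t + T hh /S dt + T tt /(S dt N head ) given above (a reading of the garbled equations, so treat the exact formula as an assumption), the normalized bandwidth R = 1/T follows directly from the Table 2 parameters; the number printed below comes from that assumption, not from the paper.

# Normalized disk bandwidth R = 1/T from the Table 2 parameters,
# under the assumed per-byte cost model.

S_dt   = 54_900          # bytes per track
N_head = 15              # tracks per cylinder
T_hh   = 2_000e-6        # head switch time (s)
T_tt   = 21e-6           # track-to-track seek time (s)
R_t    = 3.29e6          # in-track transfer rate (bytes/s)

T = 1.0 / R_t + T_hh / S_dt + T_tt / (S_dt * N_head)   # seconds per byte
R = 1.0 / T                                            # bytes per second
print(f"normalized bandwidth R = {R/1e6:.2f} Mbyte/s")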
4 Disk Access Scheduling
In this section we show the constraints for the acceptance of a set of multimedia sessions
and the requirements for buffer size and disk bandwidth.
4.1 Scheduling Layout Model
In the layout model of Polimenis [14], a working period T period is defined for a set of multimedia
tasks and other non-real-time tasks as shown in Fig. 2.
During a working period, the schedule switches among all multimedia sessions. It carries
enough data into the buffer for the ith session to keep task i busy until its term is active
in the next working period. If R is the whole disk bandwidth that we derived in Equ. 1,
then each session i shares an interval T (i) proportional to its consumption rate R c (i). The
amount of data accessed during T (i) is equal to the amount consumed during the period
T period as follows:
T (i) = (R c (i) / R) \Theta T period (2)
In this equation, R c (i) represents the consumption rate for session i. Let the ith session
contain k different media data (video, audio, text, etc.). For viable multimedia data delivery,
the bandwidth lost due to task switching latencies plus the bandwidth consumed by each
multimedia session must be less than the normalized disk bandwidth (where the period is
fixed unless we change the number of sessions).
4.2 Bandwidth Requirements
In this section, we derive the bandwidth constraint based on the round-robin scheduling
model. Let n(i) be the number of bytes accessed for medium i during a working period
T period . The total number of bytes n to be read during a period T period is then n = \sum_{i=1}^{m} n(i).
Because the time interval T (i) for each medium is proportional to its bandwidth requirement
R c (i) and to R (Equ. 2), we have n(i) = T period \Theta R c (i), and then
n = T period \Theta \sum_{i=1}^{m} R c (i) (3)
As shown in Fig. 2, the total interval used for multimedia sessions plus the disk seek
latency should be less than the working period T period in order to have sufficient bandwidth
for other non-real-time tasks. On the other hand, the period T period must be greater than the
time needed in the worst case to transfer data from (or to) the disk for all sessions. Suppose
we have m multimedia sessions. Let R be the total disk bandwidth and T latency (i) be the
task switching latency between sessions
R
latency
R c (i) (4)
where n(i)/R c (i) should be equal to T period to maintain a steady-state. This means that the
amount of data read from the disk for each session i during a period is exactly equal to the
amount of data consumed by the ith consumer process. Thus, by Equ. 4,
latency (i)
latency (i)
, then
R ?R c (i)
latency (i)
R c (i)
latency (i)
The right-hand side of the above equation can be divided into two parts. The first part
is the bandwidth requirement of all multimedia sessions. The second part is the factor due
to the seek latency between any two sessions. Thus,
R > \sum_{i=1}^{m} R c (i) + R seek (5)
and
R seek = (R / T period ) \Theta \sum_{i=1}^{m} T latency (i) (6)
The R seek is the bandwidth wasted, or lost, when the disk head is switched between
sessions.
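The bandwidth condition can be phrased as a small admission test. In the sketch below the seek term is modeled as R times the fraction of the period spent switching, matching Equations 5-6 as reconstructed above; the session parameters and latency values are illustrative, not measured.

# Bandwidth admission test: total consumption plus the bandwidth lost to
# session switching must stay below the normalized disk bandwidth R.

def bandwidth_ok(consumption_rates, switch_latencies, t_period, R):
    # consumption_rates: R_c(i) in bytes/s; switch_latencies: T_latency(i) in s.
    r_seek = R * sum(switch_latencies) / t_period     # bandwidth lost to seeks
    return sum(consumption_rates) + r_seek < R

R = 2.9e6                       # normalized bandwidth, bytes/s (assumed)
rates = [0.4e6] * 5             # five sessions at 0.4 Mbyte/s each
latencies = [36e-3] * 5         # about 36 ms switching latency per session
print(bandwidth_ok(rates, latencies, t_period=2.0, R=R))            # True
print(bandwidth_ok(rates + [0.4e6], latencies + [36e-3], 2.0, R))   # still True, but closer to the limit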
4.3 Buffer Requirements
In Section 4.1, we showed the bandwidth requirements for a set of multimedia sessions
without considering their acceptability in terms of buffer utilization. In the layout model,
Figure
3: Buffer Consumption (buffer use of session i over time within a period T period )
each session i shares only part of a period (Fig. 2). Each session must carry enough data
into the buffer to keep process i busy until it is reserviced, otherwise, the process starves.
Therefore, the second condition to accept a set of multimedia sessions is the availability of
sufficient buffer space. As illustrated in Fig. 3, session i shares a duration T (i) in a disk
access.
When session i is active, its buffer size increases at a rate R \Gamma R c (i). Outside this duration,
the buffer size shrinks at a rate R c (i). Let B(i) be the buffer requirement for session i. Then
B(i) > (R \Gamma R c (i)) \Theta T (i), or B(i) > R c (i) \Theta (T period \Gamma T (i)). If we let B be the total buffer
requirement, then B > \sum_{i=1}^{m} [R c (i) \Theta (T period \Gamma T (i))]. Rewriting with Equ. 2, we get:
B > \sum_{i=1}^{m} R c (i) \Theta T period \Theta (1 \Gamma R c (i) / R) (7)
Therefore, we have defined the buffer constraint that can be applied to determine the
feasibility of adopting additional multimedia sessions.
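The buffer bound can also be computed directly: during its slot T(i), as given by Equ. 2, a session's buffer fills, and outside the slot it drains at R c (i). The values below are illustrative.

# Buffer requirement sketch: each session must hold
# B(i) > R_c(i) * (T_period - T(i)).

def buffer_needed(consumption_rates, t_period, R):
    total = 0.0
    for r_c in consumption_rates:
        t_i = t_period * r_c / R          # Equ. 2: slot proportional to R_c(i)
        total += r_c * (t_period - t_i)
    return total

R = 2.9e6                                  # assumed normalized bandwidth
rates = [0.4e6] * 5
print(f"{buffer_needed(rates, t_period=2.0, R=R)/1e6:.1f} Mbytes")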
4.4 Length of Period T period
In Fig. 2 and Equ. 4, we show that the period T period must be greater than the sum of all
individual session periods in order to transfer data from (or to) disk for all sessions. Let D
be the leftover duration as shown in Fig. 2. For each period, the disk spends T transfer to
transfer data, where T transfer = T period \Gamma D. In a period, session i shares a
duration T (i) based on its consuming rate R c (i). Therefore,
latency
R c (i)
To maintain a steady-state for the system, the data read from the disk during T (i) for
session i must be equal to the amount consumed during the period T period . Otherwise, the
buffer can starve or grow without bound. Thus,
latency (i) \Theta
R
If we let U be the utilization, where
let C be the total latencies,
then the minimum period for a set of multimedia sessions is [14]:
In Equ. 8, T latency (i) represents the seek latency corresponding to the switch from session
Because the next retrieval for session i can be allocated anywhere on
the disk, the latency T latency is a random variable. In Section 3, we derive the average seek
latency and the variance of the seek latency. Let E(T latency ) be the average seek latency and
latency be the variance of seek latency (Table 3). The expectation E(T ) and variance oe 2 (T )
of T in Equ. 8 are as follows:
latency ) \Theta
R
latency \Theta
R
By the above equations, we know T is also a random variable, so we cannot assign T to
be the lower bound of the period T min
period . Let p be the probability of starvation that can be
tolerated for the mth session. By Chebyshev's Inequality we have P [jT min
period - E(T
Figure
4: Distribution of T (probability distribution of T with mean E(T); the tail gives the frequency of starvation)
This means that if the lower bound T min
period is chosen, the probability for the mth session
to be accepted successfully is greater than
By Equ. 10, if we choose T period equal to the lower bound E(T )
, we can guarantee
that the starvation rate for session m will be less than p. Equation 10 is always true; however,
it does not mean that the starvation rate is equal to p. In the heavy load situation, when the
number of multimedia sessions m is very large, by the Law of Large Numbers, the starvation
rate will approach p. In the light load case, the starvation rate can be much lower than p.
Conversely, we can use a shorter period T period to keep the starvation rate under p.
A period T period for a set of multimedia sessions must meet two hard requirements. In
Section 4.2, we derived the bandwidth requirement, but it was not sufficient to determine
whether to accept a set of multimedia sessions. The system must also provide sufficient
buffering for each multimedia session. In the lightly loaded situation, there are always
enough buffering to support multimedia sessions. However, buffering becomes significant
when the number of multimedia sessions m is large. In this case, compared to the period
period , the duration T (i) assigned to each multimedia session is small. We simplify Equ. 7
by ignoring the T (i) and the result is still valid:
B > T period \Theta \sum_{i=1}^{m} R c (i) (11)
From the equation above, we see that the buffer requirements are dependent on the length
of period T period . Let B max be the maximum buffer space that is available. There is an upper
bound T max
period for the period that can be accepted for a set of multimedia sessions; otherwise,
the total buffer requirements will exceed the available buffer space B max . From Equ. 11, we have
T max period = B max / \sum_{i=1}^{m} R c (i) (12)
The equations derived above are for the general case where the consumption rates
for multimedia sessions have different values. In real applications, the disk bandwidth requirements
for multimedia sessions can have the same value. In the following example, we
assume, for simplicity, that the consumption rates for all multimedia sessions are the same
and evaluate the buffer consumption and number of sessions supported.
Example 1 In this example, we assume all multimedia sessions request the same disk
bandwidth R c . Each multimedia session includes video data at a rate of 1.92 Mb/frame
@ 30 frames/s with a 20:1 compression ratio and audio data at a rate of 1.4 Mb/s with a
4:1 compression ratio. Each multimedia session consumes disk bandwidth at a rate of 0.4
Mbyte/s. Using the disk parameters from Tables 2 and 3 we pick the average disk latency
E(T latency ) equal to 35,965 µs and the standard deviation oe latency equal to 14,212 µs. For
Equ. 10 we let p be 0.05. We then derive the lower bound for different numbers of supported
sessions using Equ. 10 assuming the availability of 16 Mbytes of main memory that can be
assigned for buffering. The upper bound of a period is then determined by Equ. 12.
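The 0.4 Mbyte/s figure can be reproduced from the stated rates and compression ratios, assuming 30 frames/s for NTSC and 8 bits per byte; the arithmetic below follows from those assumptions.

# Per-session consumption rate in Example 1:
# compressed video plus compressed audio.

video_bits_per_frame = 1.92e6
frames_per_second    = 30
video_compression    = 20
audio_bits_per_sec   = 1.4e6
audio_compression    = 4

bits_per_sec = (video_bits_per_frame * frames_per_second / video_compression
                + audio_bits_per_sec / audio_compression)
print(f"{bits_per_sec / 8 / 1e6:.2f} Mbyte/s")   # about 0.40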
Table
4: File System Performance for Example 1
N      100 % Bandwidth Utilization                   100 % Buffer Allocation
       T min period (ms)   Buffer Allocation (bytes)  T max period (ms)   Bandwidth Utilization
6      14,013 *            28,163,000                 6,667 *             100.80 %
* Insufficient memory.
Let N be the number of multimedia sessions and T min
period be the lower bound for the period.
If T min
period is chosen then there is no disk bandwidth left. By Equ. 7 we know that the buffer
requirement is minimized and we have
B = \sum_{i=1}^{N} { R c (i) \Theta [ T min period \Gamma T (i) ] } = N \Theta R c \Theta T min period \Theta (1 \Gamma R c / R)
Figure
5: Number of Sessions vs. Period Length (lower and upper bounds on the period, in seconds, versus the number of multimedia sessions)
The results of this analysis are summarized in Table 4. The third column presents the
buffer requirement for N multimedia sessions when we chose T min
period . The fourth column
indicates the upper bound for period. In this case, the entire 16 Mbytes of memory are
assigned to buffering, allowing us to minimize the use of disk bandwidth given the constraints.
In our layout model, a period T period is equal to the sum of all durations assigned to
multimedia sessions plus the session switching latency between sessions plus the leftover used
for other non-real-time process (Fig. 2). The percentage P of disk bandwidth consumed by
multimedia sessions can be considered as the interval assigned to the multimedia sessions,
plus disk latency lost in task switching between multimedia sessions, divided by the length
of the period:
P = ( \sum_{i=1}^{N} T (i) + \sum_{i=1}^{N} T latency (i) ) / T period = N \Theta R c / R + N \Theta T latency / T period
In the fifth column of Table 4 we show the percentage of disk bandwidth consumed by
the multimedia sessions when the upper bound T max
period is chosen.
When we increase the number of supported sessions, both buffer and bandwidth requirements
will increase (Fig. 5). If there are five multimedia sessions accessing the file system,
the system can perform within these constraints, but it cannot accept additional multimedia
sessions. In this case an additional session causes the request for a 28,163,000 byte buffer
and 100.8% of disk bandwidth, both of which exceed the capacity of the system.
From the analysis presented in Sections 3 and 4, it is appropriate to describe considerations
for choosing the length of a round-robin scheduling period, and to describe the impact of
session consumption rates.
5.1 Consideration for Choosing a Period
Two hard requirements must be met when choosing the length of a period, otherwise the
system cannot function for a given workload. A period must be greater than T min period to meet
the bandwidth requirement and less than T max period to meet the buffer requirement. These
constraints are summarized as:
T min period \leq T period \leq T max period
A new multimedia session can be accepted only if it satisfies this relationship. Fig. 5 illustrates
the ranges of sessions supported that satisfy these constraints. The region enveloped
by the lower bound and upper bound is safe. In Table 4, for the sixth session, the lower bound
of period T min period is 14,013 ms, the upper bound T max period is 6,667 ms. Since T min period > T max period ,
we know the file system cannot accept six multimedia sessions at the same time.
We estimate the upper and lower bound very conservatively (due to the large m assumed).
The real upper bound can be larger and the lower bound can be lower than we have derived.
However, when the number of sessions increases, our estimates approach the real upper and
lower bounds. There are two justifications for our assumption. First, in the lightly loaded
case, there are always enough resources for use. We are more concerned about the heavily
loaded situation in which the number of multimedia sessions m is large. Second, it is not
necessary or wise to choose a period T period close to either the upper or lower bounds because
of the degradation of the throughput of other non-real-time data transfers. For a general-purpose
machine, a multimedia file system not only has to meet the hard requirements
above, but also must leave enough bandwidth for these other non-real-time transfers. Let
A = D / T period be the percentage of disk bandwidth used to read data from the disk
for non-multimedia jobs during every period T period . For a set of multimedia sessions, A is
maximized when T
period [14]. This means if we increase the period T period we can
have additional disk bandwidth leftover for non-multimedia tasks.
From a memory perspective, a multimedia file system must minimize its buffer utilization
to make memory available for other system tasks. From Equ. 11, we see that when T period = T min period,
the buffer requirement is minimized. From the above two results, we seek to
increase the period for more disk bandwidth for non-multimedia traffic but also to reduce the
period for more free memory for non-multimedia tasks. In the extreme case, if we minimize
the T period value, we minimize the buffer requirement and maximize free memory for other
non-multimedia tasks. At the same time, the leftover for disk bandwidth is zero. Similarly,
maximizing the T period can free the maximum disk bandwidth for other non-multimedia
processes to use but will also result in complete memory consumption. In this case, even if
the disk has ample bandwidth available, no non-multimedia process can use it. Thus, these
two soft requirements are in conflict.
To improve the response time for non-multimedia processes, we can change the period
period dynamically with feedback from the operating system to balance resource allocation.
For example, if there are tasks suspended due to disk bandwidth shortages and there is free
buffer space available, the file system can extend the period T period in order to have more
disk bandwidth to assign to non-multimedia processes. If there are non-multimedia processes
waiting for memory and the disk is idle during the leftover interval, the file system can shrink
the period T period in order to free memory for additional non-multimedia processes.
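The dynamic adjustment described above can be sketched as a simple feedback rule; the probes, thresholds, and step size below are placeholders rather than part of the paper's design.

# Feedback rule for adapting T_period within its admissible range.

def adjust_period(t_period, t_min, t_max,
                  tasks_waiting_for_disk, free_buffer_available,
                  tasks_waiting_for_memory, disk_idle_in_leftover,
                  step=0.1):
    if tasks_waiting_for_disk and free_buffer_available:
        t_period = min(t_max, t_period * (1 + step))   # give other tasks more disk time
    elif tasks_waiting_for_memory and disk_idle_in_leftover:
        t_period = max(t_min, t_period * (1 - step))   # free buffer memory
    return t_period

print(adjust_period(2.0, 1.5, 6.0, True, True, False, False))   # 2.2
print(adjust_period(2.0, 1.5, 6.0, False, False, True, True))   # 1.8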
Table
5: Refined Model vs. Worst Case
Refined Model Worst Case
Period (ms) Buffer Allocation (bytes) Period (ms) Buffer Allocation (bytes)
For a multimedia on-demand server, the file system need only provide service to multimedia
processes. In this situation, we chose the lower bound to achieve the highest disk
utilization. Given the physical disk characteristics we can determine the buffer requirements.
By Fig. 3 and Equ. 7, we know that the amount of consumed buffer space is determined by
the period length T period . By Equ. 8, the period length depends on the sum of random variables
latency (i). We assume the worst case, take the maximum value for all task switching
latencies T latency (i), and decide the period length. This assumes that starvation can never
happen, when in practice it will only rarely happen. In a refined model, we define an acceptable
rate of non-starvation, and derive the period length which guarantees a
set of multimedia sessions can be accepted with at least a probability q of not starving. In
Table
5, we define q = 95%. In this case, if there are five multimedia sessions in the system
we can save 20.8% of available memory.
5.2 Consumption Rate for Multimedia Sessions
There are several factors that affect the consumption rate for a multimedia session. The most
important factor is the data compression ratio affecting the multimedia data. For example,
for video data, a compression ratio in the range of 1:10 to 1:100 is not uncommon.
In Fig. 6, we show a set of constrained bandwidth-buffering regions for sessions with
differing data rates due to a range of compression rates. Parameters are otherwise identical
to that of Example 1. This figure illustrates the safe region for various consumption rates
and allows the selection of period length T period and buffer use for a given number of sessions.
By varying the compression rate we can reduce the bandwidth required for any (video)
session and increase the number of multimedia sessions supported per device. Assuming
a uniform bandwidth requirement for each session, Fig. 7 shows the number of sessions
supported for a range of consumption ratios (bandwidth).
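Ignoring seek overhead, the bandwidth-limited session count is simply the integer part of R / R c ; the sketch below also applies the buffer ceiling of Equations 11-12 with a fixed period, which is a simplification since the period itself depends on the number of sessions. All values are illustrative.

# Sessions supported for a range of per-session consumption rates,
# limited by bandwidth (R) and by buffer space (B_max).

R     = 2.9e6       # assumed normalized bandwidth, bytes/s
B_max = 16e6        # available buffer, bytes
T_min = 2.0         # an assumed lower bound on the period, seconds

for r_c in (0.1e6, 0.2e6, 0.4e6, 0.7e6):
    by_bandwidth = int(R // r_c)
    by_buffer    = int(B_max // (T_min * r_c))   # from B > T_period * N * R_c
    print(f"R_c = {r_c/1e6:.1f} Mbyte/s -> up to {min(by_bandwidth, by_buffer)} sessions")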
5.3 Variable Video Encoding Rates
In our analysis we have assumed constant-bit-rate (CBR) video encoding. This assumption
greatly simplifies analysis and is reasonable based on the MPEG-I ISO 11172 CBR option.
However, we recognize that CBR video is uncommon. Our model can be modified to accommodate
variable-bit-rate (VBR) compression schemes by aggregating several VBR streams
together [3]. For this situation, not only is the disk production rate unpredictable but the
Figure
6: Number of Sessions vs. Period Length (period in seconds versus number of multimedia sessions)
Figure
7: Consumption Rate vs. Number of Sessions (number of sessions supported versus per-session consumption rate in Mbyte/s)
display consumption can be unpredictable as well, particularly if software-only decompression
of video is used. We view disk seek latencies and the transfer time of VBR streams
as random variables and use a similar probabilistic model to guarantee that the frame loss
ratio will be under a given threshold. Moreover, in [3], we describe an algorithm to reduce
the impact of frame losses due to disk starvation.
6 Conclusion
When a multimedia file system transfers data from a disk, it must guarantee that multimedia
data arrive at the playout device with a minimum latency. It must also satisfy the timing
requirements implied by the nature of the multimedia object (e.g., intermedia synchronization
among media). However, disk seek latency is very significant and can be unpredictable
in a general-purpose file system.
In this paper we presented a physical data organization for supporting the storage of
time-dependent multimedia data. We interleaved different media objects within a block
to maintain timing among the objects during data storage and retrieval. Furthermore, we
introduced a probabilistic model as a refinement of the round-robin scheduling discipline
that supports concurrent multimedia sessions. It was found to reduce the amount of required
buffering during data transfer from storage. We showed the acceptance conditions
for additional multimedia sessions including bandwidth and buffer constraints, and a means
for balancing these two parameters to support the largest number of multimedia sessions
originating from a single device.
--R
"A Continuous Media I/O Server and Its Synchronization Mechanism,"
"Physical Storage Organizations for Time-Dependent Multimedia Data,"
"A Scalable Video-on- Demand Service for the Provision of VCR-Like Functions,"
"Multi-Resolution Video Representation for Parallel Disk Array,"
"Design and Performance Considerations for an Optical Disk-based, Multimedia Object Server,"
"Disk Shadowing,"
"Optimal Placement of High-Probability Randomly Retrieved Blocks on CLV Optical Disks,"
"Principles of Delay-Sensitive Multimedia Data Storage and Retrieval,"
"Parity Striping of Disk Arrays: Low Cost Reliable Storage with Acceptable Throughput,"
"Access Path Selection in Databases with Intelligent Disc Subsystems,"
"Interval-Based Conceptual Models for Time-Dependent Multimedia Data,"
"The Design and Implementation of a Continuous Media Storage Server,"
"Multimedia/Realtime Extensions for the Mach Operating System,"
"The Design of a File System that Supports Multimedia,"
"Designing an On-Demand Multimedia Service,"
"Efficient Storage Techniques for Digital Continuous Mul- timedia,"
"Designing File Systems for Digital Video and Audio,"
"An Introduction to Disk Drive Modeling,"
Seagate Wren 8 ST41650N Product Manual (Volume
"Streaming RAID - A Disk Array Management System for Video Files,"
"Placement of Audio Data on Optical Disk,"
"A Runtime Environment for Multimedia Communications,"
"Efficient Placement of Audio Data Optical Disks for Real-Time Applications,"
"Design and Analysis of a Grouped Sweeping Scheme for Multimedia Storage Management,"
--TR
--CTR
Gang Qu , Malena Mesarina , Miodrag Potkonjak, System Synthesis of Synchronous Multimedia Applications, Proceedings of the 12th international symposium on System synthesis, p.128, November 01-04, 1999
Kyungoh Lee , Heon Y. Yeom, An effective admission control mechanism for variable-bit-rate video streams, Multimedia Systems, v.7 n.4, p.305-311, July 1999
Keun Hyung Kim , Seog Park, Storage System for Supporting More Video Streams in Video Server, Multimedia Tools and Applications, v.13 n.2, p.177-196, February 2001
Guido Nerjes , Peter Muth , Gerhard Weikum, Stochastic service guarantees for continuous data on multi-zone disks, Proceedings of the sixteenth ACM SIGACT-SIGMOD-SIGART symposium on Principles of database systems, p.154-160, May 11-15, 1997, Tucson, Arizona, United States
Gang Qu , Miodrag Potkonjak, System synthesis of synchronous multimedia applications, ACM Transactions on Embedded Computing Systems (TECS), v.2 n.1, p.74-97, February
Korst , Joep Aerts, On the Guaranteed Throughput of Multizone Disks, IEEE Transactions on Computers, v.52 n.11, p.1407-1420, November
Kyung-Oh Lee , Jun-Ho Park , Yoon-Young Park, Striping and scheduling for large scale multimedia servers, Journal of Computer Science and Technology, v.19 n.6, p.885-895, November 2004
Nevzat Hurkan Balkir , Gultekin Ozsoyoglu, Delivering presentations from multimedia servers, The VLDB Journal The International Journal on Very Large Data Bases, v.7 n.4, p.294-307, December 1998 | performance modeling;multimedia;time-dependent audio and video data;scheduling;physical data organization;file systems;secondary storage |
627797 | Efficient Mining of Association Rules in Distributed Databases. | Many sequential algorithms have been proposed for the mining of association rules. However, very little work has been done in mining association rules in distributed databases. A direct application of sequential algorithms to distributed databases is not effective, because it requires a large amount of communication overhead. In this study, an efficient algorithm, DMA, is proposed. It generates a small number of candidate sets and requires only O(n) messages for support count exchange for each candidate set, where n is the number of sites in a distributed database. The algorithm has been implemented on an experimental test bed and its performance is studied. The results show that DMA has superior performance when compared with the direct application of a popular sequential algorithm in distributed databases. | Introduction
Database mining has recently attracted a tremendous amount of attention in database research because
of its applicability in many areas, including decision support, marketing strategy and financial
forecasting. The research community has observed that data mining, data warehousing
and data repositories are three new uses of database technology, which are considered important
areas in database research [20].
Many interesting and efficient data mining algorithms have been proposed (e.g., see [2, 3, 4, 5,
6, 7, 8, 10, 12, 13, 15, 16, 17, 19, 21]). (The research of the authors was supported in part by RGC (the Hong Kong Research Grants Council) grant 338/065/0026.)
These database-oriented mining algorithms can be classified
into two categories: concept generalization-based discovery and discovery at the primitive concept
levels. The former relies on the generalization of concepts (attribute values) stored in databases.
One such example is the DBMiner system [7, 12]. The latter discovers strong regularities (rules)
from the database without concept generalization. Association rules [4, 6, 16] are an important type
of rule in the latter approach.
Most of the algorithms for mining association rules proposed so far are sequential algorithms.
An algorithm PDM has been proposed recently for parallel mining of association rules [17]. It is
an adaptation of the DHP algorithm in the parallel environment [16]. Another algorithm Count
Distribution (CD), which is an adaptation of the Apriori algorithm, has also been proposed for the
same parallel mining environment with an implementation on the IBM SP2 [5]. To the best of
our knowledge, very little work has been done on the mining of association rules in a distributed
database environment. In this paper, we have developed a distributed algorithm DMA (Distributed
Mining of Association rules), which can be used to solve this problem.
The distributed database in our model is a horizontally partitioned database. The database
schemas of all the partitions are the same, i.e., their records are transactions on the same set of
items. (DMA can be modified for the case in which the schemas at different sites are not completely
identical.) Many distributed databases are horizontally partitioned. For example, a retail chain
may have several regional data centers, each manages the transaction records in its own region. It
is important to mine the association rules based on data from all the centers. Distributed mining
can be applied to many applications which have their data sources located at different places.
In the sequential environment, many algorithms have been proposed for mining association rules.
The most popular are the Apriori, DHP, and PARTITION algorithms [6, 16, 19]. A candidate set
generation function, Apriori-gen, is adopted in the Apriori algorithm, which provides an efficient
method for generating candidate sets. DHP applies a hashing technique to prune away some size-2
candidate sets to improve its efficiency. PARTITION divides the database into small partitions such
that they can be processed independently and efficiently in memory to find their large itemsets. The
large itemsets from the partitions are then combined to form a set of candidate sets. Following that,
only one scan of the database is required to find the large itemsets among the candidates.
In the parallel environment, the PDM algorithm proposed in [17] tries to parallelize the DHP
algorithm. Each node computes the globally large itemsets by exchanging the support counts (or
counts, as they are referred to in some of the literature) of the candidate sets. In order to apply the hashing technique,
all nodes have to broadcast the hashing result, which causes a huge amount of communication. In
[17], a technique has been proposed to decrease the number of messages. Among all the hash
buckets, only those in which the total count is larger than a threshold are selected for bucket
count exchange, so that not all buckets have to be broadcast. After a node receives these partial
counts for the selected buckets, it polls the other sites to get the total counts. However, there are
two unfavourable features in this proposal. Firstly, the reduction of candidate sets is only done in
the second iteration; the number of candidate sets in some other iterations could also be quite
large. Secondly, to find the large candidate sets, O(n^2) messages are required for support count
exchange for each candidate set, where n is the number of nodes.
Another algorithm proposed for parallel mining of association rules is the CD algorithm [5]. It
is an adaptation of the Apriori algorithm in the parallel case. At each iteration, it generates the
candidate sets at every site by applying the Apriori-gen function on the set of large itemsets found
at the previous iteration. Every site then computes the local support counts of all these candidate
sets and broadcasts them to all the other sites. Subsequently, all the sites can find the globally large
itemsets for that iteration, and then proceed to the next iteration. This algorithm has a simple
communication scheme for count exchange. However, it also has the similar problems of higher
number of candidate sets and larger amount of communication overhead.
The efficiency of the algorithm DMA that we have developed is attributed mainly to the following
two features.
1. Both Apriori and DHP generate the candidate sets by applying the Apriori-gen function on
the large itemsets found in the previous iteration. CD and PDM use the same technique in the
parallel environment. DMA uses a new technique to generate a much smaller set of candidate
sets than either Apriori or DHP. (This will be explained in Section 3.2.)
2. In DMA, to determine whether a candidate set is large, only O(n) messages are needed for
support count exchange. This is much less than a straight adaptation of Apriori, which
requires O(n^2) messages for support count exchange.
A distributed database has an intrinsic data skewness property. The distribution of the itemsets
in different partitions is not identical, and many items occur more frequently in some partitions
than in others. For example, in a distributed database of a national supermarket chain, it is
expected that the consumers' purchasing patterns in New York City will be quite different from those in
Los Angeles. As a result, many itemsets may be locally large at some sites but not necessarily at the
other sites. This skewness property poses a new requirement on the design of the mining algorithm.
Furthermore, DMA can be applied to the mining of association rules in a large centralized
database by partitioning the database across the nodes of a distributed system. This is particularly
useful if the data set is too large for sequential mining.
Extensive experiments have been conducted to study the performance of DMA and compare
it against the algorithm Count Distribution (CD), which is a direct application of the Apriori
algorithm to distributed databases. The remainder of the paper is organized as follows. A brief
summary of mining association rules in the sequential environment will be discussed in Section 2.
In Section 3, the problem of mining association rules in a distributed database is defined and some
important results are discussed. The algorithm DMA is presented in Section 4. A performance
study is discussed in Section 5. Some discussion and conclusions are presented in Sections 6 and 7.
2 Sequential Mining of Association Rules
2.1 Association rules
Let I = {i_1, i_2, ..., i_m} be a set of items. Let DB be a database of transactions, where each
transaction T is a set of items such that T ⊆ I. Given an itemset X ⊆ I, a transaction T contains
X if and only if X ⊆ T. An association rule is an implication of the form X ⇒ Y, where X ⊆ I,
Y ⊆ I, and X ∩ Y = ∅. The association rule X ⇒ Y holds in DB with confidence c if c% of the
transactions in DB that contain X also contain Y. The association rule X ⇒ Y has support s in
DB if s% of the transactions in DB contain X ∪ Y.
Given a minimum confidence threshold minconf and a minimum support threshold minsup,
the problem of mining association rules is to find all the association rules whose confidence and
support are larger than the respective thresholds. We also call an association rule a strong rule to
distinguish it from the weak ones, i.e., those that do not meet the thresholds [13].
For an itemset X, its support is defined similarly as the percentage of transactions in DB which
contain X. We also use X.sup to denote its support count, which is the number of transactions
in DB containing X. Given a minimum support threshold minsup, an itemset X is large if its
support is no less than minsup. Moreover, for presentation purposes, we will call an itemset of size k
a k-itemset. It has been shown that the problem of mining association rules can be reduced to two
subproblems [4].
1. Find all large itemsets for a pre-determined minimum support.
2. Generate the association rules from the large itemsets found.
The most crucial factor affecting the performance of mining association rules is finding an efficient
method for the first subproblem [6].
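To make the above definitions concrete, here is a small illustrative Python sketch (not taken from any of the cited systems) that computes the support and confidence of a rule X ⇒ Y over a toy list of transactions; the transactions are invented for illustration only.

# Toy illustration of support and confidence (transactions are invented).
def support_count(itemset, transactions):
    # Number of transactions containing every item of `itemset`.
    return sum(1 for t in transactions if itemset <= t)

def rule_support_and_confidence(X, Y, transactions):
    n = len(transactions)
    sup_xy = support_count(X | Y, transactions)
    sup_x = support_count(X, transactions)
    support = sup_xy / n
    confidence = sup_xy / sup_x if sup_x else 0.0
    return support, confidence

transactions = [{"A", "B", "C"}, {"A", "B"}, {"B", "C"}, {"A", "C", "D"}]
print(rule_support_and_confidence({"A"}, {"B"}, transactions))  # (0.5, 0.666...)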
2.2 Apriori algorithm
The Apriori algorithm is one of the most popular algorithms for the mining of association rules in a
centralized database. The main idea of Apriori is outlined in the following [6].
1. The large itemsets are computed through iterations. In each iteration, the database is scanned
once and all large itemsets of the same size are computed. The large itemsets are computed
in the ascending order of their sizes.
2. In the first iteration, the size-1 large itemsets are computed by scanning the database once.
Subsequently, in the k-th iteration (k > 1), a set of candidate sets C_k is created by applying
the candidate set generating function Apriori-gen on L_{k-1}, where L_{k-1} is the set of all large
(k-1)-itemsets found in iteration k-1. Apriori-gen generates only those k-itemsets whose
every (k-1)-itemset subset is in L_{k-1}. The support counts of the candidate itemsets in C_k
are then computed by scanning the database once, and the size-k large itemsets are extracted
from the candidates. (See the sketch following this list.)
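The following Python sketch is one possible reading of the Apriori-gen idea summarized above, not the authors' implementation: size-k candidates are formed by joining large (k-1)-itemsets and pruning any candidate that has a (k-1)-subset outside L_{k-1}.

from itertools import combinations

def apriori_gen(large_prev):
    # large_prev: set of frozensets, all of size k-1; returns the size-k candidates.
    k = len(next(iter(large_prev))) + 1 if large_prev else 0
    candidates = set()
    for a in large_prev:
        for b in large_prev:
            union = a | b
            if len(union) == k:
                candidates.add(union)
    # Prune candidates having a (k-1)-subset that is not large.
    return {c for c in candidates
            if all(frozenset(s) in large_prev for s in combinations(c, k - 1))}

L1 = {frozenset({x}) for x in "ABCDEFG"}
print(len(apriori_gen(L1)))   # 21 size-2 candidates from 7 large 1-itemsets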
Two interesting extensions of the Apriori algorithm are the DHP [16] and PARTITION algorithms
[19]. In the first iteration, while it is computing the support counts of the size-1 itemsets,
DHP stores the support counts of the size-2 candidate itemsets in a hash table. Upper bounds of
the support counts of the size-2 candidates can be deduced from the hash table and are used to
prune away some size-2 candidates in the second iteration. As a result of the hashing and pruning,
the cost of computing the support counts of the size-2 candidate sets is reduced substantially in
DHP.
The PARTITION algorithm divides the database into partitions such that each of them can be
processed efficiently in memory to find the itemsets which are large in it. The set consisting of all
these itemsets becomes a set of candidates for finding the large itemsets in the database. The advantage
of the PARTITION algorithm is that only one scan of the database is required after the candidate
sets are found in the partitions.
3 Mining of Association Rules in Distributed Databases
3.1 Problem Description
Let DB be a partitioned database located at n sites S_1, S_2, ..., S_n. The database partitions at
these sites are {DB^1, DB^2, ..., DB^n}. (In the following, we will adopt the convention of attaching
a superscript i to a notation to denote the corresponding distributed notation for site S_i.)
Let the sizes of DB and of the partitions DB^i be D and D^i, respectively. For a given itemset X,
let X.sup and X.sup^i be the respective support counts of X in DB and DB^i. We will call X.sup
the global support count and X.sup^i the local support count of X at site S_i. For a given minimum
support s, X is globally large if X.sup ≥ s × D; correspondingly, X is locally large at site S_i if
X.sup^i ≥ s × D^i. In the following, we will use L to denote all the globally large itemsets in DB
and L_k to denote all globally large k-itemsets in L. The problem of mining association rules in a
distributed database DB can be reduced to finding all globally large itemsets.
3.2 Generate a Smaller Set of Candidate Sets
Before we discuss how to generate a small set of candidate sets, we first present a few interesting
and useful observations. First of all, we have found that many candidate sets generated by applying
the Apriori-gen function are not needed in the search of large itemsets. In fact, there is a natural
and effective method for every site to generate its own set of candidate sets, which is typically much
smaller than the set of all the candidate sets. Following that, every site only needs to find the
large itemsets among these candidate sets. By using this technique, we have achieved an effective
division of the mining task amongst the sites in the database. In the following, several lemmas and
a theorem are presented to establish the above observations.
Lemma 1 If an itemset X is locally large at a site S_i, then all its subsets are also locally large at site S_i.
Proof. This follows from the definition of locally large. □
A similar result to Lemma 1 for centralized databases first appeared in [4].
Lemma 2 If an itemset X is globally large, then there exists a site S_i, (1 ≤ i ≤ n), such that X
and all its subsets are locally large at site S_i.
Proof. If X.sup^i < s × D^i for all i = 1, ..., n, then X.sup = Σ_{i=1}^n X.sup^i < s × Σ_{i=1}^n D^i = s × D, and X cannot be globally large.
Therefore, X must be locally large at some site S_i. It follows from Lemma 1 that all the subsets of
X must be locally large at S_i. □
For a site S i , if an itemset X is both locally large at site S i and globally large, then we say that
X is heavy at site S_i. We use HL^i to denote the set of heavy itemsets at site S_i, and HL^i_k to denote
the set of heavy k-itemsets at site S_i. In DMA, the heavy itemsets at each site play an important
role in the generation of candidate sets.
Lemma 3 If an itemset X is globally large, then there exists a site S_i, (1 ≤ i ≤ n), such that X is
heavy at site S_i.
Proof. Since X is globally large, it follows from Lemma 2 that X must be locally large at some site
S_i. Being both locally large at S_i and globally large, X is heavy at site S_i. □
Lemma 4 If an itemset X is heavy at a site S_i, (1 ≤ i ≤ n), then all its subsets are also heavy at site S_i.
Proof. If X is heavy at site S_i, then it must be globally large; therefore, all its subsets are globally
large. Moreover, since X is locally large at site S_i, it follows from Lemma 1 that all the subsets of
X must be locally large at site S_i. Hence, all its subsets are heavy at site S_i. □
Lemma 4 is a very interesting property: it shows that the heavy itemsets at each site have a
monotonic subset relationship among them. This relationship also exists among the large itemsets
in the centralized case, and it is a necessary condition for the large itemsets to be computable
iteratively.
Lemma 5 If X ∈ L_k, (i.e., X is a globally large k-itemset), then there exists a site S_i, (1 ≤ i ≤ n),
such that X and all its size-(k-1) subsets are heavy at site S_i.
Proof. This follows from Lemmas 3 and 4. □
Lemma 5 is equivalent to the combination of Lemma 3 and Lemma 4. It is a basis to design an
effective method to generate a smaller set of candidate sets in the distributed environment.
In general, in a straightforward adaptation of Apriori, in the k-th iteration, the set of candidate
sets would be generated by applying the Apriori-gen function on L_{k-1}. We denote this set of
candidate sets by CA_k, (which stands for size-k candidate sets from Apriori). In other words,
CA_k = Apriori-gen(L_{k-1}).
At each site S_i, let CH^i_k be the set of candidate sets generated by applying Apriori-gen on
HL^i_{k-1}, (CH stands for candidate sets generated from heavy itemsets). Hence CH^i_k is generated from
HL^i_{k-1}, which is only a subset of L_{k-1}.
According to Lemma 5, for every large itemset X ∈ L_k, there exists a site S_i such that all the
size-(k-1) subsets of X are heavy at site S_i; hence X ∈ CH^i_k for some site S_i. Therefore
L_k ⊆ ∪_{i=1}^n CH^i_k. We use CH_k to denote the set ∪_{i=1}^n CH^i_k.
Theorem 1 For every k > 1, the set of all large k-itemsets L_k is a subset of CH_k = ∪_{i=1}^n CH^i_k.
Hence CH_k is a set of candidate sets for the size-k large itemsets.
Proof. The proof follows from Lemma 5 and the above discussion. □
Since every HL^i_{k-1} in Theorem 1 is a subset of L_{k-1}, the number of candidate sets in CH_k is
in general smaller than that in CA_k. In DMA, we use the result in Theorem 1 to generate a set of
candidate sets CH^i_k for each site S_i in each iteration. It can be seen that this set of candidate sets
is typically much smaller than that obtained by a direct application of Apriori-gen on L_{k-1}.
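Under Theorem 1, each site S_i only needs to apply Apriori-gen to its own heavy itemsets HL^i_{k-1}. The sketch below is an illustrative reading of this with invented helper names (it reuses the same Apriori-gen idea as in the earlier sketch); with the data of Example 1 below it yields 8 candidates in CH_2 instead of 21 in CA_2.

from itertools import combinations

def apriori_gen(prev):  # prev: set of frozensets, all of size k-1
    k = len(next(iter(prev))) + 1
    cands = {a | b for a in prev for b in prev if len(a | b) == k}
    return {c for c in cands if all(frozenset(s) in prev for s in combinations(c, k - 1))}

def candidates_from_heavy(heavy_prev_by_site):
    # heavy_prev_by_site maps a site id to HL^i_{k-1}; returns CH^i_k per site and CH_k.
    ch_by_site = {i: apriori_gen(hl) for i, hl in heavy_prev_by_site.items()}
    return ch_by_site, set().union(*ch_by_site.values())

HL = {1: {frozenset(x) for x in "ABC"},
      2: {frozenset(x) for x in "BCD"},
      3: {frozenset(x) for x in "EFG"}}
ch_by_site, CH2 = candidates_from_heavy(HL)
print(sorted("".join(sorted(c)) for c in CH2))  # 8 candidates, vs. 21 from Apriori-gen(L_1)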
In the following, Example 1 is used to illustrate the reduction of candidate sets by using Theorem
1.
Example 1 Assume there are 3 sites in a database DB with partitions DB^1, DB^2 and DB^3.
After the first iteration, suppose the set of large 1-itemsets is L_1 = {A, B, C, D, E, F, G}, in which
A, B, C are locally large at site S_1, B, C, D are locally large at site S_2, and E, F, G are locally large
at site S_3. Therefore, HL^1_1 = {A, B, C}, HL^2_1 = {B, C, D}, and HL^3_1 = {E, F, G}.
It follows from Theorem 1 that the set of size-2 candidate sets at site S_1 is CH^1_2 = Apriori-gen(HL^1_1) =
{AB, BC, AC}. Similarly, CH^2_2 = {BC, BD, CD} and CH^3_2 = {EF, FG, EG}. Hence, the set of candidate sets
for large 2-itemsets is CH_2 = CH^1_2 ∪ CH^2_2 ∪ CH^3_2, and it only has 8 candidates.
However, if Apriori-gen is applied to L_1, the set of candidate sets CA_2 = Apriori-gen(L_1) would
have 21 candidates. This shows that the technique in Theorem 1 is very effective in reducing the
candidate sets.
3.3 Local Pruning of Candidate Sets
In the previous subsection, we have shown that the set CH_k is typically a much smaller set of
candidate sets than CA_k. To find the globally large itemsets, subsequent to the generation of CH_k,
support count exchange should be done. However, we have observed that some candidate sets in
CH_k can be pruned away by using some local information before the count exchange starts.
From Lemma 5, if X is a globally large k-itemset, then there must exist a site S_i such that
X ∈ CH^i_k and X is heavy at site S_i. As a consequence, X must be locally large at site S_i. Therefore,
a site S_i can prune away those candidates in CH^i_k which are not locally large at S_i. In other words,
to compute all the large k-itemsets, at each site S_i, DMA can restrict its search domain to the
sets in CH^i_k which are locally large at site S_i. For convenience, we use LL^i_k to denote those
candidate sets in CH^i_k which are locally large at site S_i.
Following from the above discussion, in every iteration (loop counter = k), DMA computes the
heavy k-itemsets at each site S_i according to the following procedure.
1. Candidate Sets Generation: generate the candidate sets CH^i_k = Apriori-gen(HL^i_{k-1}), based on
the heavy itemsets found at site S_i in the (k-1)-st iteration. (By doing so, each site actually is
responsible for generating its own set of candidate sets, and hence for computing its own set of
large itemsets.)
2. Local Partition Scanning: for each X ∈ CH^i_k, scan the partition DB^i to compute the local
support count X.sup^i.
3. Local Pruning: for each X ∈ CH^i_k, if X is not locally large at site S_i, then it is pruned away;
the remaining candidate sets are stored in LL^i_k. (The above pruning only removes X from
the candidate set at site S_i; X could still be a candidate set at some other site. A small sketch
of steps 2 and 3 is given after this list.)
4. Support Count Exchange: broadcast the candidate sets in LL^i_k to the other sites to collect support
counts; compute their global support counts and find all the heavy k-itemsets at site S_i.
(A site S_j (j ≠ i) which has received a request from S_i for support counts does not need to scan
its partition again to compute the support counts; the counts can be computed in advance
in Step 2. A detailed discussion of this is given in Section 4.1.)
5. Broadcast Mining Result: broadcast the heavy k-itemsets found to all the other sites.
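A minimal sketch of the local partition scanning and local pruning steps (steps 2 and 3 above), assuming the partition fits in memory as a list of item sets; the function names and toy data are illustrative, not the authors' code.

def local_support_counts(candidates, partition):
    # partition: list of transactions (sets of items) stored at this site.
    counts = {c: 0 for c in candidates}
    for t in partition:                      # one scan of the local partition
        for c in candidates:
            if c <= t:
                counts[c] += 1
    return counts

def locally_large(counts, s, partition_size):
    # Keep only candidates whose local support count reaches s * D^i.
    return {c: n for c, n in counts.items() if n >= s * partition_size}

partition = [{"A", "B"}, {"B", "C"}, {"A", "B", "C"}, {"C"}]
cands = {frozenset({"A", "B"}), frozenset({"A", "C"}), frozenset({"B", "C"})}
counts = local_support_counts(cands, partition)
print(locally_large(counts, 0.5, len(partition)))   # AB and BC survive, AC is pruned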
In the following, we extend Example 1 to Example 2 to illustrate the execution of the above
procedure. Before that, for clarity, we list the notation used so far in our discussion in Table 1.
D         The number of transactions in database DB
s         The support threshold minsup
L_k       The set of globally large k-itemsets
CA_k      The set of candidate sets generated from L_{k-1}
X.sup     The global support count of an itemset X
D^i       The number of transactions in the partition DB^i
HL^i_k    The set of heavy k-itemsets at site S_i
CH^i_k    The set of candidate sets generated from HL^i_{k-1}
LL^i_k    The set of locally large k-itemsets in CH^i_k
X.sup^i   The local support count of an itemset X at site S_i
Table 1: Notation Table.
Example 2 In Example 1, assume the database has 150 transactions and each one of the 3
partitions has 50 transactions; also assume a given support threshold s. As has been illustrated
in Example 1, in the second iteration, the candidate sets generated at site S_1 are CH^1_2 = {AB, AC, BC},
at site S_2 are CH^2_2 = {BC, BD, CD}, and at site S_3 are CH^3_2 = {EF, EG, FG}.
In order to compute the large 2-itemsets, DMA first computes the local support counts at each
site. The result is recorded in Table 2. The last three rows are the local support counts of the
candidate sets at the corresponding sites; for example, the candidate sets at site S_1 are listed in
the first column, and their local support counts are listed in the second column.
Table 2: Locally Large Itemsets.
From Table 2, it can be seen that AC.sup^1 < s × D^1; therefore, AC is not locally large.
Hence, the candidate set AC is pruned away at site S_1. On the other hand, both AB and BC
have enough local support counts and they survive the local pruning. Hence LL^1_2 = {AB, BC}.
Similarly, BD is pruned away at site S_2 and LL^2_2 = {BC, CD}. The only remaining candidate set
at site S_3 is EF, i.e., LL^3_2 = {EF}. After the local pruning, the number of size-2 candidate sets
has been reduced to half of the original size.
Once the local pruning is completed, each site broadcasts messages containing all the remaining
candidate sets to the other sites to collect their support counts. The result of this support count
exchange is recorded in Table 3. (The columns of Table 3 list, for each locally large candidate set,
the sites from which the request is broadcast, the local support counts X.sup^1, X.sup^2 and X.sup^3,
and the global support count X.sup.)
Table 3: Globally Large Itemsets.
The request for support count for AB is broadcast from S 1 to site S 2 and S 3 , and the counts
sent back are recorded at site S 1 as in the second row of Table 3. The other rows record similar
count exchange activities at the other sites. At the end of the iteration, site S_1 finds out that only
BC is heavy, because AB.sup < s × D while BC.sup ≥ s × D. Hence the
heavy 2-itemset at site S_1 is HL^1_2 = {BC}. Similarly, HL^2_2 = {BC, CD} and HL^3_2 = {EF}. After
the broadcast of the heavy itemsets, all sites return the large 2-itemsets {BC, CD, EF}.
In terms of message communication, in this example, most of the candidate sets are locally large
at one site. For each one of them, only one broadcast and receive are needed. However, for the
candidate set BC, messages are broadcast from both S 1 and S 2 , which is not as efficient as in the
single broadcast case. In Section 3.4, an optimization technique to eliminate this duplication will
be discussed.
3.4 Message Optimization for Finding Large Itemsets
In a straight adaptation of the sequential Apriori algorithm, not only is the number of candidate sets
generated larger, but the number of messages for count exchange for each candidate set is also
larger. This is because every candidate set is broadcast from all the sites, which requires O(n^2)
messages in total for each candidate set, where n is the number of partitions.
In DMA, if a candidate set X is locally large at a site S i , S i only needs O(n) messages to collect
all the support counts for X. In general, very few candidate sets are locally large at all the sites.
Because of the data skewness property, the percentage of overlap among the locally large candidate
sets from different sites should be small. Therefore, in most cases, DMA requires far fewer than
O(n^2) messages for each candidate set.
To ensure that DMA requires only O(n) messages for every candidate set in all cases, an optimization
technique has been introduced. To achieve a single broadcast, DMA uses a simple
assignment function, which could be a hash function, to determine a polling site for each candidate
set.
For each candidate set X, its polling site is responsible for broadcasting the polling request,
collecting the support counts, and determining whether X is large. Since there is only one polling
site for each candidate set X, the number of messages required for count exchange for X is O(n).
In the k-th iteration, after the local pruning phase has been completed at a site S i , DMA uses
the following procedure to do the polling.
1. Candidates Sent to Polling Sites: S_i acts as a home site of its candidate sets; for every polling
site S_j, S_i finds all the candidate sets in LL^i_k whose polling site is S_j and stores them in
LL^{i,j}_k, (i.e., candidates are divided into groups according to their polling sites); the local
support counts of the candidate sets are also stored in the corresponding set LL^{i,j}_k; S_i then sends each
LL^{i,j}_k to the corresponding polling site S_j. (A sketch of one possible assignment is given after this list.)
2. Polling Site Sends Polling Requests: S_i acts as a polling site; S_i receives all LL^{j,i}_k sent to it from
the other sites; for every candidate set X received, S_i finds the list of originating sites from
which X has been sent; S_i then broadcasts the polling requests to the other sites not on the
list to collect the support counts.
3. Remote Site Replies to Polling Requests: S_i acts as a remote site to reply to the polling requests sent to it;
for every polling request LL^{p,i}_k from a polling site S_p, S_i sends the local support counts of the
candidates in LL^{p,i}_k back to S_p. (There is no need to scan the partition DB^i again to find the
local support counts. They are found already during the local pruning. Please see Section 4.1 for
details.)
4. Polling Site Computes Heavy Itemsets: S_i acts as a polling site to compute the heavy itemsets;
it receives the support counts from the other sites; it computes the global support counts for the
candidates assigned to it and finds the heavy itemsets; eventually, S_i broadcasts the heavy itemsets
together with their global support counts to all the sites.
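The polling-site assignment only has to be a deterministic function that all sites agree on; a hash of the candidate itemset, as mentioned above, is one possibility. The sketch below is one such illustrative choice (md5-based so that the assignment is identical across processes); it is not necessarily the function used in the paper's implementation.

import hashlib

def polling_site(itemset, n_sites):
    # Deterministic assignment of a candidate set to one of the n sites.
    # md5 is used instead of Python's built-in hash, which is salted per process.
    key = ",".join(sorted(itemset)).encode()
    return int(hashlib.md5(key).hexdigest(), 16) % n_sites

def group_by_polling_site(locally_large, n_sites):
    # Split LL^i_k into the groups LL^{i,j}_k according to the polling site j of each candidate.
    groups = {j: {} for j in range(n_sites)}
    for itemset, local_count in locally_large.items():
        groups[polling_site(itemset, n_sites)][itemset] = local_count
    return groups

LL = {frozenset({"A", "B"}): 2, frozenset({"B", "C"}): 2}
print(group_by_polling_site(LL, 3))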
Example 3 In Example 2, assume that S_1 is assigned as the polling site of AB and BC, S_2 is
assigned as the polling site of CD, and S_3 is assigned as the polling site of EF.
Following from the assignment, site S_1 is responsible for the polling of AB and BC. In the
simple case of AB, S_1 sends polling requests to S_2 and S_3 to collect the support counts. As for BC,
since it is locally large at both S_1 and S_2, the pair ⟨BC, BC.sup^2⟩ is sent to S_1 by S_2. After S_1
receives the message, it sends a polling request to the remaining site S_3. Once the support count
is received from S_3, S_1 finds out that BC is globally large, and hence BC
is a heavy itemset at S_1. By using a polling site, DMA has eliminated the double polling messages
for BC.
4 Algorithm for Distributed Mining of Association Rules
In this section, we present the DMA algorithm in detail based on the above discussion.
Before the description of the algorithm, we will discuss a technique for computing the local support
counts of all the candidate itemsets at the different sites by performing only a single scan on each
partition.
4.1 Optimizing Partition Scanning for Count Exchanges
At each site S i , DMA has to find two sets of support counts in order to do local pruning and count
exchange. The first set is the local support counts of all the candidate sets generated at site S_i.
(These candidate sets are the sets in CH^i_k described in Theorem 1.) A hash tree can be used to store
the support counts of these candidate sets [6]. A scan on the partition DB^i is needed to compute
the counts to store in the hash tree. On the other hand, in order to answer the polling requests
from the other sites, a second set of support counts of the candidate sets generated at the other
sites is needed. If these counts are computed after the requests are received, a second scan on the
partition is unavoidable.
In order to avoid doing two scans, DMA is required to find the two sets of support counts by
one scan on the partition and store the counts on the same hash tree. This is possible because the
heavy sets for candidate set generation are available to all the sites at the end of each iteration.
According to Theorem 1, at a site S_i, the set of candidate sets generated in the k-th iteration is
CH^i_k = Apriori-gen(HL^i_{k-1}). On the other hand, the set generated at any other site S_j is
CH^j_k = Apriori-gen(HL^j_{k-1}). Since the heavy itemsets HL^j_{k-1} of all the sites are available at S_i, S_i can
compute all these candidate sets and put them in the same hash tree before the scan for their local
support counts starts. In other words, every site only needs to scan its partition once to find the
local support counts of the itemsets in CH_k = ∪_{j=1}^n CH^j_k. With this technique, the
two sets of support counts required for local pruning and count exchange can be found in a single
scan of the partition. Therefore, the number of scans in DMA is minimized and is comparable to
that in the sequential case.
Furthermore, since every site will have the same set of candidate sets CH_k, there is no need to
send the itemset names in a polling request; only their positions in the ordered list of the itemsets
in CH_k are required. This optimizes the message size needed for count exchange.
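A sketch of the single-scan idea of this subsection: all candidates in CH_k, whether generated locally or derivable from the other sites' broadcast heavy itemsets, are inserted into one counting structure before the partition is scanned once. A Python dictionary stands in for the hash tree used in the paper, and the toy data are invented.

def count_all_candidates(ch_by_site, partition):
    # ch_by_site: {site: CH^j_k}; one dictionary replaces the hash tree T^i_k.
    all_candidates = set().union(*ch_by_site.values())
    counts = {c: 0 for c in all_candidates}
    for t in partition:                      # single scan of DB^i
        for c in all_candidates:
            if c <= t:
                counts[c] += 1
    return counts   # serves both local pruning and replies to polling requests

partition = [{"A", "B", "C"}, {"B", "C", "D"}, {"E", "F"}]
ch = {1: {frozenset({"A", "B"})}, 2: {frozenset({"C", "D"})}, 3: {frozenset({"E", "F"})}}
print(count_all_candidates(ch, partition))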
4.2 The DMA algorithm
In this section, we present the DMA algorithm in detail.
Algorithm 1 DMA: Distributed Mining of Association rules algorithm
Input: (1) DB^i: the database partition at each site (its size is equal to D^i); (2) s: the minimum
support threshold; both submitted at each site S_i, (1 ≤ i ≤ n);
Output: L: the set of all large itemsets in DB, returned at every site;
Method: iterate the following program fragment distributively at each site S_i, starting from k = 1,
where k is the iteration loop counter; the algorithm terminates when either the set L_k returned is
empty or the set of candidate sets CH_k is empty.
if k = 1 then
    scan DB^i to compute T^i_1
    /* T^i_1 is an array containing all size-1 itemsets in DB^i and */
    /* their local support counts at site S_i */
else {
    compute CH^i_k := Apriori-gen(HL^i_{k-1});   /* generate size-k candidate sets */
    scan DB^i to build the hash tree T^i_k
    /* T^i_k contains all candidate sets in CH_k = ∪_{j=1}^n CH^j_k and */
    /* their support counts at site S_i */
}
for all X ∈ T^i_k do
    if X.sup^i ≥ s × D^i then
        for j := 1 to n do
            if polling_site(X) = S_j then insert X, together with X.sup^i, into LL^{i,j}_k;
/* compute the locally large candidates and divide them according to their polling sites */
/* Send Candidates to Polling Sites */
for j := 1 to n, j ≠ i, do
    send LL^{i,j}_k to site S_j;
/* Receive Candidates as a Polling Site */
for j := 1 to n do {
    receive LL^{j,i}_k;
    for all X ∈ LL^{j,i}_k do {
        store X in LP^i_k;
        update X.large_sites in LP^i_k to record the sites at which X is locally large; } }
/* Send Polling Requests as a Polling Site to Collect Support Counts */
for all X ∈ LP^i_k do {
    broadcast polling requests for X to the sites S_j, where S_j ∉ X.large_sites;
    receive X.sup^j from the sites S_j, where S_j ∉ X.large_sites; }
/* Compute Global Support Counts and Heavy Itemsets */
for all X ∈ LP^i_k do {
    X.sup := Σ_{j=1}^n X.sup^j;
    if X.sup ≥ s × D then insert X into H^i_k; }
/* filter out the heavy k-itemsets */
broadcast H^i_k;
receive H^j_k from all other sites S_j, (j ≠ i);
return L_k := ∪_{j=1}^n H^j_k.
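As an illustrative, message-passing-free reading of Algorithm 1, the following single-process simulation strings the phases of one iteration together over in-memory partitions; actual message exchange (PVM in the paper's implementation) is replaced by direct dictionary look-ups, and the toy partitions and heavy itemsets are invented.

from itertools import combinations

def apriori_gen(prev, k):
    cands = {a | b for a in prev for b in prev if len(a | b) == k}
    return {c for c in cands if all(frozenset(s) in prev for s in combinations(c, k - 1))}

def count(candidates, partition):
    return {c: sum(1 for t in partition if c <= t) for c in candidates}

def dma_iteration(partitions, heavy_prev, s, k):
    # Simulates candidate generation, local pruning and "polling" for one iteration.
    D = sum(len(p) for p in partitions.values())
    ch = {i: apriori_gen(heavy_prev[i], k) for i in partitions}
    all_cands = set().union(*ch.values())
    counts = {i: count(all_cands, partitions[i]) for i in partitions}     # single scan per site
    ll = {i: {c for c in ch[i] if counts[i][c] >= s * len(partitions[i])} for i in partitions}
    heavy = {i: {c for c in ll[i] if sum(counts[j][c] for j in partitions) >= s * D}
             for i in partitions}
    return heavy, set().union(*heavy.values())

partitions = {1: [{"A", "B"}, {"B", "C"}],
              2: [{"B", "C"}, {"C", "D"}],
              3: [{"E", "F"}, {"E", "F"}]}
heavy_prev = {1: {frozenset("A"), frozenset("B"), frozenset("C")},
              2: {frozenset("B"), frozenset("C"), frozenset("D")},
              3: {frozenset("E"), frozenset("F")}}
print(dma_iteration(partitions, heavy_prev, 1/3, 2)[1])   # BC and EF are globally large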
5 Performance Study of DMA
We have done an in-depth performance study on DMA to confirm our analysis of its efficiency. DMA
is implemented on a shared-nothing distributed system by using PVM (Parallel Virtual Machine)
[11]. A 10Mb LAN is used to connect six RS/6000 workstations running the AIX system to perform
the study. The database in the experiment is composed of synthetic data.
In order to study the performance of DMA, we have also implemented the algorithm CD in our
test bed. In each iteration, CD generates the candidate sets at every site by applying the Apriori-
gen function on the set of large itemsets found in the previous iteration. Every site computes the
local support counts of all these candidate sets and broadcasts them to the other sites. All the sites
can then find the globally large itemsets for that iteration.
We have performed two experiments to compare the performance of DMA and CD. In the first
experiment, the test bed has a fixed number of sites. The aim is to perform the comparison with
respect to different support thresholds and database sizes. In the second experiment, the threshold
and database size are fixed, and the performance of the two algorithms are compared with respect
to different number of sites. The result of the first experiment is described in detail in Section 5.1,
and those of the second experiment are presented in Section 5.2.
The databases used in our experiments are synthetic data generated using the same techniques
introduced in [6, 16]. The parameters used are similar to those in [16]. Table 4 is a list of the
parameters and their values used in our synthetic databases. Readers not familiar with these
parameters can refer to [6, 16]. In the following, we use the notation Tx.Iy.Dm to denote a database
in which the average transaction size is x, the mean size of the maximal potentially large itemsets is y,
and the number of transactions is m.
5.1 Performance Comparison with Different Thresholds and Database
Sizes
In the first experiment, the test bed consists of three sites. The purpose of this experiment is to
compare the performance between DMA and CD with respect to different thresholds and database
Parameter   Interpretation                                        Value
D           The number of transactions in database DB             (see text)
|T|         Average size of the transactions                      10
|I|         Mean size of the maximal potentially large itemsets   4
            Number of potentially large itemsets                  2000
N           Number of items                                       1000
c_r         Correlation level                                     0.5
            Multiplying factor                                    1260 - 2400
Table 4: Parameter Table.
sizes. Each site has its own local disk, and its partition is loaded on its local disk before the
experiments start.
The three partitions are generated separately using the parameters and the values in Table 4. In
order to control the skewness of the partitions, two more control parameters are introduced. These
two parameters are the primary range r_p and the secondary range r_s. The primary range is an interval of
items, and the secondary range is a sub-interval of the primary range. If the items range from 1
to 1000, a possible pair of primary and secondary ranges could be, for instance, r_p = [1, 700] and r_s = [1, 400].
As described in [16], itemsets are generated as groups of similar itemsets. The size of each group
is controlled by the clustering size S_q, and the sizes of the itemsets follow a Poisson distribution. In our
synthesizing model, the first itemset in a group is picked randomly from the primary range r_p, and
the other itemsets in the group contain two parts, the head and the tail. The head is a random
extraction from the first itemset that has been generated. If the head cannot fill up the itemset
size, then the tail is picked randomly from the secondary range r_s. By doing this, most itemsets
generated are within the primary range, with some clustering in the secondary range. Therefore,
we can generate databases that have certain skewness towards the secondary range.
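A heavily simplified sketch of the head/tail generation just described; the ranges, group size and length distribution below are placeholders (a fixed mean with a small jitter stands in for the Poisson distribution of [16]), and the real generator has many more controls (correlation, corruption, weights).

import random

rng = random.Random(0)

def make_group(primary, secondary, group_size, mean_len):
    # Itemset sizes should follow a Poisson distribution as in [16]; a fixed
    # mean with a small jitter is used here purely as a placeholder.
    def length():
        return max(1, mean_len + rng.randint(-1, 1))
    first = rng.sample(range(primary[0], primary[1] + 1), length())
    group = [set(first)]
    for _ in range(group_size - 1):
        target = length()
        head_size = rng.randint(1, min(target, len(first)))
        items = set(rng.sample(first, head_size))        # head: extracted from the first itemset
        while len(items) < target:                       # tail: picked from the secondary range
            items.add(rng.randint(secondary[0], secondary[1]))
        group.append(items)
    return group

print(make_group(primary=(1, 700), secondary=(1, 400), group_size=4, mean_len=4))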
The data skewness of a distributed database can be controlled by using different primary and
secondary ranges for different partitions. In Table 5, the primary and secondary ranges of the
three partitions in the first experiment are listed. The first two partitions are skewed towards
the ranges [1, 700] and [300, 1000], respectively. The third partition DB^3 is generated with two
clustering ranges. Two disjoint pools of large itemsets are used in synthesizing DB^3. The first one
is from the range pair [1, 550] and [1, 400], while the second one is from the range pair [450, 1000]
and [600, 1000]. Half of the transactions are picked from the first pool, and the other half from the
second pool. Together, these three partitions exhibit a certain degree of skewness.
In this experiment, the sizes of the databases range from 100K to 900K transactions, and the
minimum support threshold ranges from 0.75% to 2%. While the number of candidate sets in DMA
is different at each site, the number in CD remains the same at all sites.
When comparing DMA against CD, we experienced, on average, a 65% reduction of the number
of candidate sets at every site. In Figure 1, the average number of candidate sets generated by
Table 5: Partition Primary and Secondary Ranges. (Columns: partition, primary range, secondary range.)
Figure 1: Candidate Sets Reduction. (Average number of candidate sets per site for DMA and CD, and the ratio DMA/CD, versus the minimum support; T10.I4.D500K.)
DMA and CD at each site for a database of size 500K transactions are plotted against the support
thresholds. DMA has far fewer candidate sets in all cases, and the difference increases as the
support decreases. For the same database, the ratios of the number of candidate sets between
DMA and CD are also presented in Figure 1. The figure shows that the reduction in the number
of candidate sets in DMA against CD is about 65% to 70%.
The above comparison is on the number of candidate sets per site. The result has a direct implication
for the reduction in the total number of messages required, because only one site will generate
messages for a candidate set to do polling.
The reduction in the total messages required is bigger than that in candidate sets when comparing
DMA against CD. We have experienced a reduction of about 90% in total message size in
all cases. In Figure 2, for the database of 500K, the total message sizes needed by DMA and CD
are plotted against the support thresholds. Moreover, the ratios of the total message sizes between
DMA and CD are presented in the same figure. The reduction is larger when the support threshold
is smaller (i.e., when there are more large itemsets). In the bar chart of Figure 2, it can be seen
that DMA requires 6% to 12% of the messages of CD.
We have also compared the execution time between DMA and CD. With the database of 500K,
DMA is about 7% to 25% faster than CD, depending on the support threshold. In Figure 3, the
execution times of DMA and CD are plotted against the thresholds for the 500K database. The
speed-up ratios are presented in the same figure as a bar chart. For some other database sizes in
this experiment, the best speed-up can reach about 55%.
Figure 2: Message Size Reduction. (Total size of messages transmitted by DMA and CD, and the ratio DMA/CD, versus the minimum support; T10.I4.D500K.)
Even though the speed-up in our experiment is substantial, it does not seem to be as significant
as the reduction in message size. The main reason is that the overhead in communication is relatively
small in our test bed. If DMA were running on a distributed database whose partitions are placed in
locations far apart, the speed-up would be more significant.
Figure 3: Execution Time Speed Up. (Execution time in seconds for DMA and CD, and the ratio CD/DMA, versus the minimum support; T10.I4.D500K.)
In this experiment, we have also compared DMA against CD on a series of 5 databases from 100K
to 900K transactions. In terms of candidate sets and total message size reduction, the improvement
in DMA against CD is very steady. In Figure 4, the average number of candidate sets per site in
DMA is compared to that in CD over all the 5 databases, for the threshold 0.75%. The ratios
between them are plotted in the figure. The result shows that the percentage of reduction is about
70% in all cases.
In Figure 5, the total size of message communication in DMA is compared to that in CD over
all the 5 databases, for the threshold 0.75%. The ratios between them are presented in the
figure, which shows that the reduction is between 88% and 89% in all cases.
In Figure 6, the execution time of DMA is compared to that of CD over all the 5 databases,
for the same threshold 0.75%. The ratios between them are plotted in the figure, and DMA is
about 18% to 55% faster than CD.
Figure 4: Candidate Sets Reduction. (Ratio DMA/CD of the average number of candidate sets per site versus database size; s = 0.75%.)
Figure 5: Message Size Reduction. (Ratio DMA/CD of the total message size transmitted versus database size, 100K to 900K; s = 0.75%.)
5.2 Performance Comparison with Different Number of Sites
In the second experiment, the test bed consists of six RS/6000 workstations. The synthetic database
is generated similarly to that in the first experiment. The aim of this experiment is to compare DMA
against CD when the number of sites changes. In the following, we will describe the result of a
comparison in which the number of sites varies from three to six. The size of the database is 200K
transactions, and it is partitioned equally across all the sites. The minimum support threshold is
3%.
Similar to the first experiment, we found a significant reduction in both the number of candidate
sets and the total message sizes in all the cases in which the number of sites is 3, 4, 5, and 6,
respectively. In Figure 7, the average number of candidate sets per site is compared between DMA
and CD. A reduction of about 75% to 90% is witnessed in DMA. In Figure 8, the ratios of the total
message sizes of the two algorithms are presented. DMA has about an 85% to 90% reduction in message
sizes in all the cases. Lastly, the execution time ratios are described in Figure 9; again, DMA is
shown to be about 25% to 35% faster than CD in all the cases.
In general, the performance of DMA depends on the distribution of the data across the partitions.
If the itemsets are distributed with a higher skewness among the partitions, the techniques of local
pruning and candidate set generation reduction in DMA would be more powerful. When comparing
Figure 6: Execution Time Speed Up. (Ratio CD/DMA of the execution time versus database size; s = 0.75%.)
Figure 7: Candidate Sets Reduction. (Ratio DMA/CD of the average number of candidate sets per site versus the number of nodes.)
the results of the above two different experiments, it can be observed that DMA performs better
when the number of nodes is higher. This could be the consequence of a higher data skewness due
to the increased number of partitions.
6 Discussion
The efficiency of DMA is attributed to three techniques: (1) candidate set generation, (2) local
pruning, and (3) message optimization. In the described DMA, only the local information available in
each partition is considered in the local pruning. Can we take advantage of the global information
available to do more pruning before the support count exchange starts? In fact, at the end of each
iteration, the polling site of a candidate set X not only knows the global support count of X but
also all the local support counts of X. The set of local support counts can be broadcast to all
the sites together with X at the end of each iteration. We now discuss an optimization technique
which makes use of this global information to prune candidate sets.
Figure 8: Message Size Reduction. (Ratio DMA/CD of the message size transmitted versus the number of nodes.)
Figure 9: Execution Time. (Ratio of the execution times of DMA and CD versus the number of nodes.)
If X is a k-itemset, then with respect to each partition DB^i, (1 ≤ i ≤ n), we use maxsup^i(X) to
denote the minimum value of the local support counts of all the size-(k-1) subsets of X, i.e.,
maxsup^i(X) = min{ Y.sup^i | Y ⊂ X and |Y| = k - 1 }. It follows from the subset relationship that
maxsup^i(X) is an upper bound of the local support count X.sup^i. Hence, the sum of these upper
bounds over all the partitions, denoted by maxsup(X), is an upper bound of X.sup, i.e.,
X.sup ≤ maxsup(X) = Σ_{i=1}^n maxsup^i(X). Note that maxsup(X) can be computed at every site at the
beginning of the k-th iteration. Since maxsup(X) is an upper bound of the global support count,
it can be used for pruning, i.e., if maxsup(X) < s × D, then X cannot be a candidate set. We call
this technique global pruning. Global pruning can be combined with local pruning to form different
pruning strategies. In the following, we outline three possible strategies.
1. Local Pruning followed by Global Pruning: After the local pruning, each site S_i can apply
global pruning to the remaining candidate sets. The upper bound maxsup(X) for a candidate
set X can be fine-tuned to X.sup^i + Σ_{j≠i} maxsup^j(X).
Since X.sup^i is available during the local pruning, this refined upper bound can be computed
at site S_i, and it is more effective than the value maxsup(X) in global pruning. (A small sketch
of these bounds is given after this list.)
2. Global Pruning followed by Local Pruning: Use the upper bound maxsup(X) to prune away
some candidate sets at site S_i, and then apply local pruning on the remaining candidate sets.
(In the extreme case, we may use global pruning without local pruning.)
3. Global Pruning at the Polling Site: Only local pruning is done at a site during the pruning phase.
For a candidate set X, additional pruning is done at its polling site. Let S_p be the
polling site of X and Γ be the set of originating sites from which the requests to do polling on
X are sent. For the sites in Γ, the local support counts of X have been sent to S_p already.
For a site S_j not in Γ, since X is not locally large at S_j, the polling site can deduce that its
local support count X.sup^j is bounded by the value min(maxsup^j(X), s × D^j). Therefore an
upper bound of X.sup can be computed as Σ_{S_j ∈ Γ} X.sup^j + Σ_{S_j ∉ Γ} min(maxsup^j(X), s × D^j).
This upper bound for X can be used to prune away some candidate sets at a polling
site before it starts to collect support counts.
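A small sketch of the global-pruning bound maxsup(X) defined above, together with the refined bound of the first strategy; the local count tables are toy values, not data from the experiments, and the function names are illustrative.

from itertools import combinations

def maxsup_i(X, local_counts_prev):
    # local_counts_prev: support counts at site i of the (k-1)-subsets of X.
    return min(local_counts_prev[frozenset(s)] for s in combinations(sorted(X), len(X) - 1))

def maxsup(X, counts_prev_by_site):
    # Basic upper bound on the global support count X.sup.
    return sum(maxsup_i(X, c) for c in counts_prev_by_site.values())

def refined_bound(X, i, local_count_X_i, counts_prev_by_site):
    # Strategy 1: at site i the exact local count X.sup^i is already known.
    return local_count_X_i + sum(maxsup_i(X, c)
                                 for j, c in counts_prev_by_site.items() if j != i)

# Toy example: two sites, candidate X = {A, B}.
counts_prev = {1: {frozenset("A"): 10, frozenset("B"): 2},
               2: {frozenset("A"): 3,  frozenset("B"): 9}}
X = {"A", "B"}
print(maxsup(X, counts_prev))             # 2 + 3 = 5; if 5 < s * D, X can be pruned
print(refined_bound(X, 1, 1, counts_prev))  # 1 + 3 = 4, tighter than maxsup(X) = 5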
The effectiveness of global pruning depends on the data distribution. For example, let AB be a
candidate set whose size-1 subset A is locally large at S_1 but small (not locally large) at S_2, while
the subset B is small at S_1 but large at S_2. By global pruning, it can be deduced that AB
is not globally large. On the other hand, if A and B are both large at S_1 and small at S_2, then
it cannot be deduced from global pruning that AB is small. In fact, the choice of an appropriate
global pruning strategy will depend on the data distribution.
The additional cost in doing global pruning is the storage required to store the local support
counts and the message communication to broadcast the support counts. There is a trade-off
between the cost and the reduction of candidate sets. It will depend on the data distribution as
well as the number of partitions. We believe that global pruning will pay off when the distribution
of the data has a certain degree of skewness. Additional performance study is required in order to
investigate this technique further.
The hashing technique and relaxation factor proposed in PDM [17] can be integrated with the
techniques in DMA. For example, in the selection of hash buckets for broadcasting, the local
pruning technique can be used. Also, a relaxation factor on the support threshold can be used to
increase the amount of information available at the polling site for global pruning.
Another point worth mentioning here is that the original Count Distribution algorithm as
proposed in [5], which is designed for a high-performance parallel environment, can be improved by
introducing polling sites to decrease the amount of message communication required. Its merit is
that it requires less synchronization. In fact, in a high-performance parallel environment, DMA and
CD can be combined to form a hybrid algorithm which has fewer candidate sets than CD, slightly
more message communication than DMA, but less synchronization. We will investigate this further
in our future study.
Another issue related to the performance of the mining of association rules in a distributed
database is the difference between the partition sizes. Algorithms such as DMA and CD require
some synchronization in each iteration. A large size difference between the partitions would not be
favourable to the performance. A possible solution would be to divide some large partitions further
to equalize their sizes. This would reduce the time spent on synchronization. However, the trade-off would
be more message communication.
7 Conclusion
We studied an efficient algorithm for mining association rules in distributed databases. The developed
method reduces the number of candidate sets at each partition effectively by using local
pruning. The communication scheme for count exchange is optimized by using polling sites. The
method is implemented and its performance is studied and compared with a direct application of
a popular sequential algorithm. The study shows that the proposed technique has superior performance
on the mining of association rules in distributed databases.
The efficiency of local pruning can be enhanced by global pruning if local support counts are
stored at the sites. We have also discussed the possibility of integrating the techniques in DMA
with those in PDM.
Recently, there have been some interesting studies on finding multiple-level or generalized association
rules in large transaction databases [13, 21]. Extending the techniques in DMA to
the mining of multiple-level or generalized association rules in distributed databases is an interesting
problem for further research. For experimental purposes, we are planning to implement DMA
and other related algorithms on an IBM SP2 system with 32 nodes to study the problem of mining
association rules in a parallel system with high-speed communication.
--R
"Efficient similarity search in sequence databasess,"
"An interval classifier for database mining applications,"
"Database mining: A performance perspective,"
"Mining Association Rules between Sets of Items in Large Databases,"
"Parallel mining of association rules: Design, implementation, and experience,"
"Fast algorithms for mining association rules,"
"Knowledge discovery in databases: A rule-based attribute-oriented approach,"
"Maintenance of Discovered Association Rules in Large Databases: An Incremental Updating Technique,"
Advances in Knowledge Discovery and Data Mining.
"Knowledge discovery in databases: An overview,"
PVM: Parallel Virtual Machine.
"Data-driven discovery of quantitative rules in relational databases,"
"Discovery of multiple-level association rules from large databases,"
"Finding interesting rules from large sets of discovered association rules,"
"Efficient and effective clustering method for spatial data mining,"
"An effective hash-based algorithm for mining association rules,"
"Efficient Parallel Data Mining for Association Rules,"
Knowledge Discovery in Databases.
"An efficient algorithm for mining association rules in large databases,"
"Database Achievements and Opportunities Into the 21st Century,"
"Mining generalized association rules,"
Principles of Database and Knowledge-Base Systems
--TR
--CTR
Takahiko Shintani , Masaru Kitsuregawa, Parallel mining algorithms for generalized association rules with classification hierarchy, ACM SIGMOD Record, v.27 n.2, p.25-36, June 1998
Murat Kantarcioglu , Chris Clifton, Privacy-Preserving Distributed Mining of Association Rules on Horizontally Partitioned Data, IEEE Transactions on Knowledge and Data Engineering, v.16 n.9, p.1026-1037, September 2004
Satoshi Morinaga , Kenji Yamanishi , Jun-ichi Takeuchi, Distributed cooperative mining for information consortia, Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, August 24-27, 2003, Washington, D.C.
R. J. Miller , Y. Yang, Association rules over interval data, ACM SIGMOD Record, v.26 n.2, p.452-461, June 1997
Jianning Dong , William Perrizo , Qin Ding , Jingkai Zhou, The application of association rule mining to remotely sensed data, Proceedings of the 2000 ACM symposium on Applied computing, p.340-345, March 2000, Como, Italy
Thomas Legler , Wolfgang Lehner , Andrew Ross, Data mining with the SAP NetWeaver BI accelerator, Proceedings of the 32nd international conference on Very large data bases, September 12-15, 2006, Seoul, Korea
Shichao Zhang , Chengqi Zhang , Jeffrey Xu Yu, An efficient strategy for mining exceptions in multi-databases, Information Sciences: an International Journal, v.165 n.1-2, p.1-20, 3 September 2004
Jian Tang, Using incremental pruning to increase the efficiency of dynamic itemset counting for mining association rules, Proceedings of the seventh international conference on Information and knowledge management, p.273-280, November 02-07, 1998, Bethesda, Maryland, United States
new distributed data mining model based on similarity, Proceedings of the ACM symposium on Applied computing, March 09-12, 2003, Melbourne, Florida
Vincent Cho , Beat Wthrich, Distributed mining of classification rules, Knowledge and Information Systems, v.4 n.1, p.1-30, January 2002
Mohammed Javeed Zaki , Srinivasan Parthasarathy , Wei Li, A localized algorithm for parallel association mining, Proceedings of the ninth annual ACM symposium on Parallel algorithms and architectures, p.321-330, June 23-25, 1997, Newport, Rhode Island, United States
Mohammed J. Zaki , Srinivasan Parthasarathy , Mitsunori Ogihara , Wei Li, Parallel Algorithms for Discovery of Association Rules, Data Mining and Knowledge Discovery, v.1 n.4, p.343-373, December 1997
Wen-Chih Peng , Ming-Syan Chen, Developing Data Allocation Schemes by Incremental Mining of User Moving Patterns in a Mobile Computing System, IEEE Transactions on Knowledge and Data Engineering, v.15 n.1, p.70-85, January
Jaideep Vaidya , Chris Clifton, Privacy preserving association rule mining in vertically partitioned data, Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, July 23-26, 2002, Edmonton, Alberta, Canada
D. W. Cheung , S. D. Lee , V. Xiao, Effect of Data Skewness and Workload Balance in Parallel Data Mining, IEEE Transactions on Knowledge and Data Engineering, v.14 n.3, p.498-514, May 2002
John D. Holt , Soon M. Chung, Parallel mining of association rules from text databases, The Journal of Supercomputing, v.39 n.3, p.273-299, March 2007
Xindong Wu , Shichao Zhang, Synthesizing High-Frequency Rules from Different Data Sources, IEEE Transactions on Knowledge and Data Engineering, v.15 n.2, p.353-367, February
Hongjun Lu , Ling Feng , Jiawei Han, Beyond intratransaction association analysis: mining multidimensional intertransaction association rules, ACM Transactions on Information Systems (TOIS), v.18 n.4, p.423-454, Oct. 2000
Frans Coenen , Paul Leng, Partitioning strategies for distributed association rule mining, The Knowledge Engineering Review, v.21 n.1, p.25-47, March 2006
Ling Feng , Jeffrey Xu Yu , Hongjun Lu , Jiawei Han, A template model for multidimensional inter-transactional association rules, The VLDB Journal The International Journal on Very Large Data Bases, v.11 n.2, p.153-175, October 2002
Miroslav Kubat , Alaaeldin Hafez , Vijay V. Raghavan , Jayakrishna R. Lekkala , Wei Kian Chen, Itemset Trees for Targeted Association Querying, IEEE Transactions on Knowledge and Data Engineering, v.15 n.6, p.1522-1534, November
Boris Rozenberg , Ehud Gudes, Association rules mining in vertically partitioned databases, Data & Knowledge Engineering, v.59 n.2, p.378-396, November 2006
Jaideep Vaidya , Chris Clifton, Secure set intersection cardinality with application to association rule mining, Journal of Computer Security, v.13 n.4, p.593-622, July 2005
Eui-Hong (Sam) Han , George Karypis , Vipin Kumar, Scalable Parallel Data Mining for Association Rules, IEEE Transactions on Knowledge and Data Engineering, v.12 n.3, p.337-352, May 2000
Qing Li , Ling Feng , Allan Wong, From intra-transaction to generalized inter-transaction: landscaping multidimensional contexts in association rule mining, Information SciencesInformatics and Computer Science: An International Journal, v.172 n.3-4, p.361-395, 9 June 2005
Antonin Rozsypal , Miroslav Kubat, Association mining in time-varying domains, Intelligent Data Analysis, v.9 n.3, p.273-288, May 2005
Kubat, Searching for high-support itemsets in itemset trees, Intelligent Data Analysis, v.10 n.2, p.105-120, March 2006
Xindong Wu , Chengqi Zhang , Shichao Zhang, Database classification for multi-database mining, Information Systems, v.30 n.1, p.71-88, March 2005
Parag C. Pendharkar , Girish Subramanian, Connectionist and evolutionary models for learning, discovering and forecasting software effort, Managing data mining technologies in organizations: techniques and applications, Idea Group Publishing, Hershey, PA,
Yücel Saygin , Özgür Ulusoy, Exploiting Data Mining Techniques for Broadcasting Data in Mobile Computing Environments, IEEE Transactions on Knowledge and Data Engineering, v.14 n.6, p.1387-1399, November 2002
B. Park , H. Kargupta , E. Johnson , E. Sanseverino , D. Hershberger , L. Silvestre, Distributed, Collaborative Data Analysis from Heterogeneous Sites Using a Scalable Evolutionary Technique, Applied Intelligence, v.16 n.1, p.19-42, January-February 2002
Vipin Kumar , Mohammed Zaki, High performance data mining (tutorial PM-3), Tutorial notes of the sixth ACM SIGKDD international conference on Knowledge discovery and data mining, p.309-425, August 20-23, 2000, Boston, Massachusetts, United States | distributed data mining;data mining;knowledge discovery;distributed database;association rule;partitioned database;distributed algorithm |
627803 | Parallel Mining of Association Rules. | We consider the problem of mining association rules on a shared-nothing multiprocessor. We present three algorithms that explore a spectrum of trade-offs between computation, communication, memory usage, synchronization, and the use of problem-specific information. The best algorithm exhibits near perfect scaleup behavior, yet requires only minimal overhead compared to the current best serial algorithm. | Introduction
With the availability of inexpensive storage and the progress in data capture technology, many organizations have
created ultra-large databases of business and scientific data, and this trend is expected to grow. A complementary
technology trend is the progress in networking, memory, and processor technologies that has opened up the
possibility of accessing and manipulating these massive databases in a reasonable amount of time. Data mining
(also called knowledge discovery in databases) is the efficient discovery of previously unknown patterns in large
databases. The promise of data mining is that it will deliver technology that will enable development of a new
breed of decision-support applications.
Discovering association rules is an important data mining problem [1]. Recently, there has been considerable
research in designing fast algorithms for this task [1] [3] [5] [6] [8] [12] [9] [11]. However, with the exception of
[10], the work so far has been concentrated on designing serial algorithms. Since the databases to be mined are
often very large (measured in gigabytes and even terabytes), parallel algorithms are required.
We present in this paper three parallel algorithms for mining association rules. In order to determine the best
method for mining rules in parallel, we explore a spectrum of trade-offs between computation, communication,
memory usage, synchronization, and the use of problem-specific information in parallel data mining. Specifically,
1. The focus of the Count Distribution algorithm is on minimizing communication. It does so even at the
expense of carrying out redundant duplicate computations in parallel.
2. The Data Distribution algorithm attempts to utilize the aggregate main memory of the system more
effectively. It is a communication-happy algorithm that requires nodes to broadcast their local data to all
other nodes.
3. The Candidate Distribution algorithm exploits the semantics of the particular problem at hand both to
reduce synchronization between the processors and to segment the database based upon the patterns the
different transactions support. This algorithm also incorporates load balancing.
These algorithms are based upon the serial algorithm Apriori which was first presented in [3]. We chose the
Apriori algorithm because of its superior performance over the earlier algorithms [1] [6], as shown in [3]. We
preferred Apriori over AprioriHybrid, a somewhat faster algorithm in [3], because AprioriHybrid is harder to
parallelize; the performance of AprioriHybrid is sensitive to heuristically determined parameters. Furthermore,
the performance of Apriori can be made to approximate that of AprioriHybrid by combining the small workloads
of several Apriori cycles into a single workload requiring only one cycle. The algorithm in [8] is quite similar to
Apriori and our parallelization techniques directly apply to this algorithm as well. The algorithm in [11] does
not perform as well as Apriori on large datasets with a large number of items. The algorithm in [9] attempts to
improve the performance of Apriori by using a hash filter. However, as we will see in Section 4.3, this optimization
actually slows down the Apriori algorithm. Concurrent to our work, that algorithm has been parallelized and
was recently presented with a simulation study in [10]. It too suffers from the use of a hash-filter, despite the
use of a special communication operator to build it. We discuss this further in Section 4.3.
Our three parallel algorithms have all been implemented on an IBM POWERparallel System SP2 (henceforth
referred to simply as SP2), a shared-nothing machine[7]. We present measurements from this implementation to
evaluate the effectiveness of the design trade-offs. The winning algorithm is now part of the IBM data-mining
product and is being used in the field.
The organization of the rest of the paper is as follows. Section 2 gives a brief review of the problem of mining
association rules[1] and the Apriori algorithm[3] on which the proposed parallel algorithms are based. Section 3
gives the description of the parallel algorithms. Section 4 presents the results of the performance measurements
of these algorithms. Section 5 contains conclusions. A more detailed version of this paper can be found in [2].
2 Overview of the Serial Algorithm
2.1 Association Rules
The basic problem of finding association rules as introduced in [1] is as follows. Let I = {i_1, i_2, ..., i_m} be a set of literals, called items. Let D be a set of transactions, where each transaction T is an itemset such that T ⊆ I. We say that a transaction T contains X, a set of some items in I, if X ⊆ T. An association rule is an implication of the form X ⇒ Y, where X ⊂ I, Y ⊂ I, and X ∩ Y = ∅. The rule X ⇒ Y holds in the transaction set D with confidence c if c% of the transactions in D that contain X also contain Y. The rule X ⇒ Y has support s in the transaction set D if s% of the transactions in D contain X ∪ Y.
Given a set of transactions D, the problem of mining association rules is to generate all association rules that have at least the user-specified minimum support (called minsup) and minimum confidence (called minconf).
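To make these definitions concrete, here is a minimal Python sketch that computes the support and confidence of one rule over a toy transaction set; the transactions and the itemsets X and Y are invented purely for illustration.

D = [{"bread", "milk"}, {"bread", "butter"},
     {"bread", "milk", "butter"}, {"milk"}]           # four toy transactions

def support(itemset, transactions):
    # Fraction of transactions containing every item of `itemset`.
    return sum(itemset <= t for t in transactions) / len(transactions)

X, Y = {"bread"}, {"milk"}
rule_support = support(X | Y, D)                      # support of X => Y: 0.5
rule_confidence = support(X | Y, D) / support(X, D)   # confidence of X => Y: 0.666...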
Problem Decomposition The problem of mining association rules can be decomposed into two subproblems
[1]:
1. Find all sets of items (itemsets) whose support is greater than the user-specified minimum support. Itemsets
with minimum support are called frequent itemsets.
k-itemset  An itemset having k items.
L_k  Set of frequent k-itemsets (those with minimum support). Each member of this set has two fields: i) itemset and ii) support count.
C_k  Set of candidate k-itemsets (potentially frequent itemsets). Each member of this set has two fields: i) itemset and ii) support count.
D^i  The dataset local to the processor P_i.
DR^i  The dataset local to the processor P_i after repartitioning.
C_k^i  The candidate set maintained with the processor P_i during the kth pass (there are k items in each candidate).
Figure 1: Notation
L_1 := {frequent 1-itemsets};
k := 2;  // k represents the pass number
while (L_{k-1} ≠ ∅) do
begin
  C_k := New candidates of size k generated from L_{k-1};
  forall transactions t ∈ D do
    Increment the count of all candidates in C_k that are contained in t;
  L_k := All candidates in C_k with minimum support;
  k := k + 1;
end
Answer := ∪_k L_k;
Figure 2: Apriori Algorithm
2. Use the frequent itemsets to generate the desired rules. The general idea is that if, say, ABCD and AB are frequent itemsets, then we can determine if the rule AB ⇒ CD holds by computing the ratio conf = support(ABCD)/support(AB). If conf ≥ minimum confidence, then the rule holds. (The rule will have minimum support because ABCD is frequent.)
Much of the research has been focused on the first subproblem, as the database is accessed in this part of the computation, and several algorithms have been proposed [1] [3] [6] [8] [9] [11]. We review in Section 2.2 the Apriori algorithm [3] on which our parallel algorithms are based.
2.2 Apriori Algorithm
Figure 2 gives an overview of the Apriori algorithm for finding all frequent itemsets, using the notation given in Figure 1. The first pass of the algorithm simply counts item occurrences to determine the frequent 1-itemsets. A subsequent pass, say pass k, consists of two phases. First, the frequent itemsets L_{k-1} found in the (k-1)th pass are used to generate the candidate itemsets C_k, using the apriori candidate generation procedure described below. Next, the database is scanned and the support of candidates in C_k is counted. For fast counting, we need to efficiently determine the candidates in C_k contained in a given transaction t. A hash-tree data structure [3] is used for this purpose.
Candidate Generation Given L_{k-1}, the set of all frequent (k-1)-itemsets, we want to generate a superset of the set of all frequent k-itemsets. The intuition behind the apriori candidate generation procedure is that if an itemset X has minimum support, so do all subsets of X. For simplicity, assume the items in each itemset are in lexicographic order.
Candidate generation takes two steps. First, in the join step, join L_{k-1} with L_{k-1}:
insert into C_k
select p.item_1, p.item_2, ..., p.item_{k-1}, q.item_{k-1}
from L_{k-1} p, L_{k-1} q
where p.item_1 = q.item_1, ..., p.item_{k-2} = q.item_{k-2}, p.item_{k-1} < q.item_{k-1};
Next, in the prune step, delete all itemsets c ∈ C_k such that some (k-1)-subset of c is not in L_{k-1}.
For example, let L_3 be {{1 2 3}, {1 2 4}, {1 3 4}, {1 3 5}, {2 3 4}}. After the join step, C_4 will be {{1 2 3 4}, {1 3 4 5}}. The prune step will delete the itemset {1 3 4 5} because the itemset {1 4 5} is not in L_3. We will then be left with only {1 2 3 4} in C_4.
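The join and prune steps can be sketched in a few lines of Python. This is only an illustration of the procedure described above, with itemsets represented as sorted tuples; it is not the hash-tree-based implementation that the paper builds on.

from itertools import combinations

def apriori_gen(L_prev):
    # Candidate generation: join L_{k-1} with itself, then prune.
    # Itemsets are represented as lexicographically sorted tuples.
    if not L_prev:
        return set()
    L_prev = set(L_prev)
    size = len(next(iter(L_prev)))            # size = k - 1
    # Join step: combine itemsets that agree on their first k-2 items.
    candidates = {p + (q[-1],)
                  for p in L_prev for q in L_prev
                  if p[:-1] == q[:-1] and p[-1] < q[-1]}
    # Prune step: drop candidates having an infrequent (k-1)-subset.
    return {c for c in candidates
            if all(s in L_prev for s in combinations(c, size))}

L3 = {(1, 2, 3), (1, 2, 4), (1, 3, 4), (1, 3, 5), (2, 3, 4)}
print(apriori_gen(L3))                        # {(1, 2, 3, 4)}, as in the example above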
3 Parallel Algorithms
We first present three parallel algorithms for the first subproblem - the problem of finding all frequent itemsets.
We then give a parallel algorithm for the second subproblem - the problem of generating rules from frequent
itemsets. Refer to Figure 1 for a summary of notation used in the algorithm descriptions. We use superscripts
to indicate processor id and subscripts to indicate the pass number (also the size of the itemset).
The algorithms assume a shared-nothing architecture, where each of N processors has a private memory
and a private disk. The processors are connected by a communication network and can communicate only by
passing messages. The communication primitives used by our algorithms are part of the MPI (Message Passing Interface) communication library supported on the SP2 and are candidates for a message-passing communication standard currently under discussion [4].
each processor's disk has roughly an equal number of transactions. We do not require transactions to be placed
on the disks in any special way.
3.1 Algorithm 1: Count Distribution
This algorithm uses a simple principle of allowing "redundant computations in parallel on otherwise idle processors
to avoid communication". The first pass is special. For all other passes k > 1, the algorithm works as
follows:
1. Each processor P_i generates the complete C_k, using the complete frequent itemset L_{k-1} created at the end of pass k-1. Observe that since each processor has the identical L_{k-1}, they will be generating identical C_k.
2. Processor P_i makes a pass over its data partition D^i and develops local support counts for the candidates in C_k.
3. Processor P_i exchanges local C_k counts with all other processors to develop global C_k counts. Processors are forced to synchronize in this step.
4. Each processor P_i now computes L_k from C_k.
5. Each processor P_i independently makes the decision to terminate or continue to the next pass. The decision will be identical as the processors all have identical L_k.
In the first pass, each processor P_i dynamically generates its local candidate itemset C_1^i depending on the items actually present in its local data partition D^i. Hence, the candidates counted by different processors may not be identical and care must be taken in exchanging the local counts to determine the global C_1.
Thus, in every pass, processors can scan the local data asynchronously in parallel. However, they must
synchronize at the end of each pass to develop global counts.
Performance Considerations Steps 1-2 and 4-5 are similar to those of the serial algorithm. The non-obvious step is how processors exchange local counts to arrive at global C_k counts. Since each processor has the exact same C_k, each processor puts its count values in a common order into a count array. All that is needed now is to perform a parallel vector sum of the arrays. This only requires communicating count values and can be done
in O(log(n)) communication steps. It also avoids any time-consuming logic that would otherwise be needed to
assure that we only combine counts that belong to the same candidate. The full details of this process including
the MPI communication primitives used are described in [2].
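A hedged sketch of this count exchange, assuming the mpi4py bindings and NumPy are available (the function name exchange_counts is introduced here only for illustration): every processor lays out one counter per candidate of C_k in the same globally agreed order, so the exchange reduces to a parallel vector sum.

import numpy as np
from mpi4py import MPI      # assumes an MPI environment; illustration only

comm = MPI.COMM_WORLD

def exchange_counts(local_counts):
    # `local_counts` holds one counter per candidate of C_k, laid out in the
    # same globally agreed order on every processor, so the exchange is a
    # plain parallel vector sum across all processors.
    local_counts = np.ascontiguousarray(local_counts, dtype=np.int64)
    global_counts = np.empty_like(local_counts)
    comm.Allreduce(local_counts, global_counts, op=MPI.SUM)
    return global_counts

Every processor then computes the identical L_k by keeping the candidates whose global count meets the minimum support, as in Step 4.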
3.2 Algorithm 2: Data Distribution
The attractive feature of the Count distribution algorithm is that no data tuples are exchanged between processors; only counts are exchanged. Thus, processors can operate independently and asynchronously while reading the data. However, the disadvantage is that this algorithm does not exploit the aggregate memory of the system effectively. Suppose that each processor has memory of size |M|. The number of candidates that can be counted in one pass is determined by |M|. As we increase the number of processors from 1 to N, the system has N × |M| total memory, but we still count the same number of candidates in one pass, as each processor is counting identical candidates. The Count distribution algorithm counts no more candidates per pass than the serial algorithm.
The Data distribution algorithm is designed to exploit better the total system's memory as the number of
processors is increased. In this algorithm, each processor counts mutually exclusive candidates. Thus, as the
number of processors is increased, a larger number of candidates can be counted in a pass. On an N-processor
configuration, Data will be able to count in a single pass a candidate set that would require N passes in Count.
The downside of this algorithm is that every processor must broadcast its local data to all other processors in
every pass. Therefore, this algorithm can become viable only on a machine with very fast communication.
Pass 1: Same as the Count distribution algorithm.
Pass k > 1:
1. Processor P_i generates C_k from L_{k-1}. It retains only 1/Nth of the itemsets, forming the candidate subset C_k^i that it will count (a short sketch of this assignment is given at the end of this subsection). Which 1/N of the itemsets are retained is determined by the processor id and can be computed without communicating with other processors. In our implementation, itemsets are assigned in a round-robin fashion. The C_k^i sets are all disjoint and the union of all C_k^i sets is the original C_k.
2. Processor P_i develops support counts for the itemsets in its local candidate set C_k^i using both local data pages and data pages received from other processors.
3. At the end of the pass over the data, each processor P_i calculates L_k^i using the local C_k^i. Again, all L_k^i sets are disjoint and the union of all L_k^i sets is L_k.
4. Processors exchange L_k^i so that every processor has the complete L_k for generating C_{k+1} for the next pass. This step requires processors to synchronize. Having obtained the complete L_k, each processor can independently (but identically) decide whether to terminate or continue on to the next pass.
The interesting step is Step 2, in which processors develop support counts for the local candidates C_k^i asynchronously. During this step, processors are broadcasting their local data as well as receiving the local data of other processors. We must be careful to avoid network congestion and use asynchronous communication to overlap communication time with the counting of support. See [2] for full details.
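Step 1 above boils down to a deterministic 1/N split of C_k. A minimal sketch of the round-robin assignment, under the assumption that every processor enumerates C_k in the same sorted order (the function name my_candidates is hypothetical):

def my_candidates(Ck, rank, nprocs):
    # Round-robin 1/N split of C_k: every processor enumerates the same C_k
    # in the same sorted order, so the subsets are disjoint, their union is
    # C_k, and no communication is needed to compute the assignment.
    return [c for i, c in enumerate(sorted(Ck)) if i % nprocs == rank]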
3.3 Algorithm 3: Candidate Distribution
One limitation of both the Count and Data distribution algorithms is that since any database transaction could
support any candidate itemset, each transaction must be compared against the entire candidate set. This is
what requires Count to duplicate the candidate set on every processor and Data to broadcast every database
transaction. Additionally, both Count and Data distribution algorithms require processors to synchronize at the
end of each pass to exchange counts or frequent itemsets respectively. If the workload is not perfectly balanced,
this can cause all the processors to wait for whichever processor finishes last in every pass. These problems
are due to the fact that neither Count nor Data exploit problem-specific knowledge; data tuples and candidate
itemsets are partitioned merely to equally divide the work. All processors must be consulted and all information
gathered before they can proceed onto the next pass.
The Candidate distribution algorithm attempts to do away with these dependencies by partitioning both the
data and the candidates in such a way that each processor may proceed independently. In some pass l, where l
is heuristically determined, this algorithm divides the frequent itemsets L_{l-1} between processors in such a way that a processor P_i can generate a unique C_l^i independent of all other processors (C_l^i ∩ C_l^j = ∅ for i ≠ j). At the same time, data is repartitioned so that a processor can count the candidates in C_m^i in later passes (m ≥ l) independent of all other processors. Note that depending upon the quality of the itemset partitioning, parts of the database may have to be replicated on several processors. The itemset partitioning algorithm considers this aspect by identifying segments of L_{l-1} that are likely supported by different database transactions. The choice of the redistribution
pass is a tradeoff between decoupling processor dependence as soon as possible and waiting until the itemsets
become more easily and equitably partitionable. The partitioning algorithm exploits the semantics of the Apriori
candidate generation procedure described in Section 2.2.
After this candidate distribution, each processor proceeds independently, counting only its portion of the
global candidate set using only local data. No communication of counts or data tuples is ever required. The only
dependence that a processor has on other processors is for pruning the local candidate set during the prune step
of candidate generation. However, this information is sent asynchronously, and processors do not wait for the
complete pruning information to arrive from all other processors. During the prune step of candidate generation,
a processor prunes the candidate set as much as possible using whatever information has arrived, and opportunistically
starts counting the candidates. The late arriving pruning information can instead be used in subsequent passes.
The algorithm is described below.
Pass k < l: Use either the Count or the Data distribution algorithm.
Pass k = l:
1. Partition L_{k-1} among the N processors such that the L_{k-1} sets are "well balanced". We discuss below how this partitioning is done. Record with each frequent itemset in L_{k-1} which processor has been assigned this itemset. This partitioning is identically done in parallel by each processor.
2. Processor P_i generates C_k^i logically using only the L_{k-1} partition assigned to it. Note that P_i still has access to the complete L_{k-1}, and hence can use standard pruning while generating C_k^i in this pass.
3. P_i develops global counts for the candidates in C_k^i and the database is repartitioned into DR^i at the same time.
4. After P_i has processed all its local data and any data received from all other processors, it posts asynchronous receive buffers to receive L_k^j from all other processors. These L_k^j are needed for pruning C_{k+1}^i in the prune step of candidate generation.
5. Processor P_i computes L_k^i from C_k^i and asynchronously broadcasts it to the other processors using asynchronous sends.
Pass k > l:
1. Processor P_i collects all frequent itemsets that have been sent to it by other processors (a sketch of this opportunistic exchange is given below). They are used in the pruning step of the candidate generation, but not the join step. Itemsets received from processor j could be of length k-1 or greater than k-1; P_i keeps track, for each processor P_j, of the largest size of the frequent itemsets sent by it. Receive buffers for the frequent itemsets are reposted after processing.
2. P_i generates C_k^i using the local L_{k-1}^i. Now it can happen that P_i has not received L_{k-1}^j from all other processors, so P_i needs to be careful at the time of pruning. It needs to distinguish an itemset (a k-1 long subset of a candidate itemset) which is not present in any of the L_{k-1}^j from an itemset that is present in some L_{k-1}^j but this set has not yet been received by processor P_i. It does so by probing L_{l-1} (remember that repartitioning took place in pass l) using the l-1 long prefix of the itemset in question, finding the processor responsible for it, and checking if L_{k-1}^j has been received from this processor.
3. P_i makes a pass over DR^i and counts C_k^i. It then computes L_k^i from C_k^i and asynchronously broadcasts L_k^i to every other processor using asynchronous sends.
As in the Data distribution algorithm, Step 3 of pass l involves communicating local data while support
counts are being developed. The one difference here is that local data need not be broadcast to every other
processor - because of the candidate partitioning, processors have some information about which transactions
are useful in developing support counts on other processors. This allows processors to send less data through
the network. Full details of this filtering are described in [2].
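One way to picture the asynchronous exchange of frequent itemsets in passes k > l (Steps 1 and 2 above) is the following sketch. It stands in for the paper's asynchronous primitives using mpi4py's isend/iprobe/recv calls and is an assumption-laden illustration, not the actual implementation.

from mpi4py import MPI      # assumes an MPI environment; illustration only

comm = MPI.COMM_WORLD

def broadcast_local_frequents(Lk_local, tag):
    # Fire-and-forget sends of L_k^i to every other processor; the returned
    # requests should eventually be completed (e.g. with MPI.Request.Waitall).
    return [comm.isend(Lk_local, dest=j, tag=tag)
            for j in range(comm.size) if j != comm.rank]

def drain_pending_frequents(known_frequents, tag):
    # Fold in whatever L_k^j has already arrived, without blocking; itemsets
    # that have not arrived yet are simply used in a later pass.
    while comm.iprobe(source=MPI.ANY_SOURCE, tag=tag):
        known_frequents.update(comm.recv(source=MPI.ANY_SOURCE, tag=tag))
    return known_frequents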
Partitioning L_k We motivate the algorithm for partitioning L_k by an example. Let L_3 be {ABC, ABD, ABE, ACD, ACE, BCD, BCE, BDE, CDE}. Then L_5 = {ABCDE}, and let E = {ABC, ABD, ABE} be the subset of L_3 whose members all have the common prefix AB. Note that the candidates ABCD, ABCE, ABDE and ABCDE also have the prefix AB. The apriori candidate generation procedure (Section 2.2) generates these candidates by joining only the items in E.
Therefore, assuming that the items in the itemsets are lexicographically ordered, we can partition the itemsets in L_k based on common long prefixes. By ensuring that no partition is assigned to more than one processor, we have ensured that each processor can generate candidates independently (ignoring the prune step). Suppose we also repartition the database in such a way that any tuple that supports an itemset contained in any of the L_k partitions assigned to a processor is copied to the local disk of that processor. The processors can then proceed
completely asynchronously.
The actual algorithm is more involved because of two reasons. A processor may have to obtain frequent
itemsets computed by other processors for the prune step of the candidate generation. In the example above,
the processor assigned the set E has to know whether BCDE is frequent to be able to decide whether to prune
the candidate ABCDE, but the set with prefix BC may have been assigned to a different processor. The other
problem is that we need to balance load across processors. Details of the full partitioning algorithm are given in
[2].
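A crude sketch of the prefix-based grouping, under the assumption that itemsets are sorted tuples and that balancing groups by their size is an acceptable stand-in for the weighting used by the full algorithm in [2]:

from collections import defaultdict

def partition_by_prefix(L_prev, nprocs):
    # Group the itemsets of L_{l-1} (sorted tuples) by the prefix used in the
    # join step, then deal the groups out greedily by size.  Itemsets in the
    # same group are never split across processors.
    classes = defaultdict(list)
    for itemset in L_prev:
        classes[itemset[:-1]].append(itemset)
    assignment = [[] for _ in range(nprocs)]
    load = [0] * nprocs
    for prefix, members in sorted(classes.items(), key=lambda kv: -len(kv[1])):
        p = load.index(min(load))             # lightest processor so far
        assignment[p].extend(members)
        load[p] += len(members)
    return assignment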
3.4 Parallel Rule Generation
We now present our parallel implementation of the second subproblem - the problem of generating rules from
frequent itemsets. Generating rules is much less expensive than discovering frequent itemsets as it does not
require examination of the data.
Given a frequent itemset l, rule generation examines each non-empty subset a and generates the rule a ⇒ (l − a) with confidence = support(l)/support(a). This computation can efficiently be done by examining the largest subsets of l first and only proceeding to smaller subsets if the generated rules have the required minimum confidence [3]. For example, given a frequent itemset ABCD, if the rule ABC ⇒ D does not have minimum confidence, neither will AB ⇒ CD, and so we need not consider it.
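A minimal sketch of this rule-generation loop, assuming a dictionary support that maps every frequent itemset (as a frozenset) to its support; both the dictionary and the function name gen_rules are introduced here for illustration. Antecedents are shrunk only while the rule stays confident, mirroring the shortcut just described.

from itertools import combinations

def gen_rules(itemset, support, minconf):
    # Rules a => (itemset - a) from one frequent itemset.  Antecedents are
    # only shrunk while the rule stays confident: if ABC => D fails the
    # confidence test, AB => CD is never examined.
    if len(itemset) < 2:
        return []
    rules, whole = [], support[frozenset(itemset)]
    frontier = {frozenset(c) for c in combinations(itemset, len(itemset) - 1)}
    while frontier:
        next_frontier = set()
        for a in frontier:
            conf = whole / support[a]
            if conf >= minconf:
                rules.append((set(a), set(itemset) - a, conf))
                if len(a) > 1:
                    next_frontier.update(frozenset(c)
                                         for c in combinations(a, len(a) - 1))
        frontier = next_frontier
    return rules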
Generating rules in parallel simply involves partitioning the set of all frequent itemsets among the processors.
Each processor then generates rules for its partition only using the algorithm above. Since the number of rules
that can be generated from an itemset is sensitive to the itemset's size, we attempt equitable balancing by
partitioning the itemsets of each length equally across the processors.
Note that in the calculation of the confidence of a rule, a processor may need to examine the support of
an itemset for which it is not responsible. For this reason, each processor must have access to all the frequent
itemsets before rule generation can begin. This is not a problem for the Count and Data distribution algorithms
because at the end of the last pass, all the processors have all the frequent itemsets. In the Candidate distribution
algorithm, fast processors may need to wait until slower processors have discovered and transmitted all of their
frequent itemsets. For this reason and because the rule generation step is relatively cheap, it may be better in
the Candidate distribution algorithm to simply discover the frequent itemsets and generate the rules off-line,
possibly on a serial processor. This would allow processors to be freed to run other jobs as soon as they are done
finding frequent itemsets, even while other processors in the system are still working.
3.5 Discussion of Tradeoffs
Initially, it was not clear to us which of our three algorithms would win, or if there would even be a single over-all
winner. Count minimizes communication at the expense of ignoring aggregate memory. On a workstation-cluster
environment, this approach is probably ideal; it may not be so, however, on an SP2. Data distribution which fully
exploits aggregate memory at the cost of heavy communication will help us explore this issue. Also, Data's ability
to count in a single pass N times as many candidates as Count could make this algorithm a strong contender.
With the third algorithm, Candidate distribution, we will see if incorporating detailed problem-knowledge can
yield the benefits of both the Count and Data distribution algorithms. We will also see how beneficial removing
processor dependence and synchronous communication can be.
4 Performance Evaluation
We ran all of our experiments on a 32-node IBM SP2 Model 302. Each node in the multiprocessor is a Thin Node 2 consisting of a POWER2 processor running at 66.7MHz with 256MB of real memory. Attached to each node is a 2GB disk of which less than 500MB was available for our tests. The processors all run AIX level 3.2.5
and communicate with each other through the High-Performance Switch with HPS-2 adaptors. The combined
communication hardware has a rated peak bandwidth of 80 megabytes per second and a latency of less than 40
microseconds. In our own tests of the base communication routines, actual point-to-point bandwidth reached
20MB/s. Experiments were run on an otherwise idle system. See [7] for further details of the SP2 architecture.
T: Average transaction length. I: Average size of the frequent itemsets. D: Number of transactions.
Table 1: Data Parameters
We used synthetic datasets of varying complexity, generated using the procedure described in [3]. The
characteristics of the six datasets we used are shown in Table 1. These datasets vary from many short transactions
with few frequent itemsets, to fewer larger transactions with many frequent itemsets. All the datasets were
about 100MB per processor in size. We could not use larger datasets due to constraints on the amount of
storage available on local disks; the Candidate algorithm writes the redistributed database on local disks after
candidate partitioning, and we run out of disk space with the larger datasets. However, we include results of
experiments (up to 400 MB per processor) for the Count distribution algorithm to show the trends for
larger amounts of data per processor. Experiments were repeated multiple times to obtain stable values for
each data point.
4.1 Relative Performance and Trade-offs
Figure 3 shows the response times for the three parallel algorithms on the six datasets on a 16-node configuration
with a total database size of approximately 1.6GB. The response time was measured as the time elapsed from the
initiation of the execution to the end time of the last processor finishing the computation. The response times for
the serial version are for the run against only one node's worth of data, or 1/16th of the total database. We did not
run the serial algorithm against the entire data because we did not have enough disk space available. We obtained
similar results for other node configurations and dataset sizes. In the experiments with Candidate distribution,
repartitioning was done during the fourth pass. In our tests, this choice yielded the best performance.
The results are very encouraging; for both Count and Candidate distribution algorithms, response times are
close to that of the serial algorithm; this is especially true for Count. The overhead for Count is less than 7.5%
when compared to the serial version run with 1/N data. One third of that overhead, about 2.5%, was spent waiting for the processors to synchronize.
Among the parallel algorithms, Data distribution did not fare as well as the other two. As we had expected,
Data was indeed able to better exploit the aggregate memory of the multiprocessor and make fewer passes
in the case of datasets with large average transaction and frequent itemset lengths (see Table 2).

Figure 3: Relative Performance of the Algorithms (response times of the serial algorithm on 1/N data and of Count, Data, and Candidate on the six datasets)

Figure 4: Communication Costs for Data Distribution (response times with normal communication versus no communication)

Table 2: Number of Data Passes Required (serial, Count, Data, and Candidate for each dataset)

However, its performance turned out to be markedly lower for two reasons: extra communication and the fact that every node
in the system must process every single database transaction. Communication is the worst of these two problems
as shown by Figure 4, even on a machine such as the SP2 with very fast communication. The points labeled "Normal"
correspond to the response times for the normal Data distribution algorithm on a 16-node configuration, but
with the same 100MB of data replicated on each node. The points labeled "No Communication" correspond to
a modified version of the Data distribution algorithm where, instead of receiving data from other nodes, a node
simply processed its local data 15 more times. Since each node had the exact same data, this yielded the exact
same results with the only difference being no time was spent on communication or its management. We did
this for three of the six datasets and discovered that fully half of the time taken by Data distribution was for
communication. The algorithm was also almost entirely CPU-bound, making I/O savings due to Data making
fewer passes practically negligible.
We had hoped for better results from the Candidate distribution algorithm, considering that it is the one
that exploits the problem-specific semantics. Since the Candidate algorithm must also communicate the entire
dataset during the redistribution pass, it suffers from the same problems as Data. Candidate, however, only
performs this redistribution once. Also, unlike Data, a processor may selectively filter out the transactions it sends to other processors, depending upon how the dependency graph is partitioned. This can greatly reduce the amount
of data traveling through the network. Unfortunately, even a single pass of filtered data redistribution is costly.
The question is whether or not the subsequent passes where each processor can run completely independently
with smaller candidate sets can compensate for this cost. As the performance results show, redistribution simply
costs too much.
Also, unlike Data distribution, the Candidate algorithm was unable to capitalize on its more optimal use of
aggregate memory; the large candidate sets that force Count into multiple subpasses all occur before Candidate
takes over with its redistribution pass. Candidate thus makes just as many data passes as Count. These
insufficient gains coupled with a high redistribution cost allow Count with its small overhead to emerge as the
overall winner.
Although our experiments show Count's overhead to be fairly small, synchronization costs can become quite
large if the data distributions are skewed or the nodes are not equally capable (different memory sizes, processor
speeds, I/O bandwidths and capacities). Investigation of these issues is a broad topic and it is in our future plans.
However, one can think of several alternatives for adding load balancing to the Count distribution algorithm that
do not require redistribution of the complete database as in the case of the Candidate distribution algorithm.
Extrapolating from the results of this study, our sense is that the Count distribution algorithm embellished with
an appropriate load balancing strategy is likely to continue to dominate.
4.2 Sensitivity Analysis
We examine below the scaleup, sizeup, and speedup characteristics of the Count distribution algorithm. We
do not report further the results of the Data and Candidate distribution algorithms because of their inferior
performance.
Scaleup To see how well the Count distribution algorithm handles larger problem sets when more processors are
available, we performed scaleup experiments where we increased the size of the database in direct proportion to
the number of nodes in the system. We used the datasets D2016K:T10:I2, D1456K:T15:I4 and D1140K:T20:I6
from the previous experiments except that the number of transactions was increased or decreased depending
upon the multiprocessor size. The database sizes for the single and 32 node configurations are shown in Table 1.
At 100MB per node, all three datasets range from about 100MB in the single-node case to almost 3.2GB in the 32-node case.
Figure 5 shows the performance results for the three datasets. In addition to the absolute response times as
the number of processors is increased, we have also plotted scaleup which is the response time normalized with
respect to the response time for a single processor. Clearly the Count algorithm scales very well, being able to
keep the response time almost constant as the database and multiprocessor sizes increase. The slight increases in response time are due entirely to more processors being involved in communication. Since the itemsets found by the algorithm do not change as the database size is increased, the number of candidates whose support must be summed by the communication phase remains constant.
Sizeup For these experiments, we fixed the size of the multiprocessor while growing the database from 25 MB per node to 400 MB per node. We have plotted both the response times and the sizeup in Figure 5. The sizeup is the response time normalized with respect to the response time for 25MB per node. The results show sublinear performance for the Count algorithm; the program is actually more efficient as the database size is increased. Since the results do not change as the database size increases, neither does the amount or cost of
communication. Increasing the size of the database simply makes the non-communication portion of the code
take more time due to more I/O and more transaction processing. This has the result of reducing the percentage
of the overall time spent in communication. Since I/O and CPU processing scale perfectly with sizeup, we get
sublinear performance.
Figure 5: Performance of Count Distribution (scaleup and speedup plotted against the number of processors, sizeup against the amount of data per node in MB; each panel shows the response time and the corresponding relative scaleup, sizeup, or speedup)
Figure 6: Effect of Hash Filtering (response times of the Hash Filter and Count algorithms)
Speedup For our last set of experiments, we kept the database constant and varied the number of processors.
Because of the constraint on available disk space, the size of each of the three databases was fixed at 400MB.
Figure 5 shows the results of running the Count algorithm on configurations of up to 16 processors. We did not run with larger configurations because the amount of data at each node becomes too small. The speedup in this figure is the response time normalized with respect to the response time for a single processor. As the graphs show, Count has very good speedup performance. This performance does, however, begin to fall short of ideal at 8 processors. This is an artifact of the small amount of data each node processes. At only 25MB per
node, communication times become a significant percentage of the overall response time. This is easily predicted
from our sizeup experiments, where we noticed that the more data a node processes, the less significant the communication time becomes, giving us better performance. We are simply seeing the opposite effect here. Larger
datasets would have shown even better speedup characteristics.
4.3 Effect of Hash Filtering
Recently, Park, Chen, and Yu [9] proposed the use of a hash filter to reduce the cost of Apriori, particularly in the second pass, by reducing the size of C_2. The basic idea is to build a hash filter as the tuples are read in the first pass. For every 2-itemset present in a tuple, a count is incremented in a corresponding hash bucket. Thus, at the end of the pass, we have an upper bound on the support count for every 2-itemset present in the database. When generating C_2 using L_1, candidate itemsets are hashed, and any candidate whose support count in the hash table is less than the minimum support is deleted.
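The idea can be sketched as follows; the hash function, table size, and function names are placeholders, and the code illustrates the technique of [9] rather than their implementation.

from itertools import combinations

def build_hash_filter(transactions, n_buckets=1 << 16):
    # Pass-1 side effect: for every 2-itemset of every tuple, bump one bucket,
    # giving an upper bound on each 2-itemset's support count.
    buckets = [0] * n_buckets
    for t in transactions:
        for pair in combinations(sorted(t), 2):
            buckets[hash(pair) % n_buckets] += 1
    return buckets

def filter_c2(C2, buckets, min_support_count):
    # Drop candidate pairs whose bucket count is already below minimum support;
    # several pairs may share a bucket, so no frequent pair is ever lost.
    n = len(buckets)
    return [c for c in C2
            if buckets[hash(tuple(sorted(c))) % n] >= min_support_count]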
Figure 6 compares the combined response times for Passes 1 and 2 for the Count algorithm and this Hash Filter algorithm. The times for the remaining passes are identical. The Count algorithm beats Hash Filter because Count never explicitly forms C_2; rather, it uses a specialized version of the hash-tree as was done in [3]. Since nothing in C_2 can be pruned by the Apriori candidate generation algorithm, it is equal to L_1 × L_1. C_2 can thus be represented by a simple two-dimensional count array, drastically reducing memory requirements and function call overhead. Any savings from using the hash filter to prune C_2 are lost due to the cost of constructing the hash filter and the use of a regular hash-tree for storing and counting C_2.
A parallel version of this Hash Filter algorithm called PDM has been presented in [10], along with performance
results from a simulation study. It uses a parallelization technique similar to that of Count, except
that entire candidate sets are exchanged rather than just the candidate counts. This is more expensive in both
communication and CPU costs. The focus in PDM was on the efficient construction of the same hash-filter used
by the serial algorithm to speed up pass two. However, as in the serial algorithm, the hash-filter actually hurts
performance, resulting in a double performance hit in PDM.
5 Conclusions
We considered the problem of mining association rules on a shared-nothing multiprocessor on which data has
been partitioned across the nodes. We presented three parallel algorithms for this task based upon Apriori, the best serial algorithm for mining association rules. The designs of these algorithms represent a spectrum of
trade-offs between computation, communication, memory usage, synchronization, and the use of problem-specific
information.
The Count distribution algorithm attempts to minimize communication by replicating the candidate sets in each processor's memory. Processors work only with local data and only communicate counts. The Data distribution algorithm takes the counter approach, where each processor works with the entire dataset but only a portion of the candidate set. This maximizes the use of aggregate memory, but requires high communication to
broadcast all the data. Again, while minimizing communication may be the best approach for a workstation-
cluster environment, this is not necessarily true for an SP2. Lastly, the Candidate algorithm incorporates
domain-knowledge to partition both the data and the candidates, allowing each processor to work on a unique
set of candidates without having to repeatedly broadcast the entire dataset. This maximizes the use of aggregate
memory while limiting heavy communication to a single redistribution pass. This also completely eliminates the
synchronization costs that Count and Data must pay at the end of every pass.
We studied the above trade-offs and evaluated the relative performance of the three algorithms by implementing
them on 32-node SP2 parallel machine. The Count distribution emerged as the algorithm of choice.
It exhibited linear scaleup and excellent speedup and sizeup behavior. When using N processors, the overhead
was less than 7.5% compared to the response time of the serial algorithm executing over 1/N amount of data.
The Data distribution algorithm lost out because of the cost of broadcasting local data from each processor to
every other processor. Our results show that even on a high-bandwidth/low-latency system such as an SP2, data
redistribution is still too costly.
The Candidate distribution algorithm is similarly edged out because of the cost of data redistribution; the gains from having each processor work independently on a different subset of the problem could not make up for the single pass of redistribution. While it may be disheartening to learn that a carefully designed algorithm such as Candidate can be beaten by a relatively simpler algorithm like Count, it does at least illuminate the fact that not all problems require an intricate parallelization. By exploring the various possibilities, we have shown that this is true for mining association rules.
Acknowledgments
Maurice Houtsma implemented a parallel version of the association-rule mining algorithm
presented in [1] on an earlier version of IBM POWERparallel System (called SP1) in which all nodes were diskless
and all data were funneled through a master node. Although we could not use this implementation because of
changes in architecture, communication library, and the basic algorithm, we benefited from this experience.
Howard Ho provided us the early prototype implementation of the MPI communication library to get us going.
Ramakrishnan Srikant patiently explained many nuances of the serial Apriori implementation. Discussions with
Mike Carey were influential in the initial stages of this work. Finally, several in the SP organization, particularly
Hieronymous, Sharon Selzo, and Bob Walkup, were wonderful in their help in arranging SP cycles for our
tests.
--R
Mining association rules between sets of items in large databases.
Parallel mining of association rules: Design
Fast Algorithms for Mining Association Rules.
Message Passing Interface Forum.
Discovery of multiple-level association rules from large databases
Scalable POWERparallel Systems
Efficient algorithms for discovering association rules.
An effective hash based algorithm for mining association rules.
Efficient parallel data mining for association rules.
An efficient algorithm for mining association rules in large databases.
Mining Generalized Association Rules.
--TR
--CTR
Hongjun Lu , Ling Feng , Jiawei Han, Beyond intratransaction association analysis: mining multidimensional intertransaction association rules, ACM Transactions on Information Systems (TOIS), v.18 n.4, p.423-454, Oct. 2000
Mohammed J. Zaki , Neal Lesh , Mitsunori Ogihara, PlanMine: Predicting Plan Failures Using Sequence Mining, Artificial Intelligence Review, v.14 n.6, p.421-446, December 1, 2000
Canasai Kruengkrai , Chuleerat Jaruskulchai, A parallel learning algorithm for text classification, Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, July 23-26, 2002, Edmonton, Alberta, Canada
Takahiko Shintani , Masaru Kitsuregawa, Parallel mining algorithms for generalized association rules with classification hierarchy, ACM SIGMOD Record, v.27 n.2, p.25-36, June 1998
Robert Grossman , Yike Guo, Data mining tasks and methods: parallel methods for scaling data mining algorithms to large data sets, Handbook of data mining and knowledge discovery, Oxford University Press, Inc., New York, NY, 2002
Assaf Schuster , Ran Wolff, Communication-Efficient Distributed Mining of Association Rules, Data Mining and Knowledge Discovery, v.8 n.2, p.171-196, March 2004
Dora Souliou , Aris Pagourtzis , Nikolaos Drosinos , Panayiotis Tsanakas, Computing frequent itemsets in parallel using partial support trees, Journal of Systems and Software, v.79 n.12, p.1735-1743, December, 2006
Min Song , Il-Yeol Song , Xiaohua Hu , Robert B. Allen, Integration of association rules and ontologies for semantic query expansion, Data & Knowledge Engineering, v.63 n.1, p.63-75, October, 2007
Assaf Schuster , Ran Wolff, Communication-efficient distributed mining of association rules, ACM SIGMOD Record, v.30 n.2, p.473-484, June 2001
Mohammed J. Zaki , Srinivasan Parthasarathy , Mitsunori Ogihara , Wei Li, Parallel Algorithms for Discovery of Association Rules, Data Mining and Knowledge Discovery, v.1 n.4, p.343-373, December 1997
Valerie Guralnik , George Karypis, Parallel tree-projection-based sequence mining algorithms, Parallel Computing, v.30 n.4, p.443-472, April 2004
Riedel , Christos Faloutsos , Gregory R. Ganger , David F. Nagle, Data mining on an OLTP system (nearly) for free, ACM SIGMOD Record, v.29 n.2, p.13-21, June 2000
Raymond T. Ng , Laks V. S. Lakshmanan , Jiawei Han , Alex Pang, Exploratory mining and pruning optimizations of constrained associations rules, ACM SIGMOD Record, v.27 n.2, p.13-24, June 1998
Masaru Kitsuregawa , Masashi Toyoda , Iko Pramudiono, Web community mining and web log mining: commodity cluster based execution, Australian Computer Science Communications, v.24 n.2, p.3-10, January-February 2002
Massimo Coppola , Marco Vanneschi, Parallel and distributed data mining through parallel skeletons and distributed objects, Data mining: opportunities and challenges, Idea Group Publishing, Hershey, PA,
Steve C. Chiu , Wei-keng Liao , Alok N. Choudhary , Mahmut T. Kandemir, Processor-embedded distributed smart disks for I/O-intensive workloads: architectures, performance models and evaluation, Journal of Parallel and Distributed Computing, v.64 n.3, p.427-446, March 2004
Shichao Zhang , Chengqi Zhang , Jeffrey Xu Yu, An efficient strategy for mining exceptions in multi-databases, Information Sciences: an International Journal, v.165 n.1-2, p.1-20, 3 September 2004
Asif Javed , Ashfaq Khokhar, Frequent Pattern Mining on Message Passing Multiprocessor Systems, Distributed and Parallel Databases, v.16 n.3, p.321-334, November 2004
Wen-Chih Peng , Ming-Syan Chen, Developing Data Allocation Schemes by Incremental Mining of User Moving Patterns in a Mobile Computing System, IEEE Transactions on Knowledge and Data Engineering, v.15 n.1, p.70-85, January
Ning , X. Sean Wang , Sushil Jajodia, Discovering calendar-based temporal association rules, Data & Knowledge Engineering, v.44 n.2, p.193-218, February
Ruoming Jin , Ge Yang , Gagan Agrawal, Shared Memory Parallelization of Data Mining Algorithms: Techniques, Programming Interface, and Performance, IEEE Transactions on Knowledge and Data Engineering, v.17 n.1, p.71-89, January 2005
Jiawei Han , Yongjian Fu, Mining Multiple-Level Association Rules in Large Databases, IEEE Transactions on Knowledge and Data Engineering, v.11 n.5, p.798-805, September 1999
Steve C. Chiu , Wei-keng Liao , Alok N. Choudhary , Mahmut T. Kandemir, Processor-embedded distributed smart disks for I/O-intensive workloads: architectures, performance models and evaluation, Journal of Parallel and Distributed Computing, v.65 n.4, p.532-551, April 2005
Shengnan Cong , Jiawei Han , Jay Hoeflinger , David Padua, A sampling-based framework for parallel data mining, Proceedings of the tenth ACM SIGPLAN symposium on Principles and practice of parallel programming, June 15-17, 2005, Chicago, IL, USA
Steve C. Chiu , Wei-keng Liao , Alok N. Choudhary, Distributed smart disks for I/O-intensive workloads on switched interconnects, Future Generation Computer Systems, v.22 n.5, p.643-656, April 2006
Gregory Buehrer , Yen-Kuang Chen , Srinivasan Parthasarathy , Anthony Nguyen , Amol Ghoting , Daehyun Kim, Efficient pattern mining on shared memory systems: implications for chip multiprocessor architectures, Proceedings of the 2006 workshop on Memory system performance and correctness, October 22-22, 2006, San Jose, California
Gregory Buehrer , Srinivasan Parthasarathy , Shirish Tatikonda , Tahsin Kurc , Joel Saltz, Toward terabyte pattern mining: an architecture-conscious solution, Proceedings of the 12th ACM SIGPLAN symposium on Principles and practice of parallel programming, March 14-17, 2007, San Jose, California, USA
Assaf Schuster , Ran Wolff , Dan Trock, A high-performance distributed algorithm for mining association rules, Knowledge and Information Systems, v.7 n.4, p.458-475, May 2005
Christopher R. Lumb , Jiri Schindler , Gregory R. Ganger , David F. Nagle , Erik Riedel, Towards higher disk head utilization: extracting free bandwidth from busy disk drives, Proceedings of the 4th conference on Symposium on Operating System Design & Implementation, p.7-7, October 22-25, 2000, San Diego, California
Wei Li , Ari Mozes, Computing frequent itemsets inside oracle 10G, Proceedings of the Thirtieth international conference on Very large data bases, p.1253-1256, August 31-September 03, 2004, Toronto, Canada
Laks V. S. Lakshmanan , Carson Kai-Sang Leung , Raymond T. Ng, The segment support map: scalable mining of frequent itemsets, ACM SIGKDD Explorations Newsletter, v.2 n.2, p.21-27, Dec. 2000
Amol Ghoting , Gregory Buehrer , Srinivasan Parthasarathy , Daehyun Kim , Anthony Nguyen , Yen-Kuang Chen , Pradeep Dubey, Cache-conscious frequent pattern mining on modern and emerging processors, The VLDB Journal The International Journal on Very Large Data Bases, v.16 n.1, p.77-96, January 2007
Frans Coenen , Paul Leng, Partitioning strategies for distributed association rule mining, The Knowledge Engineering Review, v.21 n.1, p.25-47, March 2006
Carson Kai-Sang Leung , Quamrul I. Khan , Boyu Hao, Distributed Mining of Constrained Patterns from Wireless Sensor Data, Proceedings of the 2006 IEEE/WIC/ACM international conference on Web Intelligence and Intelligent Agent Technology, p.248-251, December 18-22, 2006
Masahisa Tamura , Masaru Kitsuregawa, Dynamic Load Balancing for Parallel Association Rule Mining on Heterogenous PC Cluster Systems, Proceedings of the 25th International Conference on Very Large Data Bases, p.162-173, September 07-10, 1999
| data mining;parallel algorithms;association rules
627813 | The Role of Polymorphic Reuse Mechanisms in Schema Evolution in an Object-Oriented Database. | AbstractA seamless approach to the incremental design and reuse of object-oriented methods and query specifications is presented. We argue for avoiding or minimizing the effort required for manually reprogramming methods and queries due to schema modifications, and demonstrate how the role of polymorphic reuse mechanisms is exploited for enhancing the adaptiveness of database programs against schema evolution in an object-oriented database. The salient features of our approach are the use of propagation patterns and a mechanism for propagation pattern refinement. Propagation patterns are employed as an interesting specification formalism for modeling operational requirements. They encourage the reuse of operational specifications against the structural modification of an object-oriented schema. Propagation pattern refinement is suited for the specification of reusable operational modules. It promotes the reusability of propagation patterns toward the operational requirement changes. This approach has a formal basis and emphasizes structural derivation of specifications. The main innovations are in raising the level of abstraction for behavioral schema design, and for making possible the derivation of operational semantics from structural specifications. As a result, both the modularity and reusability of object-oriented schemas are increased. | Introduction
Schema evolution, in general, is the ability of a database system to respond to the real world requirement
changes by allowing the schema to evolve as seamlessly as possible. Seamless extension of an
object-oriented schema is important not only for increasing application developers' productivity but
also for facilitating and supporting extensibility. For example, if additional functionality can be added
seamlessly, existing application programs may either optionally ignore it or only require minimal modifications
when the added functionality becomes available. Therefore, how to effectively manage the
impact of schema modification, clearly, becomes an important issue for achieving such seamlessness.
(Acknowledgments: The work carried out at the University of Frankfurt was supported partly by the European Committee under ESPRIT Project 6612 F-cube. The work at Northeastern University was supported in part by the IBM Corporation, Mettler-Toledo AG, and the National Science Foundation under Grants CCR-9102578, CCR-9402486, and CDA-9015692 (Research Instrumentation). The current address of Ling Liu is Department of Computer Science, University of Alberta, Edmonton, T6G 2H1, Alberta, Canada.)
We argue that one way to achieve seamless extensions is to employ polymorphic reuse mechanisms in
object-oriented database specifications. Thus, application programs can remain syntactically unchanged
or can be incrementally modified in the presence of schema evolution. In this paper, we assume that
schema modifications for an object-oriented database system are performed after the database is populated
with object instances, and application programs have been implemented and tested. Thus, the
impact of schema modifications implies not only the propagation of restructuring operations into the
database instances, but also the reprogramming of existing application programs (e.g., relevant methods
and queries). For example, in most existing method definition or query specification languages, each
name used in methods or queries must be associated with a precise path expression in order to traverse
the nested structure of the objects. Whenever a schema modification involves more than one existing
class, the path expressions relevant to those classes are changed in the modified schema. The methods
and queries which use those "old" path expressions must be updated accordingly to enable them to
be valid in the modified schema. Up to now, many researchers have studied issues related to avoiding
database restructuring and reorganization due to schema modification ([1, 3, 10, 21, 22, 23]). However,
the issue of avoiding or minimizing database reprogramming due to schema modification has received
surprisingly little attention in the database research community.
Why should reprogramming due to schema modification be avoided? Reprogramming of
object methods and database queries usually follows evolutionary changes of the logical object structure
(i.e., the database schema). Operations for reprogramming of methods and queries can be expensive,
especially when the relevant application programs are large and complex. Moreover, these operations
conflict with the reuse of software components and with the objective of seamless extension.
How can reprogramming be avoided? The concept of polymorphism and the mechanisms for
reuse of software components are useful utilities for avoiding or minimizing the reprogramming effort
required by schema modifications. One of the major reasons for manual reprogramming of methods
and queries after schema modifications is to keep the path expressions required in method definitions
or query specifications consistent with the modified schema. The precise knowledge of path expressions
is actually derivable from the logical object structure of the corresponding schema, although very
few object-oriented systems (and none of the existing object-oriented DBMS products we know of)
include support for structuring and deriving operational semantics from structural specifications. We
believe that adding support for automatically or semi-automatically deriving the semantics of operation
propagation over the hierarchical structure of complex objects opens new possibilities for the reuse of
operational specifications (such as methods or query programs) in object-oriented database systems.
Can reprogramming always be avoided? In most cases, when a schema modification incurs a
change in the propagation paths of existing methods or queries (e.g., a new class is added in between
two of the existing classes having construction (is-part-of) relationship), or when a schema modification
changes the properties of objects (e.g., a new property is added to an existing class), manual
reprogramming of existing methods or queries (due to schema modification) can be avoided by structural
derivation of operation propagation semantics, especially when polymorphic reuse mechanisms
are employed for the specification of methods and database queries. Unfortunately, when a schema
modification has substantially updated the logical object structure of a schema (in particular, when a
schema modification changes the minimal knowledge required for specifying a method or a query), the
reprogramming cannot be avoided completely.
With these baselines in mind, we propose a seamless approach to the incremental design and reuse of
object-oriented methods and query specifications and show how the polymorphic reuse mechanisms are
exploited for improving the adaptiveness of software programs against schema modification in an object-oriented
database. We argue that, by using this approach, operational specifications become more robust
and adaptive towards schema modifications. The effort to manually reprogram methods and queries
necessitated by schema modifications can be avoided or minimized. The salient features of our approach
are the use of propagation patterns and a mechanism for propagation pattern refinement. Propagation
patterns can be seen as an interesting specification formalism for modeling operational requirements
in object-oriented database systems. They encourage the reuse of operational specifications against
the structural modification of an object-oriented schema. Using propagation patterns provides method
designers and query writers with an opportunity to specify operations without detailed navigational
information. Propagation pattern refinement is suited for the specification of reusable operational mod-
ules. It promotes the reusability of propagation patterns towards the operational requirement changes.
We provide a number of examples to illustrate the concepts of propagation patterns and propagation
pattern refinement, and to show why these concepts are important polymorphic reuse mechanisms and
how they are employed to avoid or to minimize the effort required to manually reprogram methods
and queries after schema modifications.
In Section 2, we give a brief presentation of our reference object model. We discuss propagation patterns
and their formal semantics in Section 3. Several characteristics of propagation patterns are formally
studied too. In Section 4, we introduce a mechanism for propagation pattern refinement, present the
formalization of the concept and a number of examples for illustration. We compare our approach with
related work in Section 5. Section 6 concludes with a summary and a discussion on implementation
considerations as well as further research directions.
2 The Reference Object Model
We use the kernel of the Demeter data model [16] as our reference object model because this allows
us to show how our polymorphic approach is directly available using an existing tool: the Demeter
System/C++™ ([11], [12], [19]). In the object reference model, we describe the structure of objects
and classes in terms of a class dictionary graph (or so called schema graph). Two kinds of classes
are distinguished: alternation classes and construction classes. Alternation classes are regarded as abstract
classes. Construction classes are instantiable classes. Two kinds of relationships are distinguished
between classes: inheritance relationships (called alternation edges) and object reference relationships
(called construction edges). Information about what methods need to be attached to a class is deliberately
omitted from the class dictionary graph at this stage; it will be "injected" into a class via
propagation patterns at method propagation time. (See Section 3 for details.)
Definition 1 (class dictionary graph)
A class dictionary graph G is defined as a labeled, directed graph G = (V, L, EC, EA), where V = VC ∪ VA; VC
and VA are finite sets of construction and alternation vertices, respectively; both are collectively called
the vertices of G; L is an ordered set of labels, each described by a character string; EC
is a ternary relation on V × L × V, representing construction edges; EA is a binary relation on V × V,
representing alternation edges. □
For presentation brevity, we sometimes denote a class dictionary graph simply as G = (V, E), with
V = VC ∪ VA and E = EC ∪ EA.
Example 1 Suppose we want to model a document which is described by its title, the authors, a date,
annotations, and a document body. The annotations consist of a number of pages. A document body
contains a number of components each of which consists of a collection of pages. A component can
either be a text component or a figure component. A class dictionary graph representing the above
situation is shown in Figure 1.
[Figure 1: A class dictionary graph of schema Document. It shows the construction classes Document, Doc-Body, Text, Figure, Annotation, Page, Date and String, the alternation class Component, construction edges labeled title, authors, date, annotate, doc-body, components, pages and content, and a legend distinguishing one-to-one construction edges, one-to-many construction edges, alternation edges, construction classes and alternation classes.]
The class dictionary graph of this example is described as follows:
VC = {Document, Doc-Body, Page, Text, Figure, Annotation, String, Date}; VA = {Component};
EC = {(Document,title,String), (Document,authors,String), (Document,date,Date), (Document,doc-body,Doc-Body),
(Document,annotate,Annotation), (Doc-Body,components,Component),
(Component,pages,Page), (Annotation,content,Page)};
EA = {(Component,Text), (Component,Figure)}.
Definition 2 (alternation reachable: ⇒)
Let G = (V, E) be a class dictionary graph. A vertex u ∈ V is alternation reachable from vertex w ∈ V,
denoted by w ⇒ u, if and only if (iff) one of the following conditions is satisfied:
(i) u = w;
(ii) (w, u) ∈ EA;
(iii) ∃ w' ∈ V such that w ⇒ w' and w' ⇒ u.
Let us take a look at Figure 1. According to Definition 2(ii), class Text is alternation reachable from
class Component. Note that "⇒" is the reflexive transitive closure of EA; it is reflexive, antisymmetric
and transitive. No cyclic alternation path is allowed. We say a class dictionary graph is legal if there
are no cyclic alternation paths, i.e., for all u, v ∈ V, u ⇒ v and v ⇒ u imply u = v.
In the sequel, we assume that class dictionary graphs are legal.
Definition 3 (construction reachable: →)
Let G = (V, E) be a class dictionary graph. A vertex u ∈ V is construction reachable from w ∈ V, denoted
by w → u, iff one of the following conditions is satisfied:
(i) ∃ ℓ ∈ L such that (w, ℓ, u) ∈ EC;
(ii) ∃ w' ∈ V and ℓ ∈ L such that w' ⇒ w and (w', ℓ, u) ∈ EC;
(iii) ∃ w' ∈ V such that w → w' and w' → u.
Consider Figure 1: by condition (ii) of this definition, class vertex Page is construction reachable
from class vertex Text.
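To make the two reachability relations concrete, here is a small C++ sketch (an added illustration, not part of the original text; it assumes an acyclic graph and uses the vertex and label names of Figure 1) that stores the EA and EC relations of a class dictionary graph and checks Definitions 2 and 3 by naive recursion.

#include <iostream>
#include <set>
#include <string>
#include <tuple>
#include <utility>

// A minimal class dictionary graph: EA holds alternation edges (w, u),
// EC holds construction edges (w, label, u).
struct ClassDictionaryGraph {
    std::set<std::pair<std::string, std::string>> EA;
    std::set<std::tuple<std::string, std::string, std::string>> EC;

    // Definition 2: w ==> u (u is alternation reachable from w).
    bool altReachable(const std::string& w, const std::string& u) const {
        if (w == u) return true;                                      // condition (i)
        for (const auto& e : EA) {
            if (e.first == w && e.second == u) return true;           // condition (ii)
            if (e.first == w && altReachable(e.second, u)) return true; // condition (iii)
        }
        return false;
    }

    // Definition 3: w --> u (u is construction reachable from w).
    bool consReachable(const std::string& w, const std::string& u) const {
        for (const auto& e : EC) {
            const std::string& src = std::get<0>(e);
            const std::string& dst = std::get<2>(e);
            // w owns (or inherits, via alternation) this construction edge
            bool partOfW = (src == w) || altReachable(src, w);
            if (!partOfW) continue;
            if (dst == u) return true;                 // conditions (i)/(ii)
            if (consReachable(dst, u)) return true;    // condition (iii): go on through the part
        }
        return false;
    }
};

int main() {
    ClassDictionaryGraph g;
    g.EA = {{"Component", "Text"}, {"Component", "Figure"}};
    g.EC = {{"Document", "doc-body", "Doc-Body"},
            {"Doc-Body", "components", "Component"},
            {"Component", "pages", "Page"},
            {"Annotation", "content", "Page"}};
    std::cout << g.altReachable("Component", "Text") << "\n";  // 1
    std::cout << g.consReachable("Text", "Page") << "\n";      // 1, inherited from Component
    std::cout << g.consReachable("Document", "Page") << "\n";  // 1, via Doc-Body and Component
}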
By applying construction reachability and alternation reachability, we may directly obtain the inheritance
property of a class hierarchy [4] over alternation edges. More specifically, when a class vertex v
is alternation reachable from vertex u, then for any vertex w ∈ V which is construction reachable from
u, w is also construction reachable from v. We prove this property in the following Proposition.
Proposition 1 Let G = (V, E) be a class dictionary graph. For any vertices u, v ∈ V, if v
is alternation reachable from u (i.e., u ⇒ v), then for any class vertex w ∈ V, if u → w holds, we
have v → w.
Proof:
Given u ⇒ v, let w ∈ V such that u → w. The proof follows the cases of Definition 2.
(i) when u = v, the proposition is trivially satisfied.
(ii) when (u, v) ∈ EA, from u → w, we follow the cases of Definition 3.
Case (a): if ∃(u, ℓ, w) ∈ EC, then by u ⇒ v and Definition 3(ii), we have v → w.
Case (b): if ∃u' ∈ V such that u' ⇒ u and (u', ℓ, w) ∈ EC, then by u ⇒ v
and Definition 2(iii), we have u' ⇒ v; by Definition 3(ii), we get v → w.
Case (c): if ∃u' ∈ V such that u → u' and u' → w, then by induction on u → u'
(i.e., the number of construction edges from u to u') and Definition 3(ii), we obtain v → u'
since u ⇒ v. From v → u' and u' → w, by using Definition
3(iii), we obtain v → w.
(iii) when ∃u' s.t. u ⇒ u' and u' ⇒ v, by induction on u → w and Definition 3(ii),
we obtain u' → w since u ⇒ u'. Similarly, by induction on u' → w and u' ⇒ v, we
obtain v → w. □

OPERATION void print-document()
FROM Document
BYPASSING *,annotate,*
TO Page
WRAPPER Document
PREFIX (@ date -> g-print() @)
WRAPPER Page
PREFIX (@ this -> g-print() @);

Figure 2: The propagation pattern "print-document".
3 Propagation Patterns
3.1 Informal Overview
The concept of propagation patterns was originally introduced in the Demeter System™ in order to
specify object-oriented programs at a higher level of abstraction ([13], [15]). We believe that propagation
patterns are also a useful conceptual programming technique for database applications, which
enables system designers and programmers to conceptualize application programs and system behavior
with minimal knowledge of the data structure. Propagation patterns are seen as a kind of behavioral
abstraction of application programs which define patterns of operation propagation by reasoning
about the behavioral dependencies among cooperating objects. They have proved to be an effective aid
for building highly adaptive database programs (methods and queries) and for supporting incremental
schema evolution. Consider the following example.
Example 2 Consider the class dictionary graph as shown in Figure 1. Suppose we want to have a
method "print-document" which prints the creation date of a document and the entire document body
but none of its annotations. We may define the method by writing the following propagation pattern
(see Figure 2).
This propagation pattern states that the method "print-document" will print the creation date of a
document before printing the entire document content. Besides, all the Annotation objects of a document
will be excluded from this printing task. The idea behind this propagation pattern is based on the
fact that a number of classes in the class dictionary of Figure 1 need to cooperate to accomplish the task
"print-document", but only a little information is necessary for specifying this task since the rest can easily
be derived from the structural specifications of the schema. For instance, in Figure 2, we specify the interface
of the method to be propagated with the clause OPERATION void print-document. The source
of this propagation pattern is given with the clause FROM Document, specifying where the propagation
pattern starts. The target of the propagation pattern is provided with the clause TO Page, indicating
which class(es) the propagation pattern terminates with. The clause BYPASSING *,annotate,*
identifies the restriction (propagation constraints) over this propagation pattern in order to exclude
all the annotations from this printing task. The source clause, the target clause and the propagation
constraint clause together are called a propagation directive of pattern "print-document". Note
that if instead, only the clause FROM Document and clause TO Page had been used, then the edge
(Document,annotate,Annotation) would have participated in the propagation too, an undesired ef-
fect. The clauses WRAPPER Document and WRAPPER Page, followed by the actual programming code
(e.g., C++ code) surrounded by "(@" and "@)", specify the method body. We provide a detailed
syntax description in Appendix A.
Remarks: (i) Writing a propagation pattern does not require knowledge of the detailed data struc-
ture. One obvious benefit of this feature is to allow reuse of propagation patterns at hand for several
similar data structures and thus to increase the adaptiveness of the operational specifications against
future schema changes. For instance, for writing the propagation pattern "print-document", the minimal
knowledge we need is Document, Page, Date, and annotate, and, as Figure 2 shows, they
are the critical information (called hooks) for defining this function. Suppose now we need to modify the
schema of Figure 1 by changing the layout of the Document logical structure (see Figure 3). The schema
modification as such needs no reprogramming of the method "print-document", although the path from
Document to Page is changed in the modified schema, because all the critical information for specifying
"print-document" is unchanged and included in the modified Document schema. Thus, the above propagation
pattern "print-document" can still be used as a valid and meaningful propagation pattern to
this modified Document schema. (See Section 3.3 for further explanation.)
(ii) If the BYPASSING option is not included in the above propagation pattern, the propagation path
implied by the given propagation directive will include both the path from Document through doc-body
to Page and the path from Document through annotate to Page. It means that the task "print-
document" will print both the entire document content of a document itself and all the annotations of
it.
(iii) Propagation patterns can automatically be translated at so-called method propagation time into
code written in any object-oriented programming language (e.g., C++). The code fragments are inserted
into those classes which participate in the propagation pattern traversal. We will provide an illustration
of this point in Section 3.3.
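As a rough preview of the kind of code meant here, the following hand-written C++ sketch shows the sort of methods such a translation could attach to the classes of Figure 1 for "print-document". It is an assumption-laden illustration (one-to-many edges are collapsed to single parts, and member names are chosen for readability), not the literal output of the Demeter tools.

#include <iostream>

struct Date { void g_print() { std::cout << "date\n"; } };
struct Page {
    void g_print() { std::cout << "page\n"; }
    void print_document() { this->g_print(); }          // prefix wrapper of Page
};
struct Component {
    Page* pages;
    void print_document() { pages->print_document(); }  // default traversal code
};
struct Doc_Body {
    Component* components;
    void print_document() { components->print_document(); }
};
struct Document {
    Date* date;
    Doc_Body* doc_body;
    void print_document() {
        date->g_print();             // prefix wrapper of Document
        doc_body->print_document();  // traversal restricted to the propagation scope;
                                     // no code is generated for the bypassed annotate part
    }
};

int main() {
    Date d; Page p; Component c{&p}; Doc_Body b{&c}; Document doc{&d, &b};
    doc.print_document();            // prints "date" then "page"
}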
[Figure 3: A modified class dictionary graph of schema Document. Among the classes shown are Document, Article, Chapter, Section, Annotation, Page, Figure, Date and String, with edge labels including title, authors, affiliate, date, annotate, content, chapters, sections and pages.]

Contrast the above robustness of propagation patterns with conventional object-oriented database languages
and their reaction to schema modifications. To express the operation "print-document"
using most of the existing object-oriented languages, the method designers or query writers must refer
to the objects of interest by their precise path expressions. For example, using a SQL-like language,
the propagation pattern "print-document", defined on schema Document (Figure 1), can be expressed
as follows:
PRINT d.date, d.title, d.authors, d.doc-body.components.pages
FROM d in Document.
If the schema is modified such that items in the above path expression are involved in the modifica-
tion, then the SQL-like operation becomes invalid. For instance, if the Document schema is updated to
the schema in Figure 3 by adding a new property, affiliate, to Document objects, and inserting a class
Section in between Doc-Body and Component, then the above expression d.doc-body.components.pages
becomes invalid. It must be manually updated to d.doc-body.sections.components.pages. For this
single case this might not seem like a lot of work. However, if many routines in a number of application
programs are implemented over the "old" schema, one single modification of the schema could possibly
require massive rewriting of the routines in all the relevant application programs, a rather tedious task.
In contrast, by using propagation patterns, schema modifications usually have little or no impact on
the existing database programs because propagation pattern specifications require neither knowledge of
the detailed structure of the schema nor the navigational information of how to traverse the schema.
For example, the propagation pattern "print-document" defined in Figure 2 remains valid for the modified
Document schema and thus there is no need to update this propagation pattern even after the
modification of Document schema.
In summary, propagation patterns provide a novel method specification technique which promotes
adaptive object-oriented schema design. They can be used as a database programming language for
enhancing the robustness of database programs. Adaptiveness and robustness of propagation patterns are
achieved by delaying the binding of the concrete propagation paths used in each method or query
specification from method (or query) writing time to operation propagation time, prior to compile time.
3.2 Formal Definitions
In order to define some reuse mechanisms for propagation patterns, we below present the formal definitions
of propagation directives and propagation patterns and then introduce the concept of propagation
scope.
Definition 4 (propagation directive)
Given a class dictionary graph G = (V, E), a propagation directive δ over G is defined by a
triple δ = (F, PC, T), where
• F denotes a nonempty set of source vertices and F ⊆ V;
• PC denotes propagation constraints and PC = (I, X), where
  – I is a set of through edges called restriction constraints;
  – X is a set of bypassing edges called exclusion constraints;
• T denotes a nonempty set of target vertices and T ⊆ V. □
Each propagation directive specifies a set of propagation paths and is described by a set of source
vertices, a set of propagation constraints and a set of target vertices.
Definition 5 (propagation pattern)
Let G = (V, E) be a class dictionary graph. A propagation pattern α over G is defined by a
triple α = (M, PD, MA), where
• M(α) is a method interface defined by a triple (u, mn, Lpa), where
  – u denotes the output type, which is either a class vertex or the keyword "void",
    indicating an empty result type,
  – mn is the method name,
  – Lpa is a list of n (n ≥ 0) parameters; that is, Lpa = ((pn1, v1), ..., (pnn, vn)), where each
    vi ∈ V and pni is the parameter name of type vi;
• PD(α) denotes a nonempty set of propagation directives over G; that is, PD(α) = {δ1, ..., δm}, m ≥ 1;
• MA(α) is a set of method annotations. A method annotation consists of a set of prefix or suffix
  components, denoted as (w, fg_PRE) or (w, fg_SUF), where w is a class vertex in V, and fg_PRE and
  fg_SUF each denote a code fragment, describing the user-defined method implementation;
  each fg is a character string containing a code fragment. □
Note that in this definition, we have classified a method annotation into prefix and suffix code fragments.
Such a classification plays an important role in identifying the activation sequence of those fragments.
We refer to an activation sequence of wrapper code fragments of a propagation pattern as the wrapper
order of the propagation pattern. When specific object types are encountered during the traversal, the
prefix code fragments are to be executed before suffix code fragments. The exact execution sequence
depends jointly on the propagation pattern, the class dictionary graph, and the object being traversed.
In general, the following rules for wrapper execution order hold. When a class in the propagation scope
(see Def. 7) is traversed,
1. its prefix fragments are executed before its suffix fragments;
2. if the class has more than one prefix, all its prefix fragments are executed in the order they appear
textually in the propagation pattern definition;
3. all suffix wrappers of the class are executed in the reverse textual order;
4. if a class vertex v is alternation reachable from another class vertex u, then the prefix wrapper
of u is executed before the prefix wrapper of v, and the suffix wrapper of u is executed after the
suffix wrapper of v.
5. if a class vertex v is construction reachable from another class vertex u, then the prefix wrapper
of u is executed before the prefix wrapper of v, and the suffix wrapper of u is executed after the
suffix wrapper of v.
Interesting to note is that Rule (4) describes the dependency relationship between code fragments of a
specialized class and code fragments of a more general class. The specific code could be dependent on
the general code, but the general code should not depend on the specific code at all. Rule (5) specifies
the dependency between a component class and its container class.
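The following tiny C++ sketch (illustrative names only, added here for concreteness) shows what rules 1 and 5 amount to in generated code: the container's prefix fragment runs before the component is traversed, and its suffix fragment runs afterwards.

#include <iostream>

struct Part {
    void task() {
        std::cout << "Part prefix\n";   // prefix wrapper of the component class
    }
};
struct Whole {
    Part part;
    void task() {
        std::cout << "Whole prefix\n";  // rule 1/5: container prefix runs first
        part.task();                    // traversal into the part
        std::cout << "Whole suffix\n";  // rule 5: container suffix runs after the part
    }
};

int main() {
    Whole w;
    w.task();   // prints: Whole prefix, Part prefix, Whole suffix
}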
Example 3 Consider the propagation pattern defined in Figure 2. It has two wrapper fragments:
(Document, fg_PRE^Document) and (Page, fg_PRE^Page). The wrapper execution order of these fragments is as follows:
(Document, fg_PRE^Document) is executed before (Page, fg_PRE^Page), since Page is construction reachable from Document.
To capture the relationships between a propagation pattern, a class dictionary graph, and an object
being traversed, we introduce the notions of propagation paths, propagation scope, as well as compatibility
of propagation patterns. Readers who are interested in formal semantics of the wrapper orders and
implementation considerations may refer to [?, 18, 8].
3.3 Scope and Compatibility of Propagation Patterns
In this section, we define the scope and compatibility of propagation patterns. The scope of a propagation
pattern identifies for a given class dictionary graph the complete set of classes (vertices) and
relationships (edges) involved in executing the task specified by the propagation pattern. The compatibility
of a propagation pattern defines a family of class dictionary graphs to which the propagation
pattern is directly applicable.
Definition 6 (legal path)
Let G = (V, E) be a class dictionary graph. For two vertices u and v in G, the predicate u ⇝ v (there is a
legal path from u to v) holds if it can be recursively derived using the following construction rules:
(i) u ⇒ v;
(ii) u → v;
(iii) ∃ w ∈ V such that u ⇝ w and w ⇝ v. □
Note that, for any legal path p: u ⇝ v, the set of edges E(p) involved in a given path p can be
represented as:
E(p) = {e | e ∈ EC ∪ EA and e is contained in p}.
The computation of E(p) can easily be obtained by using Definition 2, Definition 3 and Definition 6.
Put differently, we may compute E(p) by recursively constructing the edges involved in the path p: u ⇝ v
as follows.
• If u ⇒ v, then by Definition 2, either u = v, or (u, v) ∈ EA, or ∃w ∈ V such that u ⇒ w and w ⇒ v;
  the alternation edges encountered are added to E(p).
• If u → v, then by Definition 3, two cases are possible: either there is a (possibly inherited) construction
  edge leading from u to v, which is added to E(p), or ∃w ∈ V such that u → w and w → v, and the
  edges of the two subpaths are collected recursively.
Note that if u ⇝ v there could be several legal paths from u to v.
Example 4 Consider Document and Page in Figure 1 as an example. By Definition 3(ii), we have Text
→ Page and Figure → Page, because Component ⇒ Text and Component ⇒ Figure hold,
and (Component, pages, Page) ∈ EC. Thus, according to Definition 6, we get the following legal
paths from Document to Page.
1. Document ⇝ Doc-Body ⇝ Component ⇝ Text ⇝ Page;
2. Document ⇝ Doc-Body ⇝ Component ⇝ Figure ⇝ Page;
3. Document ⇝ Doc-Body ⇝ Component ⇝ Page;
4. Document ⇝ Annotation ⇝ Page.
For the unique legal path from Document through Text to Page, we have:
E(Document ⇝ Page) = {(Document,doc-body,Doc-Body), (Doc-Body,components,Component),
(Component,pages,Page)}.
Actually, due to the fact that class Component is a generalization of classes Figure and Text, traversing
the third path implies that both the first path and the second one will be visited.
Definition 7 (propagation scope)
Given a class dictionary graph G = (V, E) and a propagation pattern α = (M, PD, MA)
defined over G, let δi ∈ PD(α) (1 ≤ i ≤ m) denote a propagation directive
described by δi = (Fi, PCi, Ti) with PCi = (Ii, Xi). We say a legal path p satisfies the propagation constraint
PCi iff the following hold: every through edge in Ii lies on p, and no bypassing edge in Xi lies on p.
Let Pi be the set of all possible legal paths from a vertex u ∈ Fi to a vertex v, v satisfying that v ∈ Ti or
v is alternation reachable from some vertex in Ti, such that each p ∈ Pi satisfies PCi.
The propagation scope of a directive δi, denoted by PS(δi, α), is defined as
follows:
PS(δi, α) = ∪ over p ∈ Pi of E(p).
The propagation scope of pattern α, denoted as PS(α), is described by the union of the scopes of the
propagation directives of α; that is,
PS(α) = ∪ over 1 ≤ i ≤ m of PS(δi, α). □
Note that for any given propagation pattern, the set of propagation directives decides what the propagation
scope is, whereas the method signature and the method annotations specify what is propagated
within this scope. A detailed discussion can be found in [18].
Example 5 Consider Figure 1, and the propagation pattern given in Example 2 (call it α). Let
PS(δ, α) denote the propagation scope of the directive δ denoting FROM Document BYPASSING *,annotate,*
TO Page. Let PS(α) be the propagation scope of pattern α over the schema (class dictionary graph)
defined in Figure 1. Thus, we have:
PS(δ, α) = {(Document,doc-body,Doc-Body), (Doc-Body,components,Component),
(Component,pages,Page), (Component,Text), (Component,Figure),
(Document,authors,String), (Document,title,String), (Document,date,Date)}.
As propagation pattern α has only one propagation directive, we have PS(α) = PS(δ, α).
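For intuition, the following simplified C++ sketch (construction edges only, no alternation edges; an added illustration rather than the paper's algorithm) enumerates the legal paths from the source to the target that avoid the bypassed edge label and collects their edges, for the directive of Example 2.

#include <iostream>
#include <set>
#include <string>
#include <tuple>
#include <vector>

using Edge = std::tuple<std::string, std::string, std::string>; // (from, label, to)

// Depth-first enumeration of paths from `current` to `target` that avoid
// bypassed labels; the edges of every such path are added to `scope`.
void collect(const std::vector<Edge>& EC,
             const std::string& current, const std::string& target,
             const std::set<std::string>& bypassedLabels,
             std::vector<Edge>& path, std::set<Edge>& scope) {
    if (current == target) {
        for (const Edge& e : path) scope.insert(e);   // a legal path: keep its edges
        return;
    }
    for (const Edge& e : EC) {
        if (std::get<0>(e) != current) continue;
        if (bypassedLabels.count(std::get<1>(e))) continue;  // exclusion constraint
        path.push_back(e);
        collect(EC, std::get<2>(e), target, bypassedLabels, path, scope);
        path.pop_back();
    }
}

int main() {
    std::vector<Edge> EC = {
        {"Document", "annotate", "Annotation"}, {"Annotation", "content", "Page"},
        {"Document", "doc-body", "Doc-Body"},   {"Doc-Body", "components", "Component"},
        {"Component", "pages", "Page"}};
    std::vector<Edge> path;
    std::set<Edge> scope;
    collect(EC, "Document", "Page", {"annotate"}, path, scope);
    for (const Edge& e : scope)
        std::cout << std::get<0>(e) << " -" << std::get<1>(e) << "-> " << std::get<2>(e) << "\n";
}

Running it prints the edges of the doc-body path from Document to Page; the annotate path is excluded by the constraint.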
It is important to note that a propagation pattern, which is syntactically correct according to Definition
5, may not be semantically correct. For example, consider a simple class dictionary graph G with four
construction classes A, B, C and D, where A has a B part (via the edge labeled lab) and a C part, and
both B and C have a D part. It means that objects of class A are composed of objects of class B and of
class C, while objects of both class B and class C contain objects of class D as their subobjects. Now
assume that we have the following two propagation patterns PP1 and PP2:

Operation void PP1
From A
To D
Wrapper A
.
Wrapper C
.
Wrapper D
.

Operation void PP2
From A
Through (A,lab,B)
To D
Wrapper A
.
Wrapper C
.
Wrapper D
.

Obviously, both are syntactically correct propagation patterns over G in terms of Definition 5. How-
ever, PP2 is a semantically incorrect propagation pattern over G, because class C is not included in the
propagation scope of PP2 and the wrapper reference to C in PP2 is invalid.
One possible means to define and check the semantic correctness of a propagation pattern over a given
class dictionary graph is to use the concept of propagation scope.
Definition 8 (semantic correctness of a propagation pattern)
Given a class dictionary graph G = (V, E) and a propagation pattern α = (M, PD, MA) defined
over G with propagation scope PS(α), V_PS(α) denotes the set of class vertices involved in PS(α)
and is defined as follows:
(i) if (u, ℓ, v) ∈ PS(α), then u ∈ V_PS(α) and v ∈ V_PS(α);
(ii) if (u, v) ∈ PS(α), then u ∈ V_PS(α) and v ∈ V_PS(α);
(iii) nothing else is in V_PS(α).
The propagation pattern α is said to be semantically correct over G, if and only if, for any (u, fg^u) ∈
MA(α), we have u ∈ V_PS(α). □
In the sequel, we assume that all propagation patterns we deal with are correct both in syntax and in
semantics.
The major advantage of propagation patterns is that they can be applied to not just one class dictionary
graph but to a family of class dictionary graphs. This property makes them truly adaptive and reusable.
In order to determine the set of class dictionary graphs compatible with a given propagation pattern
we need to define the notion of compatibility. We say a propagation pattern to be compatible with a
class dictionary graph if the class dictionary includes all the information contained in the hooks of this
propagation pattern. Informally, the hooks of a propagation pattern consist of the To and From vertices,
the labels referred to in the Through and Bypassing edges, and the vertices referred to in the method
interface and the method annotations.
Definition 9 (compatibility of a propagation pattern)
Let Γ be a set of class dictionary graphs and, for G ∈ Γ, let V(G), EA(G) and L(G) be the set of class
vertices, the set of alternation edges and the set of labels of G, respectively. Let α = (M, PD, MA) be a
propagation pattern defined over G and let HK(α) denote the set of key information of pattern α (we call
them the "hooks" of pattern α). For any x ∈ V(G) ∪ L(G), we define that x ∈ HK(α) iff one
of the following conditions is verified:
(i) x is a source or a target vertex of a propagation directive in PD(α);
(ii) x is a label referred to in a through or bypassing edge of a propagation directive in PD(α);
(iii) x is a class vertex referred to in the method interface M(α) or in a method annotation in MA(α).
Then, for any G' ∈ Γ, we say α is compatible with G' iff HK(α) ⊆ V(G') ∪ L(G'). □
Example 6 Recall the propagation pattern "print-document" defined in Example 2, denoted as α. Let
G1 and G2 be the class dictionary graphs in Figure 1 and Figure 3, respectively. We have
HK(α) = {Document, annotate, Page, Date}.
Thus, the propagation pattern "print-document" is compatible with both G1 and G2.
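The compatibility test itself is a simple containment check. The following C++ sketch (an added illustration; the vertex and label sets are only a fragment of Figure 3) applies Definition 9 to the hooks of "print-document".

#include <iostream>
#include <set>
#include <string>

// A pattern is applicable to a class dictionary graph when every hook occurs
// among the graph's class vertices or edge labels (Definition 9).
bool compatible(const std::set<std::string>& hooks,
                const std::set<std::string>& classVertices,
                const std::set<std::string>& edgeLabels) {
    for (const std::string& h : hooks)
        if (!classVertices.count(h) && !edgeLabels.count(h))
            return false;           // a hook is missing from the schema
    return true;
}

int main() {
    std::set<std::string> hooks = {"Document", "annotate", "Page", "Date"};
    // A fragment of the vertices and labels of the modified schema of Figure 3.
    std::set<std::string> classes = {"Document", "Article", "Chapter", "Section",
                                     "Page", "Annotation", "Date", "String"};
    std::set<std::string> labels  = {"title", "authors", "affiliate", "date",
                                     "annotate", "content", "chapters", "sections", "pages"};
    std::cout << (compatible(hooks, classes, labels) ? "compatible" : "not compatible") << "\n";
}

With the annotate label removed from the label set (and no class of that name), the check fails, signalling that not all hooks of the pattern are present in the schema.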
3.4 Polymorphic Character of Propagation Patterns
As stated earlier, by using propagation patterns to model the dynamic part of an object-oriented
database system, we may achieve a certain degree of adaptiveness and flexibility of the database specifications
against future changes, especially with respect to several types of structural changes. For
example, the propagation pattern given in Figure 2 is defined over the class dictionary graph of Figure
1, but it can also be used as a propagation pattern for the class dictionary graph given in Figure 3,
because both class schemas include Document objects, which have Page objects as (sub)parts. By using
propagation patterns, schema designers and programmers may focus on only the most interesting
components of the class structure. No precise knowledge about how the structural details are modeled
in a particular schema (class dictionary graph) is required. We refer to this particular feature as the
polymorphic character of propagation patterns.
It is interesting to note that, for the same pattern print-document, its propagation scope over the class
schema of Figure 1 is quite different from the one over the class schema of Figure 3. Although the same
propagation pattern specification is valid for both schemas, the binding of it to the involved classes and
the code generated based on it are different. The polymorphic character of this propagation pattern is
only apparent. Generally speaking, the polymorphism of propagation patterns belongs to the family of
ad-hoc polymorphism [5]. We provide another example below.
Example 7 Suppose we define the Trip schema (class dictionary graph) as shown in Figure 4. The
Trip objects have parts called departure and arrival which can be printed. A Trip object contains
a list of Location objects each of which has an Ident object as a component, describing the city to
be visited during the trip. The list of Location objects is here modeled by a third kind of class, a
repetition class. A repetition class implements the one-to-many relationship between Trip class and
Location class. For the remainder of this paper, we will replace one-to-many construction edges with
repetition classes in order to show actual method code for all the classes.
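For concreteness, one possible hand-written shape of such a repetition class and its iterator is sketched below in C++. It merely mirrors the calling style seen in the generated code in the figures that follow (an iterator used as a function object that returns the next element or a null pointer); the classes actually generated by the Demeter System/C++ may differ.

#include <cstddef>
#include <iostream>
#include <vector>

class Location { public: void g_print() { std::cout << "location\n"; } };

// The repetition class implementing the one-to-many "locations" relationship.
class LocationList {
public:
    void append(Location* loc) { elems.push_back(loc); }
    std::size_t size() const { return elems.size(); }
    Location* at(std::size_t i) const { return elems[i]; }
private:
    std::vector<Location*> elems;     // the one-to-many part objects
};

class LocationList_iterator {
public:
    explicit LocationList_iterator(const LocationList* l) : list(l), next(0) {}
    // Yields the next element or nullptr when exhausted, supporting the idiom
    // while ((eachLoc = nextLoc())) { ... }
    Location* operator()() { return next < list->size() ? list->at(next++) : nullptr; }
private:
    const LocationList* list;
    std::size_t next;
};

int main() {
    Location a, b;
    LocationList locations;
    locations.append(&a);
    locations.append(&b);
    LocationList_iterator nextLoc(&locations);
    Location* eachLoc;
    while ((eachLoc = nextLoc()))     // iterate over the repetition class
        eachLoc->g_print();
}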
[Figure 4: A class dictionary graph of schema Trip. It shows the class Trip with departure and arrival parts of class Time and a locations part holding Location objects; each Location has an Ident component, and Number and value also appear in the graph.]
Consider an operational requirement of printing trip itineraries in a travel agency. Given a Trip object,
we need to print the departure time and the list of cities to be visited, followed by the arrival time.
This application can be described by using the propagation pattern below (see Figure 5).
The following lists the key information (hooks) of this propagation pattern:
{Trip, departure, arrival, Ident}.
OPERATION void print-itinerary()
FROM Trip
TO Ident
WRAPPER Trip
PREFIX (@ departure -> g-print() @)
SUFFIX (@ arrival -> g-print() @)
WRAPPER Ident
PREFIX (@ this -> g-print() @);

Figure 5: An example propagation pattern print-itinerary.
The wrapper order of MA(Trip) is as follows: (Trip, fg_PRE^Trip) is executed before (Trip, fg_SUF^Trip).
In this example, the prefix annotation is used to print the departure time before the Trip object
is traversed; the suffix annotation prints the arrival time after the Trip object has been traversed.
The primary annotation replaces the default traversal code when printing the current Ident object.
When the above propagation pattern is injected into the class structure of Figure 4 at propagation
time, program fragments will automatically be generated according to the given method annotations.
(See the C++ codes attached to the classes in Figure 6, which we obtained by running the Demeter
System/C++ on the example.) The C++ method definitions attached to each class in Figure 6 are
generated according to the propagation pattern given in Figure 5. The completeness of these C++
methods fully depends on the specification details of propagation patterns.
By using propagation patterns, any unnecessary information about the class structure need not be
hardwired into the specification. This allows the specification of a propagation pattern to be more
flexible towards schema modification. For example, suppose the schema shown in Figure 4 is extended
by adding the class DayTrip such that a Trip object now contains a list of DayTrip objects and each
DayTrip object contains a list of Location objects which are printable through Ident objects (see the
schema presented in Figure 7). Although the propagation pattern in Figure 5 is defined over the schema
in Figure 4, the modification on the Trip schema requires no reprogramming of this method, because all
the hooks of "print-itinerary" are included in the modified schema of Figure 7. Consequently, this
propagation pattern is also compatible with the schema of Figure 7. We can even reuse the propagation
pattern defined in Figure 5 for the Trip schema in Figure 7 without changing the specification of the
propagation pattern and the code generated previously based on the propagation pattern.
Clearly, with respect to the two Trip schemas, the propagation pattern "print-itinerary" also presents a
kind of ad-hoc polymorphism. More interestingly, this propagation pattern can actually be compatible
with a family of Trip class structures, as long as the Trip class has a departure and an arrival part,
and a "path" to class Ident. Polymorphism may provide a sound theoretical basis for investigating
the adaptiveness of object-oriented schema design and schema evolution in both the structural and the
dynamic aspect.
[Figure 6 shows the Trip class dictionary graph of Figure 4 with the following generated C++ code attached to the corresponding classes.]

void LocationList::print_itinerary()
{
LocationList_iterator nextLoc( this );
Location *eachLoc;
while( eachLoc = nextLoc() )
eachLoc -> print_itinerary();
}

void Trip::print_itinerary()
{
departure -> g_print();
locations -> print_itinerary();
arrival -> g_print();
}

void Ident::print_itinerary()
{
this -> g_print();
}

void Location::print_itinerary()
{
city -> print_itinerary();
}

Figure 6: The Trip schema with generated C++ code attached to corresponding classes.
3.5 Remarks on Reuse Possibility
Up to now, we have shown by examples that propagation patterns are a promising conceptual programming
technique for modeling and programming the dynamic behavior of object-oriented database
systems, because of their adaptiveness to the structural changes of a schema.
The adaptiveness of propagation patterns results from a number of interesting features. First of all, the
specification of propagation patterns does not require hard-wiring them to a particular class structure.
This leaves room for deriving behavioral abstraction based on structure abstraction and for incremental
design of methods (e.g., propagation patterns). Secondly, propagation patterns are defined in terms of
only a few, essential classes and relationship specifications. They serve as hooks into the class structure
[11]. The rest of the knowledge required for behavior implementation can actually be derived on the basis
of these hooks and the corresponding class schema. Last but not least, propagation patterns promote
the well-known concept of late binding. Instead of binding methods to classes at program-writing time,
propagation patterns encourage the binding of methods to classes at propagation time, prior to compile
time. Therefore, given a propagation pattern defined over a class structure (say G), any change to the
structure of G (which does not affect the hooks of this propagation pattern) will have little impact on
the specification of this propagation pattern, even though its scope over the modified class schema could
be changed accordingly. In other words, the given propagation pattern specification by itself can be
reused in a modified class schema, if no additional propagation constraints or method annotations are
required. But the binding of the method interface and annotations to the relevant classes may need to
be re-adjusted implicitly at propagation time (through propagation pattern interpretation).
In contrast, when changes are required to the dynamic (behavioral) aspect of a schema and thus to some
existing propagation patterns, it becomes indispensable to redefine the affected propagation patterns or
to further extend some existing propagation patterns (see Example 8 in the next section).

[Figure 7 shows the second Trip schema, in which a Trip contains a list of DayTrip objects and each DayTrip contains a list of Location objects, with the following generated C++ code attached to the corresponding classes.]

void Ident::print_itinerary()
{
this -> g_print();
}

void LocationList::print_itinerary()
{
LocationList_iterator nextLoc( this );
Location *eachLoc;
while( eachLoc = nextLoc() )
eachLoc -> print_itinerary();
}

void DayTripList::print_itinerary()
{
DayTripList_iterator nextDay( this );
DayTrip *eachDay;
while( eachDay = nextDay() )
eachDay -> print_itinerary();
}

void Location::print_itinerary()
{
city -> print_itinerary();
}

void DayTrip::print_itinerary()
{
locations -> print_itinerary();
}

void Trip::print_itinerary()
{
departure -> g_print();
dayTrips -> print_itinerary();
arrival -> g_print();
}

Figure 7: A second Trip schema with generated C++ code.

It is definitely
beneficial if some reuse mechanisms are provided so that the adaptation of existing propagation patterns
to new requirement changes does not have to start from scratch or be rewritten completely, even if
the affected propagation patterns are simple ones. This is because once a propagation pattern is reused, both
the programming code generated from it and the existing binding of methods to classes at
propagation time may inherently be reused as well. Besides, by reusing the specification of propagation
patterns, the information involved is maximally localized, so that any change to the existing specifications
is carried out in only one place. The effort to manually preserve the consistency of the specifications
due to schema modification is then minimized.
In what follows, we introduce a reuse mechanism for propagation patterns, which allows new propagation
patterns to be defined in terms of existing ones by a behavioral refinement mechanism.
4 Behavioral Refinement of Propagation Patterns
The refinement of propagation patterns is a behavioral abstraction mechanism, which allows us to define
more specialized propagation patterns in terms of existing propagation patterns by
(a) restricting propagation behavior to one or more specialized classes as arguments of the method,
(b) imposing extra propagation constraints, or
(c) adding additional method annotations.
4.1 A Motivating Example
Example 8 Consider again the Trip schema in Figure 7 and the propagation pattern for printing trip
itineraries in a travel agency defined in Figure 5. Suppose now we want to modify the Trip schema of
Figure 7 by adding a new property Date to class DayTrip (see Figure 8).
[Figure 8 shows the extended Trip schema, in which a date part of class Date (with day, month and year components) has been added to DayTrip, together with the following generated C++ code attached to the corresponding classes.]

void Ident::print_itinerary()
{
this -> g_print();
}

void LocationList::print_itinerary()
{
LocationList_iterator nextLoc( this );
Location *eachLoc;
while( eachLoc = nextLoc() )
eachLoc -> print_itinerary();
}

void Location::print_itinerary()
{
city -> print_itinerary();
}

void DayTripList::print_itinerary()
{
DayTripList_iterator nextDay( this );
DayTrip *eachDay;
while( eachDay = nextDay() )
eachDay -> print_itinerary();
}

void DayTrip::print_itinerary()
{
date -> g_print();
locations -> print_itinerary();
}

void Trip::print_itinerary()
{
departure -> g_print();
dayTrips -> print_itinerary();
arrival -> g_print();
}

Figure 8: An extended Trip schema with generated C++ code.
We also want to extend the task of printing trip itineraries by adding a new operational requirement that,
for each trip, the date for every travel day must also be printed. Compared with the "old" propagation
pattern "print-itinerary" defined in Figure 5, this extended task (let us call it "print-detailed-itinerary")
obviously includes all the functionalities of the "old" propagation pattern "print-itinerary" (see Figure 5)
and also some additional propagation constraints and method annotations. For instance, the following
annotation needs to be added for printing the date of each travel day within a trip:
WRAPPER DayTrip
PREFIX (@ date -> g-print() @)
OPERATION void print-detailed-itinerary()
FROM Trip
THROUGH (DayTrip,locations,LocationList)
TO Ident
WRAPPER Trip
PREFIX (@ departure -> g-print() @)
SUFFIX (@ arrival -> g-print() @)
WRAPPER DayTrip
PREFIX (@ date -> g-print() @)
WRAPPER Ident
PREFIX (@ this -> g-print() @);

Figure 9: A modified propagation pattern of "print-itinerary".
Additionally, in the schema of Figure 8, there is more than one path from Trip through DayTrip: one through
the edge locations and the other through the edge date. We also need to add the following extra
propagation constraint into the propagation pattern defined in Figure 5:
THROUGH (DayTrip,locations,LocationList)
or
BYPASSING (DayTrip,date,Date).
There are two ways to accomplish this operational requirement change. One way is to redefine (rewrite)
the previous propagation pattern "print-itinerary" completely and then redo the binding (injection) of
methods to classes at propagation time. For example, we may rewrite the propagation pattern "print-
itinerary" completely as shown in Figure 9. Although most of the previous bindings will remain the
same for this redefined propagation pattern (compare the code generated in Figure 7 and Figure 8), we
need to re-bind all methods to classes and regenerate the code for this modified propagation pattern
(see the code attached to the classes in Figure 8).
The other way is to employ some reuse mechanisms so that more specialized propagation patterns can
be defined in terms of existing ones. This means that only the propagation constraint and the method
annotation which are new need to be defined. The rest can be directly reused from (or shared with) the
existing pattern by means of the propagation refinement mechanism. Furthermore, at the propagation
time of the refined propagation pattern, only the new method annotations need to be injected into the
involved classes, since all previous bindings and code generated in terms of the "old" method annotations
may be reused accordingly. For example, compare the propagation pattern given in Figure 9 with the
one defined in Figure 5: only one prefix annotation is new and one extra propagation constraint is
added.
Moreover, the addition of date into the task of printing trip itineraries, in fact, only affects the "old"
binding of the method ("print-itinerary") to the class vertex DayTrip and the code generated for this
binding. The rest remains exactly the same. Thus, by using the propagation refinement mechanism
(see the next section), we may specify the desired requirement change as shown in Figure 10.
OPERATION void print-detailed-itinerary()
BEHAVIORAL REFINEMENT OF print-itinerary
ADD CONSTRAINT
THROUGH (DayTrip,locations,LocationList)
ADD ANNOTATION
WRAPPER DayTrip
PREFIX (@ date -> g-print() @).

Figure 10: A refined propagation pattern of "print-itinerary".
4.2 Definition
We now present a formal definition of propagation pattern refinement, based on the concept of signature
refinement and propagation directive refinement.
Definition 10 (signature refinement)
Let G = (V, E) be a class dictionary graph. Given two method interfaces M1 = (u1, mn1, Lpa1) and
M2 = (u2, mn2, Lpa2), where Lpa1 contains n parameters and Lpa2 contains m parameters (0 ≤ n ≤ m).
We say that the method interface M2 is a signature refinement of the method interface M1,
denoted by M2 ≼sig M1, iff the following conditions are verified: the output type of M2 is either the output
type of M1 or alternation reachable from it, and the first n parameters of M2 agree with the n parameters of M1. □
Note that this definition also identifies that, given two operation interfaces M1 and M2, if M2 is a
signature refinement of M1, then M2 may have more (additional) arguments than M1 (see the condition
that 0 ≤ n ≤ m). This is an important property for behavioral refinement of propagation patterns.
For example, when propagation patterns are refined in groups, it often requires adding extra arguments
in calls.
Definition 11 (propagation directive refinement)
Let G = (V, E) be a class dictionary graph, and let δ1 = (F1, PC1, T1) and δ2 = (F2, PC2, T2) be two given
propagation directives defined over G. Let PC1 = (I1, X1) and PC2 = (I2, X2), and let PS(δ1) and PS(δ2) be the
propagation scopes of directives δ1 and δ2, respectively. Then δ2 is a propagation directive refinement of
δ1, denoted by δ2 ≼pd δ1, iff the following conditions are verified:
1. ∀y ∈ F2, ∃x ∈ F1 such that x ⇒ y;
that is, y is alternation reachable from x.
2. If HK(δ1) ⊆ HK(δ2), then the propagation constraints PC1 are implied by PC2.
It means that whenever the hook of a propagation directive δ1 is included in the hook of a propagation
directive δ2, the propagation constraints of δ1 should be implied by the propagation constraints
of δ2.
3. ∀u ∈ T2, ∃v ∈ T1 such that v ⇒ u;
that is, for any vertex u in the set T2 of target vertices of δ2, there is a vertex v in the set T1 of
target vertices of δ1 such that u is alternation reachable from v.
4. PS(δ2) ⊆ PS(δ1). □
Condition (1) amounts to saying that whenever δ2 is a propagation directive refinement of δ1, then any
given source vertex u in F2 must have a corresponding source vertex (say v) in F1 and u is alternation
reachable from v (v ⇒ u). Condition (3) identifies the similar result over the target sets T1 and T2.
Two cases are involved in condition (2) of Definition 11. We illustrate them in Example 10 and
Example 11, respectively. The following example combines conditions (1), (3) and (4) to infer that
propagation directive refinement holds.
Example 9 Let δ1 and δ2 be two propagation directives, δ1 = "FROM A TO C" and δ2 =
"FROM A' TO C'", so that F1 = {A}, T1 = {C}, F2 = {A'} and T2 = {C'}. If A ⇒ A' and C ⇒ C',
then δ2 ≼pd δ1 holds. It means that δ2 is a propagation directive refinement of δ1.
Definition 12 (propagation pattern refinement)
Let G = (V, E) be a class dictionary graph and let α = (M(α), PD(α), MA(α)) and β = (M(β),
PD(β), MA(β)) be two given propagation patterns defined over G. Let PD(α) = {δ1, ..., δm}
and PD(β) = {ω1, ..., ωk}. We say that propagation pattern β is a behavioral refinement of
propagation pattern α, iff the following conditions are verified:
1. M(β) ≼sig M(α); that is, M(β) is a signature refinement of M(α).
2. ∀δi ∈ PD(α), ∃ωj ∈ PD(β) such that ωj ≼pd δi.
It means that for any δi in PD(α), there is a refined propagation
directive ωj in PD(β).
3. MA(α) ⊆ MA(β). □
This definition states that if β is a behavioral refinement of propagation pattern α, then not only should the
signature refinement condition and the propagation directive refinement condition hold, but also
the inclusion of the wrapper set of α in the wrapper set of β should be verified. If one of the three is
invalid, then β is not a propagation pattern refinement of α. Put differently, these three conditions
work together to guarantee that the scope of a refined propagation pattern can only be made smaller
to the limit that all wrappers of the generic propagation pattern remain applicable.
In addition, condition (3) presents the following wrapper refinement rule: if propagation pattern β
is a behavioral refinement of propagation pattern α, then the prefix wrappers of β may extend the
prefix wrappers of propagation pattern α by adding extra wrappers or by providing additional wrapper
fragments for the existing wrappers. We may introduce the keyword "ADD ANNOTATION" or the keyword
"ADD FRAGMENT" within a WRAPPER clause to serve this purpose. Also, Definition 11
has the property that if propagation pattern β is a behavioral refinement of propagation pattern α, then
all the code fragments defined in pattern α are executed in the same order as they are in an execution
with propagation pattern β.
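As an added illustration of how some of these conditions could be checked mechanically, the following C++ sketch reduces a pattern to its source class, target class and the set of classes carrying wrappers, and tests conditions (1) and (3) of Definition 11 together with condition (3) of Definition 12 on the print-document/print-article pair of Example 11. All names are illustrative; the signature check of Definition 10 is omitted and the alternation hierarchy is hard-wired.

#include <iostream>
#include <set>
#include <string>

struct SimplePattern {
    std::string source, target;
    std::set<std::string> wrappers;   // class names carrying method annotations
};

// Stand-in for alternation reachability on a fixed toy hierarchy.
bool altReachable(const std::string& general, const std::string& special) {
    return general == special ||
           (general == "Document" && special == "Article");
}

bool isRefinement(const SimplePattern& alpha, const SimplePattern& beta) {
    bool sources = altReachable(alpha.source, beta.source);   // Def. 11, condition 1
    bool targets = altReachable(alpha.target, beta.target);   // Def. 11, condition 3
    bool wrappersIncluded = true;                              // Def. 12, condition 3
    for (const std::string& w : alpha.wrappers)
        wrappersIncluded = wrappersIncluded && beta.wrappers.count(w);
    return sources && targets && wrappersIncluded;
}

int main() {
    SimplePattern printDocument{"Document", "Page", {"Document", "Page"}};
    SimplePattern printArticle {"Article",  "Page", {"Document", "Page"}};
    std::cout << (isRefinement(printDocument, printArticle) ? "refines" : "does not refine") << "\n";
}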
Example 10 Consider the motivating example given in the previous section and the schema in Figure 8.
Let α denote the propagation pattern defined in Figure 5 and δ be the propagation directive of
α. Let β be the modified propagation pattern defined in Figure 10 and ω be the propagation directive
of β. We have:
The wrapper order of α: (Trip, fg_PRE^Trip), (Ident, fg_PRE^Ident), (Trip, fg_SUF^Trip).
The wrapper order of β: (Trip, fg_PRE^Trip), (DayTrip, fg_PRE^DayTrip), (Ident, fg_PRE^Ident), (Trip, fg_SUF^Trip).
Therefore, by Definition 12, we conclude that propagation pattern β is correctly defined as a behavioral
refinement of propagation pattern α.
The above example also shows that all the wrappers of "print-itinerary" are activated in the same order
in the refined propagation pattern "print-detailed-itinerary". Put differently, the behavioral refinement
mechanism guarantees that the wrapper order of any refined propagation pattern should preserve (or
imply) the wrapper order of the (given) pre-defined propagation pattern. This important property
of propagation pattern refinement is stated formally in the following proposition. Readers who are
interested in the formal proof of the proposition may refer to our technical report [?]. The techniques
developed in [18, 8] can also be used for constructive proof of this proposition.
Proposition 2 (wrapper order property of propagation pattern refinement) Let G
be a class dictionary graph and ff = (M(ff); PD(ff); MA(ff)), fi = (M(fi); PD(fi); MA(fi)) be two
given propagation patterns compatible with G. If propagation pattern fi is a behavioral refinement of
propagation pattern ff, then wrappers of pattern ff are executed in the same order over MA(fi) as over
MA(ff).
Note that the propagation pattern refinement mechanism not only increases the flexibility and adaptiveness
of propagation patterns against future operational requirement changes, but it can also be useful
for promoting the concept of propagation pattern inheritance under a class dictionary graph, especially
when there is a need of applying an existing method (function) to a subset of its current domain or
codomain instead.
Example 11 Recall the propagation pattern "print-document" given in Figure 2, page 6. Suppose this
propagation pattern is used as a propagation pattern over the Document schema given in Figure 3. Now
if we want to add a new method that prints only the documents of type Article, i.e., a subset of the
Document objects, then, with support for behavioral refinement of propagation patterns, we could
easily reuse the propagation pattern "print-document" to obtain a more specialized propagation pattern
"print-article" to do the job (see Figure 11.)
OPERATION void print-article()
BEHAVIORAL REFINEMENT OF print-document
FROM Article TO Page.
Figure 11: A refined propagation pattern of Figure 2.
Let ff denote the propagation pattern "print-document" and fi denote "print-article", both defined over
the class schema in Figure 3. Obviously, we have
(i) M(fi) is a signature refinement of M(ff), since both ff and fi return void and have an
empty argument list;
(ii) PD(fi) is a propagation directive refinement of PD(ff), since
Document=) Article, and Document62
Thus, by Definition 12, propagation pattern fi (print-article) is a correct behavioral refinement of
propagation pattern ff (print-document).
Another interesting feature of propagation pattern refinement is its transitivity. Below we
prove that for any propagation patterns ff; fi; fl, if pattern ff is a behavioral refinement of pattern fi and
pattern fi is a behavioral refinement of pattern fl, then pattern ff is also a behavioral refinement of
pattern fl.
Proposition 3 Let predicate PP-refinement(ff; fi) hold if and only if propagation pattern ff is a behavioral
refinement of propagation pattern fi. Then we have: PP-refinement(ff; fi) and PP-refinement(fi; fl) together imply PP-refinement(ff; fl).
Proof: Let ff denote any propagation pattern described by a triple (M(ff); PD(ff); MA(ff)). We need
to prove that (i) M(ff) sig M(fl); (ii) for any j 2 PD(fl), there exists a ffi 2 PD(ff) such that ffi pd j;
and (iii) MA(fl) ' MA(ff).
The proof may proceed as follows.
By PP-refinement(ff; fi), PP-refinement(fi; fl) and Definition 12, both M(ff) sig M(fi) and M(fi) sig M(fl)
hold. To prove M(ff) sig M(fl), note that signature refinement is expressed through the alternation
reachability relation =), which is transitive by Definition 2(iii). Hence M(ff) sig M(fi) and
M(fi) sig M(fl) together imply that M(ff) sig M(fl) holds.
The proof for (ii) goes as follows: by PP-refinement(fi; fl) and Definition 12, we get 8j 2 PD(fl), 9! 2
PD(fi) s.t. ! pd j. Quite similarly, from PP-refinement(ff; fi), we get 8! 2 PD(fi), 9ffi 2 PD(ff) s.t.
ffi pd !. According to Definition 10, we could then easily prove that 8j 2 PD(fl), 9ffi 2 PD(ff) s.t. ffi pd j
holds.
The proof of (iii) follows directly from the transitivity of set inclusion, since according to the given
predicates and Definition 12, we have MA(fi) ' MA(ff) and MA(fl) ' MA(fi). 2
The transitivity of propagation pattern refinement provides a sound basis for incremental design of
propagation patterns.
So far, we have shown that propagation pattern refinement is an important behavioral abstraction mechanism
for reuse of method definitions and query specifications. It encourages information localization
and offers better flexibility and adaptiveness towards schema modifications, especially towards future
operational requirement changes. By means of the behavioral refinement mechanism, three levels of
reusability can be obtained for managing the operational schema changes:
(i) The specification of propagation patterns can easily be reused. (See Figure 10 and Figure 11.)
(ii) The binding of method annotations to classes at propagation time can largely be reused.
(Compare the binding shown in Figure 7 and Figure 8.)
(iii) The generated code (e.g., for C++) can possibly be reused as well. (Compare the code
generated in Figure 7 and Figure 8.)
We believe that the propagation pattern refinement mechanism will increase the potential benefits of
using propagation patterns as a higher level database programming technique.
5 Related Research
Schema evolution is commonly recognized as a required facility in a persistent object-oriented system.
Generally speaking, the schema describes the interface between the set of application programs and the
persistent repository of objects. When the schema changes, so does the interface, which can introduce
incompatibilities on both sides.
Up to now, there have been two main research directions for achieving seamless schema evolution in
an object-oriented database system. One is to effectively integrate schema modifications and the propagation
of schema changes into the object instances (instance adaptation) as well as the application
programs (program adaptation). Several research projects have contributed to this issue in one way or
the other, such as Encore [20], GemStone [6], O 2 [23], Orion [1], OTgen [?]. One of the recent contributions
along this direction is the lazy update propagation problem and solutions addressed in [?, ?].
It provides mechanisms to guarantee a correct update propagation in terms of conversion functions in
the presence of schema modification. Substantial effort, however, is still required in practice to
provide propagation mechanisms that make instance adaptation and program adaptation
effective. The other direction is to improve the design of database method and query specification languages
such that software programs written in these advanced languages have higher adaptiveness and
seamlessness with respect to schema evolution. However, surprisingly little attention has been paid in the
database community to this direction, although software reuse has been one of the critical issues
in software engineering for the last decade. As [2] has shown, program adaptation can be exceedingly
hard for typed languages (such as C++) even for simple schema changes. Therefore, it is important
and beneficial to put research effort into avoiding or minimizing program changes in anticipation of
schema updates rather than trying to fix things after changes have occurred. The work reported in this paper
presents our contributions on how to use polymorphic reuse mechanisms to achieve higher adaptiveness
in object-oriented database specifications. We have presented two polymorphic reuse mechanisms: propagation
patterns and propagation pattern refinement and shown the role of these two reuse mechanisms
in avoiding or minimizing the impact of schema evolution on application programs in an object-oriented
database.
This work has been mostly encouraged by the Demeter system ([11], [12], [19]), the contract model [7],
and the activity model ([?], [17]). In comparison with the contract model, both propagation patterns
and contracts encourage a separation of object behavior specification from object structure specification
and both present interesting techniques for operational specification. But there are also a number of
differences. First of all, propagation patterns provide better adaptiveness towards schema evolution and
change management, because by means of propagation patterns and the propagation pattern refinement
mechanism, the reprogramming of methods and queries due to schema modifications can be avoided or
minimized. Second, propagation patterns concentrate more on the specification of and reasoning about
operation propagations among a group of related classes, whereas contracts place more emphasis on the obligation
specification of each participant class in accomplishing a task defined by a group of cooperating
classes. Third, the conformance of contracts with classes is required explicitly in the contract model,
whereas the conformance of propagation patterns with classes is derived implicitly at propagation time.
Comparing the reuse mechanism of propagation patterns with the behavioral abstraction mechanisms
defined in the activity model [17], it is interesting to note that although there is a similarity between the
concept of propagation pattern refinement and the concept of activity specialization, the emphasis and
functionality of the activity model are on the declarative specification of, and reasoning about, the
communication behavior of objects. The current activity model gives no consideration to specifying and
reasoning about operation propagation among cooperating classes.
An efficient implementation of propagation patterns has been described in [18]. The paper shows how
to generate an efficient object-oriented program, say in C++, for a given propagation pattern and a
compatible class dictionary graph. A proof of the correctness of the core of the translation is given.
The work in [8] presented a formal framework for maintaining behavior and consistency of object-oriented
systems during software evolution. The framework effectively couples the change avoidance
approach of propagation patterns with a change management mechanism to fully automate evolution.
Class structure transformations may render existing objects and programs inconsistent. The paper
identifies the introduced inconsistencies and provides the necessary object and program transformations
to reinstate consistency while maintaining the behavior of the system. A formal definition of behavioral
equivalence is given. To prove behavioral equivalency of propagation patterns, the paper defines a formal
semantics for propagation patterns and describes a proof system for the semantics. The semantics
formally defines the order of wrapper execution for prefix and suffix wrappers. The feasibility of the
evolution framework is demonstrated for a representative set of primitive class structure transformations,
mainly based on the extension relations identified in [9] and [14]. Such extension relations are useful
means for quality control of schema transformations. Quite differently, the work presented in this
paper focuses on how to reuse the existing design and specification under the schema modifications and
requirement changes, and how the existing propagation patterns can be reused or extended incrementally
to cover the new requirements, especially when both structural and operational changes are required.
Another interesting project based on graphs and class hierarchies is the OQL proposal [?]. An OQL query
is specified as a subgraph of the schema graph. The subgraph contains the traversals of object classes
with AND and OR branches and association operators. Compared with the calculus-based language
OQL, our approach places more emphasis on adaptive design and specification of databases to facilitate
program adaptation and change propagation in anticipation of schema changes.
6 Concluding Remarks
We have shown the viability of our approach to the incremental design and reuse of object-oriented
database specifications. We argue for raising the level of abstraction for specification of object methods
and database queries and show that this helps to avoid or minimize the reprogramming of methods
and queries due to schema modifications. The salient features of this approach are the use of propagation
patterns and propagation pattern refinement. The main benefits of using our polymorphic reuse
mechanisms in object-oriented database specifications are the following.
ffl The concept of propagation patterns presents a promising technique for enhancing the robustness
of methods and query programs with respect to schema modifications. Using propagation patterns
provides method designers and query writers with an opportunity to specify operations without
knowledge of detailed navigational information. Compared with most existing object-oriented
languages, the effort required for manually reprogramming methods and queries due to schema
modifications is largely avoided or minimized.
ffl The concept of propagation pattern refinement is an important mechanism for the abstraction
and reuse of propagation patterns. It promotes incremental design of methods and is especially
useful for dealing with a class of operational requirement changes. To our knowledge, none of
the existing object-oriented specification languages provides a similar support for the incremental
definition of methods.
ffl We have studied the formal semantics of both propagation patterns and propagation pattern
refinement. This formal basis provides a sound framework for the implementation and the further
development of the ideas presented here.
As shown by the examples in the previous sections, propagation patterns are currently well-supported in
a CASE tool, the Demeter System/C++. Therefore, they can easily be adopted by any C++-
based object-oriented database system through the Demeter C++ tool. In order to give a road map
of possible implementation considerations of propagation patterns and propagation pattern refinement,
we would like to add a brief illustration on how the Demeter C++ tools translate propagation patterns
into C++ code. The Demeter tools can be divided into three categories: consistency checker, code
generators, run-time library. Both consistency checker and code generators are used before compile-time,
and each can be further divided into a structural and a behavioral part, which apply to class dictionary
graphs and to propagation patterns respectively. In other words, at compile time, the application
under development consists exclusively of C++ code. No run-time constructs are needed to implement
propagation patterns. This yields two advantages: (1) the system has no speed degradation due
to propagation pattern run-time overhead; (2) if desired, the system can be decoupled from the
Demeter system at any time to become a stand-alone application. The structural consistency checker
first checks the class dictionary graph for validity. Then the structural code generator generates C++
class definitions in accordance with the class dictionary graph. The behavioral consistency
checker takes as input a list of propagation patterns and a class dictionary graph, and examines whether
those propagation patterns are syntactically correct and whether they are compatible with the given
class dictionary graph. If so, the behavioral code generator generates the appropriate member function
headers and C++ implementations.
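
As a purely illustrative sketch (not the actual output of the Demeter/C++ tools), the following C++
shows the general shape of traversal code that a behavioral code generator could emit for a propagation
pattern such as "print-document". The class layout, the hand-written traversal, the underscored method
name and the wrapper placement are all assumptions introduced for illustration.

#include <iostream>
#include <utility>
#include <vector>

class Page {
public:
    void print_document() { std::cout << "  page contents\n"; }   // end of the traversal
};

class Article {
    std::vector<Page> pages;
public:
    explicit Article(std::vector<Page> p) : pages(std::move(p)) {}
    void print_document() {
        std::cout << "article prefix wrapper\n";      // prefix wrapper fragment
        for (Page& pg : pages) pg.print_document();   // propagate along the part-of edges
    }
};

int main() {
    Article a({Page{}, Page{}});
    a.print_document();   // the operation propagates FROM Article TO Page
    return 0;
}

Because such code is ordinary compiled C++, no run-time support for propagation patterns is needed,
which is consistent with the absence of run-time overhead noted above.
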
Future work on research and development of propagation patterns and behavioral refinement of propagation
patterns continues. We are interested in further investigation on both theoretical justification and
practical applicability of our approach. For example, it would be interesting to extend the polymorphic
reuse mechanisms discussed in this paper and use them as a candidate for object-oriented view defini-
tions. Object views are an important feature of persistent OODB systems, and it is becoming increasingly
popular to use a view-based approach to deal with interoperability in a distributed and heterogeneous database
environment. We believe that using the polymorphic reuse mechanisms would greatly enhance the adaptiveness
and robustness of a global (and virtual) view schema, and thus of the application programs
developed by users at different sites, against local schema changes. We are also interested
in further exploring issues such as what the critical rules are for achieving a good understanding and
an effective translation of propagation patterns and of their behavioral refinement, and
how polymorphic type theory may further enhance the formal development of propagation patterns
and other kinds of behavioral abstractions of propagation patterns.
Acknowledgement
We would like to thank the subject editor Stanley Su, the editor-in-chief Benjamin Wah, and the
reviewers for their helpful comments and suggestions. Our thanks are also due to Susan Even, Markku
Sakkinen, Ignacio Silva-Lepe and Cun Xiao for the discussion and remarks on an earlier version of this
paper. An extended abstract was published in the proceedings of ICDE'94 [?].
--R
Semantics and implementation of schema evolution in object-oriented data bases
Maintaining Behavioral Consistency during Schema Evolution.
A view mechnism for object-oriented databases
A semantics of multiple inheritance.
On understanding types
Development of an object-oriented dbms
Specifying behavioral compositions in object-oriented systems
Automating Change Management of Object-Oriented Systems
The breakdown of the information model in multi-database systems
The Art of Growing Adaptive Object-Oriented Software
Formulations and Benefits of the Law of Demeter.
Experience with a graph-based propagation pattern programming tool
Adaptive object-oriented programming using graph-based customization
Formal Foundations for Object-Oriented Data Modeling
Activity model: a declarative approach for capturing communication behavior in object-oriented databases
Efficient Implementation of Adaptive Software.
A Report on Demeter/C
Type evolution in an object-oriented data base
Schema modification without database reorganization.
A framework of schema updates in an object-oriented database
A framework of schema updates in an object-oriented database system
--TR
--CTR
Zahir Tari , Xue Li , Ling Liu, Type Safety in the Context of Method Updates, Journal of Intelligent Information Systems, v.13 n.3, p.279-298, Nov.-Dec. 1999
Salvatore T. March , Charles A. Wood , Gove N. Allen, Research Frontiers in Object Technology, Information Systems Frontiers, v.1 n.1, p.51-74, July 1999 | program adaptation;adaptive software specification and development;object-oriented database systems;schema evolution;software reuse;knowledge reasoning |
627818 | Object-Based Semantic Real-Time Concurrency Control with Bounded Imprecision. | AbstractThis paper describes a concurrency control technique for real-time object-oriented databases that supports logical consistency and temporal consistency, as well as bounded imprecision that results from their trade-offs. The concurrency control technique uses a semantic locking mechanism within each object and user-defined conditional compatibility over the methods of the object. The semantics can specify when to sacrifice precise logical consistency to meet temporal consistency requirements. It can also specify accumulation and bounding of any resulting logical imprecision. We show that this technique, under certain general restrictions, can preserve global correctness and bound imprecision by proving it can guarantee a form of epsilon serializability specialized for object-oriented databases. | Introduction
Real-time applications such as air traffic control, autonomous vehicle control, and automated manufacturing
involve large amounts of environmental sensor data. These applications are supported
by real-time database systems (RTDBMS) [1]. In addition to supporting typical logical consistency
requirements, a RTDBMS concurrency control technique must maintain temporal consistency con-
straints. Data temporal consistency constrains how "old" data can be while still being considered
valid. Transaction temporal consistency constrains when transactions can execute and be considered
correct.
Traditional DBMS concurrency control techniques are designed to enforce only logical consistency
constraints, but not temporal consistency constraints on data values and transaction execu-
tion. For instance, a typical serializability-based concurrency control technique might disrupt an
earliest-deadline-first transaction scheduling order by blocking a transaction with a tight deadline
in favor of a transaction with a looser deadline in order to maintain logical consistency by preserving
the serialization order of transactions. (This work has been sponsored by the Naval Undersea Warfare
Center and the National Science Foundation.) Serializability-based techniques can also be a problem
in a RTDBMS because they restrict allowed concurrency, often more than is required for logical
correctness [2]. This over-restriction impedes a real-time transaction scheduler's ability to preserve
transaction temporal consistency because requiring serializability reduces the likelihood of creating
a schedule that meets timing constraints [3]. Data temporal consistency is also ignored by
serializability-based concurrency control techniques. For instance, a serializability technique would
block a transaction t update that updates temporally inconsistent data if another transaction t read
is reading the data. This blocking might cause t read to receive temporally inconsistent data. On
the other hand, relaxing serializability by allowing transaction t update to preempt transaction t read
could violate the logical consistency of t read . As this example indicates, the requirements of meeting
logical and temporal consistency constraints can conflict with each other.
There have been proposals for techniques that relax serializability [2, 4, 5, 6, 7, 8]. Many of
these techniques use semantic knowledge of the system to determine logical correctness, instead
of mandating a serializable schedule. However, these techniques were not intended for RTDBMSs
and thus do not incorporate semantics associated with temporal consistency. A survey of real-time
database concurrency control issues is presented in [9]. Many of these techniques relax serializability,
but still neglect data temporal consistency considerations. The exception is work presented in [10],
which replaces serializability with a correctness criterion called similarity. Similarity is a semantically
defined relation between a pair of data values that indicates that the values are recorded "close
enough" in time to be considered equal. It is used to define a concurrency control technique
that incorporates temporal consistency considerations. This technique, however, does not directly
address both logical consistency and temporal consistency.
We have designed a concurrency control technique [3] called semantic locking that supports
expression and enforcement of the trade-offs between logical and temporal consistency constraints
for real-time object-oriented database management systems. Our technique is designed for soft real-time
data management, which means that it makes an effort to preserve temporal consistency, but
can offer no a priori guarantee of meeting timing constraints. Due to its lack of guarantees, the
technique is not appropriate for hard real-time data management, where timing constraints must
be predictably met. In our semantic locking technique, concurrency control is distributed to the
individual data objects, each of which controls concurrent access to itself based on a semantically-
defined compatibility function for the object's methods [3]. This semantically-defined compatibility
is similar to that described in [8, 12] which use the notion of commutativity to define operation
conflict. The semantics allowed in our technique are richer than those allowed in [8, 12] because
our semantics include, among other things, expression of conditions under which logical consistency
should be relaxed in order to maintain temporal consistency. For instance, in the above example, the
semantics could express that transaction t update be allowed to write the data item that transaction
t read is reading only under the condition that temporal consistency of the data item is threatened
or violated.
If a RTDBMS concurrency control technique sacrifices logical consistency to maintain temporal
consistency, it may introduce a certain amount of logical imprecision into data and/or transactions.
For instance, in the above example, if the concurrency control technique allows transaction t update
to write the data item while transaction t read is reading the data item, then t read might get an
imprecise view of the data. The data item itself might become imprecise if two transactions that
write to it are allowed to execute concurrently. While imprecision in a database is not desirable, it
is often tolerable. For instance, in an air traffic control application, a transaction that queries for
the position of all airplanes within an airspace may read-lock the position data items for several
seconds. During this transaction's execution, it could be desirable to allow updates to the read-locked
data items in order to maintain their temporal consistency. Updates of read-locked data
could introduce imprecision into the querying transaction's view of the positions of the tracked
aircraft. However, the application may specify that it is sufficient for the values of the relative
position data to be within a specified range of exact values. That is, the application may allow
some bounded imprecision in the transaction's return values. However, allowing imprecision to
become unbounded in the database is not acceptable.
In this paper, we describe our semantic locking technique and how it can specify accumulation
and bounding of logical imprecision that results from the trade-off of logical consistency for temporal
consistency. We also derive two general restrictions on the expressed semantics and show that
these restrictions are sufficient for bounding logical imprecision in the system. We formally prove
the sufficiency of the restrictions by demonstrating that our semantic locking concurrency control
technique, under the restrictions, guarantees a form of epsilon serializability (ESR) [13] specialized
for object-oriented databases. ESR is a formal correctness criterion which specifies that a schedule
for transaction execution is correct if the results of the schedule (both data values and transaction
return values) are within specified limits of a serializable schedule. By demonstrating that our
technique can maintain a version of ESR, we show that it can provide logical correctness while
better enforcing temporal consistency.
Section 2 presents our model of a real-time object-oriented database. Section 3 describes the
semantic locking technique. Section 4 first describes the ESR correctness criteria and extends it
to our model of a real-time object-oriented database. The section then presents the two general
restrictions on the expressed semantics and proves that the semantic locking technique, under
these restrictions, meets the object-oriented ESR correctness criteria. Section 5 summarizes and
compares our work to related work.
Our RTDBMS semantic locking concurrency control technique is based upon our model of a real-time
object-oriented database called RTSORAC (Real-Time Semantic Objects, Relationships And
Constraints) [14]. This model extends object-oriented data models by incorporating time into objects
and transactions. This incorporation of time allows for explicit specification of data temporal
consistency and transaction temporal consistency. The RTSORAC model is comprised of a database
manager, a set of object types, a set of relationship types and a set of transactions. The database
manager performs typical database management operations including scheduling of all execution
on the processor, but not necessarily including concurrency control. We assume that the database
manager uses some form of real-time, priority-based, preemptive scheduling of execution on the pro-
cessor. Database object types specify the structure of database objects. Relationships are instances
of relationship types; they specify associations among the database objects and define inter-object
constraints within the database. Transactions are executable entities that access the objects and
relationships in the database. This paper focuses on bounding imprecision in objects and transac-
tions, so in presenting the RTSORAC model, we concentrate on describing the model for object types
and transactions. The model for relationship types is described in more detail in [14].
We illustrate our real-time object-oriented database model using a simplified submarine command
and control system. The application involves contact tracking, contact classification and
response planning tasks that must have fast access to large amounts of sensor data [15]. This sensor
data is considered precise and thus provides a periodic source of precise data to the database.
Since sensor data is only valid for a certain amount of time, the database system must ensure
the temporal consistency of the data so that transactions, such as those for contact tracking and
response planning, get valid data. The data in the system may be accessed by transactions that
have timing constraints, such as those involved with tracking other ships in a combat scenario.
Transactions in this application may also allow for certain amounts of imprecision depending on
the semantics. For instance, a transaction that requests position information involving a friendly
ship may allow more imprecision than a transaction tracking ships in a combat scenario. Figure 1
illustrates an example of a Submarine object type in the database schema.

Figure 1: Example of Submarine Object Type. (Attributes: Speed, Bearing, Position, Size, Signature,
Captain, Torpedoes, Country. Methods: UpdateSpeed, UpdateBearing, IncPosition, GetSpeed, GetCountry.
Constraints include |Speed.time - Bearing.time| < 3 and Speed.time > Now - 5.)
2.1 Object Types
An object type is defined by a tuple (N, A, M, C, CF). The component N is the name of the object type.
The component A is a set of attributes, each of which is characterized by a triple (value, time, ImpAmt).
Here, value is an abstract data type that represents some characteristic value of the object type.
The field a:time defines the age of attribute a. If an attribute a allows any amount of imprecision,
then it must belong to a metric space. A metric space is a set of values on which a distance
function is defined. The distance function has the properties of positivity and symmetry and it
upholds the triangle inequality [13]. The field a:ImpAmt is the same type as a:value. It represents
the amount of imprecision that has been introduced into the value of a. The attributes of the
submarine include Speed, Bearing and Country. While Speed and Bearing may allow a certain
amount of imprecision in their values (they are of the real number metric space), Country is not a
metric space attribute and must therefore remain precise at all times.
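
A minimal C++ sketch of the attribute triple described above, assuming a real-valued metric-space
attribute; the struct and helper names are illustrative and are not part of the RTSORAC definition.

#include <cmath>

struct Attribute {
    double value;    // characteristic value (here a real-number metric space)
    double time;     // timestamp recording the age of the value
    double ImpAmt;   // imprecision currently accumulated in the value
};

// Distance function of the metric space: positive, symmetric, and it
// satisfies the triangle inequality for real numbers.
inline double distance(double x, double y) { return std::fabs(x - y); }

// Example temporal consistency test of the form Speed.time > Now - 5.
inline bool temporallyValid(const Attribute& a, double now, double maxAge) {
    return a.time > now - maxAge;
}
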
An object type's M component is a set of methods that provides the only means for transactions
to access instances of the object type. A method is defined by hArg; Op; Exec; OCi. Arg is a set of
arguments each of which has the same structure as an attribute (value,time,ImpAmt). An input
argument is one whose value is used by the method to update attributes. A return argument is
one whose value is computed by the method and returned to the invoking transaction. We define
the sets InputArgs and ReturnArgs to represent the subsets of Arg that contain the method's
input arguments and the method's return arguments respectively. Op is a sequence of programming
language operations, including reads and writes to attributes, that represents the executable code
of the method. Exec is the worst case execution time of the method computed using techniques
described in [11]. OC is a set of constraints on the execution of the method including absolute
timing constraints on the method as a whole or on a subset of operations within the method [14].
In Figure 1, IncPosition is a method of the Submarine object type which adds the value of its
input argument to Position.value.
The C component of an object type is a set of constraints that defines correct states of an
instance of the object type. A constraint is defined by a pair (Pr, ER). Pr is a predicate which can
include any of the three fields of attributes: value, time, and imprecision. Notice that both logical
and temporal consistency constraints as well as bounds on imprecision can be expressed by these
predicates. For instance, in Figure 1 the predicate Speed.time > Now - 5 expresses a temporal
consistency constraint on the Speed attribute: it should not be more than five seconds old.
A logical constraint on Speed is represented by the predicate Speed.value >= 0. A predicate on
Speed.ImpAmt defines the maximum amount of imprecision that may be allowed in the
value of the Speed attribute. The component ER of a constraint is an enforcement rule, which
is a sequence of programming language statements that is executed when the predicate becomes
false (i.e., when the constraint is violated).
The CF component of an object type is a boolean compatibility function with domain
M × M × SState. The compatibility function uses semantic information about the methods as well as
current system state (SState) to define compatibility between each ordered pair of methods of the
object type. We describe the CF component in detail in Section 3.1.
2.2 Transactions
A transaction is defined by a tuple (MI, L, C, P). MI is a set of method invocation requests where each
request is represented by (M, Arg, temporal). The M component of a method invocation request
is an identifier for the method being invoked. Arg is the set of arguments to the method. Recall
that a method argument can be a return argument or an input argument. A return argument
r 2 Arg specifies a limit on the amount of imprecision allowed in the value returned through r,
denoted import limit r . An input argument i 2 Arg specifies the value, time and imprecision amount to be
passed to the method, as well as the maximum amount of imprecision that may be exported by
the transaction through i, export limit i . Note, the concurrency control technique we describe in
Section 3 does not limit the amount of imprecision that a transaction may export. However, for
generality, the model supports such a limit. The temporal field of a method invocation request
specifies whether a transaction requires that temporally consistent data be returned.
The L component of a transaction is a set of lock requests and releases. Each lock request is
associated with a method invocation request. A transaction may request a lock prior to the request
for the method invocation, perhaps to enforce some transaction logical consistency requirement.
In this case, the lock request is for a future method invocation. The transaction may also request
the lock simultaneously with the method invocation, in which case the lock is requested for a
simultaneous method invocation. This model of a transaction can achieve various forms of two-phase
locking (2PL) [16] by requesting and releasing locks in specific orders. Other more flexible
transaction locking techniques that do not follow 2PL can also be supported.
The component C of a transaction is a set of constraints on the transaction. These constraints
can be expressed on execution, timing, or imprecision [14]. The priority P of a transaction is
used by the database manager to perform real-time transaction scheduling (for a survey of real-time
transaction scheduling see [9]). Each method invocation requested by the transaction is to
be executed at the transaction's priority. Because a transaction is made up of a set of method
invocations, our model assumes that a transaction cannot perform any intermediate computations.
For example, assume that a user of the submarine database wants precise location information
on all submarines in the database. A transaction to perform such a task would request a lock and
a simultaneous invocation of the GetP osition method on each submarine object in the database,
specifying an imprecision import limit of zero for the arguments that return the locations. The
transaction would hold the locks for these methods until all of the invocations are complete.
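
The query transaction just described might be issued through client code along the following lines.
SubmarineHandle, lockAndGetPosition and releaseLock are hypothetical interfaces (stubbed here so
the sketch is self-contained) that stand in for a semantic lock request with a simultaneous GetPosition
invocation and for the matching lock release; they are not part of the RTSORAC prototype.

#include <vector>

struct Position { double x, y, ImpAmt; };

struct SubmarineHandle {
    // Request a semantic lock together with a simultaneous GetPosition
    // invocation; import_limit bounds imprecision in the returned position.
    bool lockAndGetPosition(Position& out, double import_limit) {
        (void)import_limit;               // a real handle would pass this to the object
        out = Position{0.0, 0.0, 0.0};    // stubbed value for illustration
        return true;
    }
    void releaseLock() {}                 // stubbed release
};

std::vector<Position> precisePositions(std::vector<SubmarineHandle>& subs) {
    std::vector<Position> result;
    // Lock and invoke GetPosition on every submarine object, demanding an
    // imprecision import limit of zero (fully precise return values).
    for (SubmarineHandle& s : subs) {
        Position p{};
        if (s.lockAndGetPosition(p, /*import_limit=*/0.0)) result.push_back(p);
    }
    // Locks are held until all invocations are complete, then released.
    for (SubmarineHandle& s : subs) s.releaseLock();
    return result;
}
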
3 The Semantic Locking Technique
This section describes our real-time concurrency control technique for database objects under the
RTSORAC model. The technique uses semantic locks to determine which transactions may invoke
methods on an object. The granting of semantic locks is controlled by each individual object which
uses its compatibility function to define conditional conflict. Our description of the technique concentrates
on concurrency control within individual objects because we are concerned with bounding
imprecision within objects and transactions. We briefly address inter-object concurrency control
at the end of this section.
3.1 Compatibility Function
The compatibility function (CF ) component of an object (Section 2.1) is a run-time function,
defined on every ordered pair of methods of the object. The function has the form
CF(m act , m req ) = <Boolean expression>,
where m act represents a method that has an active lock, and m req represents a method for which
a lock has been requested by a transaction.
The boolean expression may contain predicates involving several characteristics of the object
or of the system in general. The concept of affected set, introduced in [17], is used as a
basis for representing the set of attributes of an object that a method reads/writes. We modify
this notion to statically define for each method m a read affected set (ReadAffected(m)) and a
write affected set (WriteAffected(m)). The compatibility function may refer to the time field
of an attribute as well as the current time (Now) and the time at which an attribute a becomes
temporally invalid (deadline(a)) to express a situation in which logical consistency may be traded-off
to maintain or restore temporal consistency. The current amount of imprecision of an attribute
a (a:ImpAmt) or a method's return argument r (r:ImpAmt) along with the limits on the amount
of imprecision allowed on a (data ffl-spec a [18]) and r (import limit r ) can be used to
determine compatibility that ensures that interleavings do not introduce too much imprecision.
The values of method arguments can be used to determine compatibility between a pair of method
invocations, similar to techniques presented in [7].
Imprecision Accumulation. In addition to specifying compatibility between two locks for
method invocations, the semantic locking technique requires that the compatibility function express
information about the potential imprecision that could be introduced by interleaving method
invocations. There are three potential sources of imprecision that the compatibility function must
express for invocations of methods m 1 and m 2 :
1. Imprecision in the value of an attribute that is in the write affected sets of both m 1 and m 2 .
2. Imprecision in the value of the return arguments of m 1 if m 1 reads attributes written by m 2 .
3. Imprecision in the value of the return arguments of m 2 if m 2 reads attributes written by m 1 .
A:  Compatibility: CF(GetSpeed(S1), UpdateSpeed(S2)) = (Now >= deadline(Speed)) and
      (|Speed.value - S2.value| + S2.ImpAmt <= import limit S1)
    Imprecision Accumulation: Increment S1.ImpAmt by |Speed.value - S2.value| + S2.ImpAmt
B:  Compatibility: CF(UpdateSpeed(S1), UpdateSpeed(S2)) =
      (Speed.ImpAmt <= data ffl-spec Speed - |S1.value - S2.value|)
    Imprecision Accumulation: Increment Speed.ImpAmt by |S1.value - S2.value|
C:  Compatibility: CF(IncPosition(A), GetPosition(P)) = (|A.value| <= import limit P - P.ImpAmt)
    Imprecision Accumulation: Increment P.ImpAmt by |A.value|

Figure 2: Compatibility Function Examples
Compatibility Function Examples. Figure 2 uses the submarine example of Section 2.1 to
demonstrate several ways in which the compatibility function can semantically express conditional
compatibility of method locks. Example A shows how a compatibility function can express a trade-off
of logical consistency for temporal consistency when a lock is currently active for GetSpeed and
a lock on UpdateSpeed is requested. Under serializability, these locks would not be compatible
because GetSpeed's view of the Speed attribute could be corrupted. However, if the timing constraint
on Speed is violated, it is important to allow UpdateSpeed to restore temporal consistency.
Therefore, the two locks can be held concurrently as long as the value that is written to Speed
by UpdateSpeed (S 2
:value) is close enough to the current value of Speed (Speed:value). This
determination is based on the imprecision limit of GetSpeed's return argument S 1
and the amount
of imprecision that UpdateSpeed will write to Speed through S 2 (S 2 .ImpAmt). Also shown is
the potential accumulation of imprecision that could result from the interleaving. In this case,
GetSpeed's return argument S 1
would have a potential increase in imprecision equal to the difference
between the value of Speed before the update takes place (Speed:value) and the value of
Speed after the write takes place (S 2 :value), plus the amount of imprecision that is written to
Speed by UpdateSpeed (S 2 :ImpAmt).
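
A sketch of how Example A's conditional compatibility and imprecision accumulation could be coded
in C++. The predicate below paraphrases the discussion above rather than reproducing Figure 2
verbatim, and the struct and function names (ArgVal, SpeedAttr, cfGetSpeedUpdateSpeed) are
illustrative assumptions.

#include <cmath>

struct ArgVal    { double value, time, ImpAmt, import_limit; };
struct SpeedAttr { double value, time, ImpAmt; };

// CF(GetSpeed(S1), UpdateSpeed(S2)): allow the concurrent update only when the
// Speed attribute has reached (or passed) its temporal deadline and the value
// being written keeps the reader's return argument within its import limit.
bool cfGetSpeedUpdateSpeed(const ArgVal& S1, const ArgVal& S2,
                           const SpeedAttr& Speed,
                           double now, double speedDeadline) {
    double added = std::fabs(Speed.value - S2.value) + S2.ImpAmt;
    return (now >= speedDeadline) && (S1.ImpAmt + added <= S1.import_limit);
}

// Imprecision accumulation performed when the interleaving is allowed.
void accumulateGetSpeedUpdateSpeed(ArgVal& S1, const ArgVal& S2,
                                   const SpeedAttr& Speed) {
    S1.ImpAmt += std::fabs(Speed.value - S2.value) + S2.ImpAmt;
}
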
Example B in Figure 2 illustrates how an attribute can become imprecise. Two invocations of
UpdateSpeed may occur concurrently if a sensor writes one value and a human user also updates
the Speed. Two locks on UpdateSpeed may be held concurrently as long as the difference between
the values written by the associated invocations does not exceed the allowed amount of imprecision
for the Speed attribute. In this case, the object's Speed attribute would have a potential increase
in imprecision equal to the value of |S 1 .value - S 2 .value| if this interleaving were allowed.
Semantic Lock Request for m req                                        Step
  granted := TRUE                     /* initialization */
  for every ((m act in ActiveLocks) OR                                 LA 1
             ((m act in priority queue) AND
              (m act .priority > m req .priority)))
      if CF (m act , m req ) then
          save ImpAmts for return args of m act
          Increment imprecision                                        LA 2
      else
          granted := FALSE
      endif
  end for
  if not granted then
      Place m req in priority queue                                    LB
  else
      Add m req to ActiveLocks                                         LC
  endif

Figure 3: Mechanism for Semantic Lock Request
Example C of Figure 2 represents the compatibility function for a method that is more complex
than the other examples. The method IncPosition reads the Position attribute, increments it by
the value of input argument A and then writes the result back to the Position attribute. A lock for
an invocation of this method may be held concurrently with a lock for an invocation of GetPosition
only if the amount by which IncPosition increments the Position is within the imprecision bounds
of the return argument P of GetPosition. In this case, GetPosition's return argument P would
have a potential increase in imprecision equal to the value of IncPosition's argument A if this
interleaving were allowed.
3.2 Semantic Locking Mechanism
The semantic locking mechanism must handle three actions by a transaction: a semantic lock
request, a method invocation request and a semantic lock release. As described in Section 2.2, a
semantic lock may be requested for a future method invocation request or for a simultaneous method
invocation request. Future method invocation requests can be useful if a transaction requires that
all locks be granted before any execution occurs, as with strict two-phase locking. Figures 3 and
4 show the procedures that the semantic locking mechanism executes when receiving a semantic
lock request and a method invocation request respectively. A priority queue is maintained to hold
requests that are not immediately granted.
Method Invocation Request for m req                                    Step
  InitialImprecision(m req )                                           A
  if any Precondition fails then                                       B
      Place m req in priority queue                                    L
  else
      for every a in WriteAffected(m req )                             C
          save original a.ImpAmt
      end for
      for every r in ReturnArgs(m req )
          save original r.ImpAmt
      end for
      if already locked then                                           D
          Execute m req                                                I
          Semantic Lock Update                                         J
          Check the queue                                              K
      else
          Semantic Lock Request                                        E
          if lock granted then                                         F
              Execute m req                                            H
          else                                                         G
              for every a in WriteAffected(m req )
                  restore original a.ImpAmt
              for every r in ReturnArgs(m req )
                  restore original r.ImpAmt
              for every saved return argument r
                      of an active method invocation
                  restore original r.ImpAmt
          endif
      endif
  endif

Figure 4: Mechanism for Method Invocation Request
3.2.1 Semantic Lock Request
When an object receives a semantic lock request for method invocation m req , the semantic locking
mechanism evaluates the compatibility function to ensure that m req is compatible with all currently
active locks and with all queued lock requests for method invocations that have higher priority
than m req (Figure 3, Step LA 1
). For each compatibility function test that succeeds, the mechanism
accumulates the imprecision that could be introduced by the corresponding interleaving (Step LA 2 ).
Recall that the boolean expression in the compatibility function can include tests involving value,
time and imprecision information of the method arguments involved. A semantic lock request for
a future method invocation does not have values for arguments at the time of the request. Thus,
when evaluating the compatibility function for CF (m act ; m req ), if either m act or m req is a future
method invocation, then any clause of the compatibility function that involves method arguments
must evaluate to FALSE.
If all compatibility function tests succeed, the semantic locking mechanism grants the semantic
lock and places it in the active lock set (Step LC). If any test fails, the mechanism places the request
in the priority queue to be retried when another lock is released (Step LB).
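
A condensed C++ sketch of this lock-request procedure. LockReq, ObjectCC and the two std::function
members are simplified stand-ins for the object's internal state, and the restoration of saved
imprecision amounts when a lock is ultimately denied is left to the method-invocation path
(Figure 4, Step G).

#include <functional>
#include <vector>

struct LockReq { int methodId; int priority; };

struct ObjectCC {
    std::vector<LockReq> activeLocks;
    std::vector<LockReq> priorityQueue;   // pending requests (priority handling simplified)
    std::function<bool(const LockReq&, const LockReq&)> CF;          // compatibility test
    std::function<void(const LockReq&, const LockReq&)> accumulate;  // imprecision bookkeeping

    // Returns true if the lock is granted (added to the active set), false if
    // the request is queued for a retry when another lock is released.
    bool requestLock(const LockReq& req) {
        bool granted = true;
        for (const LockReq& act : activeLocks) {             // step LA1: active locks
            if (CF(act, req)) accumulate(act, req);          // step LA2: add imprecision
            else granted = false;
        }
        for (const LockReq& q : priorityQueue) {             // queued higher-priority requests
            if (q.priority > req.priority) {
                if (CF(q, req)) accumulate(q, req);
                else granted = false;
            }
        }
        if (granted) activeLocks.push_back(req);             // step LC
        else priorityQueue.push_back(req);                   // step LB
        return granted;
    }
};
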
3.2.2 Method Invocation Request
When an object receives a method invocation request, the semantic locking mechanism evaluates a
set of preconditions and either requests a semantic lock for the invocation if necessary or updates
the existing semantic lock with specific argument amounts. After the preconditions are successfully
evaluated and locks are granted or updated, the semantic locking mechanism allows the method
invocation to execute. The mechanism also accumulates the imprecision that could result if the
requested method were to execute. In the following paragraphs we describe the steps in Figure 4
of the semantic locking mechanism for a method invocation request m req .
Initial Imprecision Calculation. Given method invocation request m req , the semantic locking
mechanism first computes the potential amount of imprecision that m req will introduce into the
attributes that it writes and into its return arguments. This computation takes into account
the imprecision in the attributes read by the methods and in the input arguments as well as
any computations that are done by the method on these values (Figure 4, Step A). An initial
imprecision procedure computes the amount of imprecision that m req will write to each attribute a
in the write affected set of m req (m req :ExportImp(a)). The procedure also computes the amount
of imprecision that m req will return through each of its return arguments r (m req :ImportImp(r)).
The procedure computes these values by using the amount of imprecision already in the attribute or
return argument and calculating how the method may update this imprecision through operations
that it performs. This initial imprecision procedure may be created by the object designer or by a
compile-time tool that examines the structure of m req to determine how the method will affect the
imprecision of attributes in its write affected set and of its return arguments.
Preconditions Test. The next phase of the semantic locking mechanism for method invocation
request m req tests preconditions that determine if executing m req would violate temporal consistency
or imprecision constraints (Step B). The mechanism evaluates the following preconditions
when m req has been requested:
Preconditions:
(a) if m req .temporal then, for every a in ReadAffected(m req ), deadline(a) > Now + m req .Exec;
(b) for every a in WriteAffected(m req ), m req .ExportImp(a) <= data ffl-spec a ;
(c) for every r in ReturnArgs(m req ), m req .ImportImp(r) <= import limit r .
Precondition (a) ensures that if a transaction requires temporally valid data, then an invoked
method will not execute if any of the data that it reads will become temporally invalid during
its execution time. Precondition (b) ensures that executing the method invocation will not allow
too much initial imprecision to be introduced into attributes that the method invocation writes.
Precondition (c) ensures that the method invocation executes only if it does not introduce too much
initial imprecision into its return arguments.
If any precondition fails, then the semantic locking mechanism places the request on the priority
queue (Step L) to be retried when another lock is released. If the preconditions hold, the semantic
locking mechanism updates the imprecision amounts for every attribute a in the write affected set
of m req with the value m req :ExportImp(a). Similarly, it updates the imprecision amounts for every
return argument r of m req with the value m req :ImportImp(r) (Step C). The mechanism saves the
original values for the imprecision amounts of the attributes and return arguments involved so that
they can be restored if the lock is not granted.
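
The precondition tests (a)-(c) might look as follows in C++. The structures are simplified, and the
exportImp/importImp members are left as trivial placeholders for the initial imprecision procedure;
the exact form of the temporal test in (a) is an assumption based on the description above.

#include <vector>

struct AttrState { double ImpAmt, deadline, epsilonSpec; };   // deadline(a), data ffl-spec_a
struct RetArg    { double ImpAmt, importLimit; };

struct MethodReq {
    bool temporal;                       // transaction requires temporally valid data
    double exec;                         // worst case execution time (Exec)
    std::vector<AttrState*> readSet;     // ReadAffected(m_req)
    std::vector<AttrState*> writeSet;    // WriteAffected(m_req)
    std::vector<RetArg*> returns;        // ReturnArgs(m_req)
    // Initial imprecision the invocation would write/return; placeholders here.
    double exportImp(const AttrState&) const { return 0.0; }
    double importImp(const RetArg&) const { return 0.0; }
};

bool preconditionsHold(const MethodReq& m, double now) {
    if (m.temporal)                                        // (a) read data stays temporally valid
        for (const AttrState* a : m.readSet)
            if (a->deadline <= now + m.exec) return false;
    for (const AttrState* a : m.writeSet)                  // (b) written imprecision bounded
        if (m.exportImp(*a) > a->epsilonSpec) return false;
    for (const RetArg* r : m.returns)                      // (c) returned imprecision bounded
        if (m.importImp(*r) > r->importLimit) return false;
    return true;
}
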
Because the preconditions can block a transaction if the data that it accesses is too imprecise for
its requirements, there must be some way of restoring precision to data so that transactions are not
blocked indefinitely. Certain transactions that write precise data are characterized as independent
updates [18]. Such a transaction, which may come from a sensor or from user intervention, restores
precision to the data that it writes and allows transactions that are blocked by the imprecision of
the data to be executed.
Associated Semantic Lock. The semantic locking mechanism next determines whether or not
m req is already locked by a semantic lock requested earlier (Step D). If not, a semantic lock is
requested (Step E) as described in Section 3.2.1. If the lock is granted, the semantic locking
mechanism allows the method invocation to execute (Step H). Otherwise, the mechanism restores
the original values of any imprecision amounts that were changed (Step G).
If the semantic lock associated with m req was granted earlier, the semantic locking mechanism
allows m req to be executed (Step I). The mechanism then performs a semantic lock update (Step
J). This procedure entails updating the existing semantic lock associated with m req with specific
argument information that was not available when the lock was granted. Updating existing locks
potentially increases concurrency among methods because with values of arguments, the compatibility
function is more likely to evaluate to TRUE. After the semantic lock is updated, the lock
requests waiting on the priority queue are checked for compatibility with the newly updated lock
(Step K).
3.2.3 Releasing Locks.
A semantic lock is released explicitly by the holding transaction. Whenever a semantic lock is
released, it is removed from the active locks set and the priority queue is checked for any requests
that may be granted. Since the newly-released semantic lock may have been associated with a
method invocation that restored logical or temporal consistency to an attribute, or the lock may
have caused some incompatibilities, some of the queued lock requests may now be granted. Also,
method invocation requests that are queued may now pass preconditions if temporal consistency
or precision has been restored to the data. The requests in the queue are re-issued in priority order
and if any of these requests is granted, it is removed from the queue.
3.3 Inter-Object Concurrency Control
The semantic locking mechanism described in this paper maintains consistency for individual objects
and transactions. In addition, transactions in the current technique can obtain multiple locks
and therefore can enforce inter-object consistency themselves. This enforcement is similar to techniques
used in traditional database systems - it requires that transactions are written to maintain
inter-object consistency.
Extending semantic locking to provide system enforcement of inter-object consistency is possi-
ble, but outside the scope of this paper. We outline the approach here. As mentioned in Section
2, inter-object constraints are expressed in RTSORAC relationships. An inter-object constraint
is defined on the methods of the objects participating in the relationship and is enforced by the
enforcement rule of the constraint. An enforcement rule of an inter-object constraint may invoke
methods of the participating objects. Thus, to automatically support an inter-object constraint,
the semantic locking technique should propagate semantic lock requests through relationships to
ensure that the enforcement rule that maintains the inter-object constraint can execute. For in-
stance, assume that a semantic lock is requested on a method m 1 of an object o 1 that participates
in relationship r. Relationship r has an inter-object constraint c between o 1 and an object o 2 . The
enforcement rule of constraint c requires that a method m 2 of o 2 be executed under some conditions
of m 1 's execution. So, upon a request for a semantic lock on m 1 , the semantic locking mechanism
should also propagate a semantic lock request for m 2 to o 2 . All propagated locks should be granted
before the original lock request is granted. Propagated semantic locks would be released when the
original lock is released.
This paper concentrates on semantic locking and imprecision management for individual objects,
which is a significant problem. We are working on extending the semantic locking technique to
automatically support inter-object constraints as outlined here, but a further description is outside
the scope of this paper.
3.4 Implementation
We have implemented the RTSORAC model in a prototype system that extends the Open Object
Oriented Database System (Open OODB) [19] to support real-time requirements. These real-time
extensions execute on a Sun Sparc Classic workstation under the Solaris 2.4 operating system.
RTSORAC objects are implemented in main memory using Solaris' shared memory capability.
Transactions can access objects in the shared memory segment as if the objects were in their own
address space. This design provides fast, predictable access to data objects. Before accessing
objects, transactions execute the semantic locking mechanism to provide concurrency control. Performance
measurements on the prototype system indicate that requesting a semantic lock requires
approximately there are no other locks on the object. This time increases linearly for each
active lock and each pending request. The implementation is described fully in [20].
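
For illustration, a minimal sketch of the System V shared-memory access pattern such a prototype
relies on. The key, the SharedObject layout and the field names are assumptions and do not reflect
the actual Open OODB extension; only the shmget/shmat/shmdt calls are standard.

#include <sys/ipc.h>
#include <sys/shm.h>
#include <cstdio>

struct SharedObject {                 // simplified image of one RTSORAC object
    double speedValue, speedTime, speedImpAmt;
};

int main() {
    key_t key = 0x52545352;           // arbitrary illustrative key
    int id = shmget(key, sizeof(SharedObject), IPC_CREAT | 0600);
    if (id < 0) { std::perror("shmget"); return 1; }
    void* mem = shmat(id, nullptr, 0);
    if (mem == reinterpret_cast<void*>(-1)) { std::perror("shmat"); return 1; }
    // Transactions access the object as if it were in their own address space.
    SharedObject* obj = static_cast<SharedObject*>(mem);
    obj->speedValue = 12.5;
    shmdt(mem);
    return 0;
}
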
4 Bounding Imprecision
In this section we show how our semantic locking technique can bound imprecision in the objects
and transactions of the database. To do this, we prove that the semantic locking technique, under
two general restrictions on the design of each object's compatibility function, ensures that the
epsilon-serializability (ESR) [13] correctness criteria, defined for object-oriented databases, is met.
First, we summarize the definition of ESR from [13, 18] and then extend its definition to object-oriented
databases. Second, we present the two general restrictions on the compatibility function.
Third, we formally prove the sufficiency of these restrictions for ensuring that our semantic locking
technique maintains object-oriented ESR. Finally, we describe an example of how the restricted
semantic locking technique bounds imprecision in the submarine tracking example.
4.1 Epsilon Serializability
Epsilon serializability (ESR) is a correctness criterion that generalizes serializability by allowing
bounded imprecision in transaction processing. ESR assumes that serializable schedules of transactions
using precise data always result in precise data in the database and in precise return values
from transactions. A value resulting from a schedule H is imprecise if it differs from the corresponding
value resulting from each possible serializable schedule of the transactions in H . In order
to accumulate and limit imprecision, ESR assumes use of only data items that belong to a metric
space (defined in Section 2) [13].
A transaction specifies limits on the amount of imprecision that it can import and export with
respect to a particular data item. Import limit t;x is defined as the maximum amount of imprecision
that transaction t can import with respect to data item x, and export limit t;x is defined as the
limit on the amount of imprecision exported by transaction t to data item x [13]. For every data
item x in the database, a data ε-specification (data ε-spec_x) expresses a limit on the amount of
imprecision that can be written to x [18].
The amount of imprecision imported and exported by each transaction, as well as the imprecision
written to the data items, must be accumulated during the transaction's execution.
Import imprecision t;x represents the amount of imprecision imported by transaction t with respect
to data item x. Similarly, export imprecision t;x represents the amount of imprecision exported by
transaction t with respect to data item x. Data imprecision x defines the amount of imprecision
written to the data item x.
ESR defines Safety as a set of conditions that specifies boundaries for the amount of imprecision
permitted in transactions and data. Safety is divided into two parts: transaction safety and
data safety. Safety for transaction t with respect to data item x is defined in [13] as follows: 1
TR-Safety_{t,x} ≡ (import_imprecision_{t,x} ≤ import_limit_{t,x}) ∧ (export_imprecision_{t,x} ≤ export_limit_{t,x})
Data safety is described informally in [18]. We formalize the definition of data safety for data item x:
Data-Safety_x ≡ data_imprecision_x ≤ data ε-spec_x
The original definition of ESR [13, 18] can now be stated as: ESR is guaranteed if and only if all
transactions and data items are safe. Or, more formally as:
ESR is guaranteed if and only if TR-Safety_{t,x} and Data-Safety_x are invariant for every transaction t and every data item x.
It is this definition that we adapt for the object-oriented data model and use to show that our
semantic locking technique maintains bounded imprecision.
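Stated operationally, these safety conditions are simply comparisons between accumulated imprecision and the corresponding limits. The following minimal Python sketch is illustrative only; the function and variable names are assumptions and are not part of the RTSORAC prototype.

# Hypothetical sketch of the ESR safety predicates; names are illustrative only.

def tr_safe(import_imprecision, import_limit, export_imprecision, export_limit):
    # TR-Safety: imported and exported imprecision stay within their limits.
    return (import_imprecision <= import_limit and
            export_imprecision <= export_limit)

def data_safe(data_imprecision, data_eps_spec):
    # Data-Safety: imprecision written to the data item stays within its eps-spec.
    return data_imprecision <= data_eps_spec

# Example: a transaction that imported 0.4 units against a limit of 1.0 and a
# data item holding 0.9 units against an eps-spec of 1.0 are both safe.
assert tr_safe(0.4, 1.0, 0.0, 1.0)
assert data_safe(0.9, 1.0)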
4.2 Object-Oriented ESR
The above definitions of data and transaction safety were general; we now define safety more
specifically for the RTSORAC real-time object-oriented data model. Although this model allows
arbitrary attributes and return arguments, we assume in the following definitions and theorem that
each attribute value is an element of some metric space.
Data Safety. Data in the RTSORAC model is represented by objects. Safety for an object o is
defined as follows:
Object-Safety_o ≡ ∀a ∈ o_A (a.ImpAmt ≤ data ε-spec_a)
where o_A is the set of attributes of o. That is, if every attribute in an object meets its specified
imprecision constraints, then the object is safe.
Transaction Safety. Transactions in the RTSORAC model operate on objects through the methods
of the object. Data values are obtained through the return arguments of the methods and are
passed to the objects through the input arguments of methods. Let t MI be the set of method
invocations in a transaction t and let o M be the set of methods in an object o. We denote the
In [13] the terms import inconsistency t;x and export inconsistency t;x are used. We have renamed them to
import imprecision t;x and export imprecision t;x .
method invocations on o invoked by t as t_MI ∩ o_M. We define safety of a transaction (OT) t with respect to an object o as follows:
OT-Safety_{t,o} ≡ ∀m ∈ (t_MI ∩ o_M): (∀r ∈ ReturnArgs(m), r.ImpAmt ≤ import_limit_r) ∧ (for every input argument i of m, i.ImpAmt ≤ export_limit_i)
That is, as long as the arguments of the method invocations on object o invoked by OT t are
within their imprecision limits, then t is safe with respect to o.
We can now define Object Epsilon Serializability (OESR) as:
Definition 2 (OESR) OESR is guaranteed if and only if OT-Safety_{t,o} and Object-Safety_o are invariant for every object transaction t and every object o.
This definition of OESR is a specialization of the general definition of ESR.
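To make the adapted criterion concrete, the object- and transaction-level checks can be aggregated over a whole database state. The sketch below is a hypothetical illustration; the data layout and names are assumptions, and export limits are omitted as in the simple transaction model discussed later.

# Hypothetical data layout: an object maps attribute name -> (ImpAmt, eps-spec);
# a transaction maps, per object it touches, return argument -> (ImpAmt, import limit).

def object_safe(obj):
    return all(imp <= spec for imp, spec in obj.values())

def ot_safe(txn_args):
    return all(imp <= limit for imp, limit in txn_args.values())

def oesr(objects, transactions):
    # OESR holds iff every object and every (transaction, object) pair is safe.
    return (all(object_safe(o) for o in objects.values()) and
            all(ot_safe(args) for txn in transactions.values()
                              for args in txn.values()))

sub = {"Speed": (0.9, 1.0)}                   # object: attribute -> (ImpAmt, eps-spec)
t1 = {"Submarine": {"S1": (0.2, 0.5)}}        # transaction -> object -> return-arg map
print(oesr({"Submarine": sub}, {"t1": t1}))   # True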
4.3 Restrictions on The Compatibility Function
The RTSORAC compatibility function allows the object type designer to define compatibility
among object methods based on the semantics of the application. We now present two restrictions
on the conditions of the compatibility function that are sufficient to guarantee OESR. Intuitively,
these restrictions allow read/write and write/write conflicts over affected sets of methods as long
as specified imprecision limits are not violated.
The imprecision that is managed by these restrictions comes from interleavings allowed by the
compatibility function. Any imprecision that may be introduced by calculations performed by
the methods is accumulated the initial imprecision procedure before the compatibility function is
evaluated (see Section 3.2.2).
Let a be an attribute of an object o, and m 1
be two methods of o.
Restrictions
R1: If a ∈ WriteAffected(m_1) ∩ WriteAffected(m_2), then the compatibility functions CF(m_1, m_2) and CF(m_2, m_1) may return TRUE only if they include the conjunctive clause:
|z_1 − z_2| ≤ (data ε-spec_a − a.ImpAmt), where z_1 and z_2 are the values written to a by m_1 and m_2, respectively. Furthermore, the compatibility function's associated imprecision accumulation must specify the following for a: a.ImpAmt := a.ImpAmt + |z_1 − z_2|.
R2: If a ∈ ReadAffected(m_1) ∩ WriteAffected(m_2), then for every r ∈ ReturnArgs(m_1): let z be the value of r using a's current value, let x be the value written to a by m_2, and let w be the value of r using x. Then:
a) the compatibility function for CF(m_2, m_1) may return TRUE only if it includes the conjunctive clause: |z − w| ≤ (import_limit_r − r.ImpAmt). Furthermore, the compatibility function's associated imprecision accumulation must specify the following for r: r.ImpAmt := r.ImpAmt + |z − w|.
b) the compatibility function for CF(m_1, m_2) may return TRUE only if it includes the conjunctive clause: |z − w| ≤ (import_limit_r − (r.ImpAmt + x.ImpAmt)). Furthermore, the compatibility function's associated imprecision accumulation must specify the following for r: r.ImpAmt := r.ImpAmt + |z − w| + x.ImpAmt.
Restriction R1 captures the notion that if two method invocations interleave and write to the same attribute a, the amount of imprecision that may be introduced into a is at most the distance between the two values that are written (|z_1 − z_2|). To maintain safety, this amount cannot be greater than the imprecision limit less the current amount of imprecision for a (data ε-spec_a − a.ImpAmt). The accumulation of this imprecision in a.ImpAmt is also reflected in R1.
As an example of restriction R1, recall the compatibility function example of Figure 2B of Section
3.1. Notice that the Speed attribute is in the write affected set of the method UpdateSpeed and
thus restriction R1 applies to the compatibility function CF(UpdateSpeed_1(S_1), UpdateSpeed_2(S_2)). The value written to the Speed attribute by UpdateSpeed_1 is S_1 and the value written to Speed by UpdateSpeed_2 is S_2. Thus, the compatibility function CF(UpdateSpeed_1(S_1), UpdateSpeed_2(S_2)) may return TRUE only if |S_1 − S_2| ≤ (data ε-spec_Speed − Speed.ImpAmt).
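A hypothetical sketch of how restriction R1 could be realized in code is given below; the attribute structure and function names are assumptions for illustration, not part of the RTSORAC implementation.

# Hypothetical sketch of an R1-style compatibility clause for two writers of attribute a.

class Attr:
    def __init__(self, eps_spec, imp_amt=0.0):
        self.eps_spec = eps_spec   # data eps-spec_a
        self.imp_amt = imp_amt     # a.ImpAmt

def r1_compatible(a, z1, z2):
    # Conjunctive clause required by R1: |z1 - z2| <= data eps-spec_a - a.ImpAmt.
    return abs(z1 - z2) <= a.eps_spec - a.imp_amt

def r1_accumulate(a, z1, z2):
    # Imprecision accumulation required by R1.
    a.imp_amt += abs(z1 - z2)

speed = Attr(eps_spec=1.0, imp_amt=0.3)
if r1_compatible(speed, 10.0, 10.6):
    r1_accumulate(speed, 10.0, 10.6)
print(round(speed.imp_amt, 1))   # 0.9, still within the 1.0 eps-spec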
Restriction R2 is based on the fact that if a method invocation that reads an attribute (m_1) is interleaved with a method invocation that writes to the same attribute (m_2), the view that m_1 has of the attribute (in return argument r) may be imprecise. In R2a the amount of imprecision in m_1's view of the attribute is at most the distance between the value of the attribute before m_2's write takes place and the value of the attribute after m_2's write takes place (|z − w|). This amount cannot be greater than the imprecision limits imposed on r less the current amount of imprecision on r (import_limit_r − r.ImpAmt); it also must be accumulated in the imprecision amount of r.
Restriction R2b differs from R2a in that in R2b m_1 is currently active and m_2 has been requested. The initial imprecision procedure for m_1 computes the amount of imprecision that m_1 will return through r when m_1 is invoked, and thus r.ImpAmt does not include the amount of imprecision that m_2 might introduce into a (x.ImpAmt). Because allowing the interleaving between m_1 and m_2 could cause any imprecision introduced into a to be returned by m_1 through r, the additional amount of imprecision introduced to a by m_2 (x.ImpAmt) must be taken into account when testing for compatibility between m_1 and m_2. It must also be included in the accumulation of imprecision for r.
Figure 2A of Section 3.1 presents an example of a compatibility function that meets restriction R2b. Notice that the function will evaluate to TRUE only if the difference between the value of the Speed attribute before the update takes place (Speed.value) and the value of the attribute after the update takes place (S_2.value) is within the allowable amount of imprecision specified for the return argument of the GetSpeed method. Notice also that this allowable amount of imprecision must take into account the amount of imprecision already in the return argument (S_1.ImpAmt) and the amount of imprecision in the argument used to update the Speed attribute (S_2.ImpAmt).
Each of the restrictions requires that non-serializable interleavings are allowed only if certain conditions involving argument amounts evaluate to TRUE. Thus, for CF(m_1, m_2), if m_1 or m_2 is a future method invocation, then the restrictions require that only serializable interleavings be allowed. Therefore, no imprecision will be accumulated when one or both method invocations being tested for compatibility is a future method invocation.
We call the concurrency control technique that results from placing Restrictions R1 and R2 on the compatibility function the restricted semantic locking technique.
4.4 Correctness
We now show how the restricted semantic locking technique guarantees OESR. First, we prove a
lemma that Object-Safety remains invariant through each step of the semantic locking mechanism.
We then prove a similar lemma for OT-Safety. Both of these lemmas rely on the design of the restricted
semantic locking technique, which contains tests for safety conditions before each potential
accumulation of imprecision.
It is sufficient to demonstrate that safety is maintained for semantic lock requests for simultaneous
method invocations only, since this is the only part of the semantic locking mechanism that
can introduce imprecision into data and transactions. A semantic lock request for a future method
invocation m does not introduce imprecision because the argument amounts are not known. Thus
restrictions require that no imprecision be accumulated when interleaving m with any
other method invocation. Lock releases also do not introduce imprecision.
Lemma 1 If the restricted semantic locking technique is used, then Object-Safety o is invariant
for every object o.
Proof:
Let o be an object and o A be the set of attributes in o. We assume that o is initially
safe and that the restricted semantic locking technique is used. Consider the steps
in the semantic locking mechanism (Figure 4) in which the imprecision amount of a, a.ImpAmt, is updated:
• (Step C) Imprecision is accumulated if the preconditions for a requested method invocation m hold and a ∈ WriteAffected(m). Since the preconditions hold, Step C_2 ensures a.ImpAmt = m.ExportImp(a), and from Precondition (b): m.ExportImp(a) ≤ data ε-spec_a. Combining these two relations we have that a.ImpAmt ≤ data ε-spec_a, which is the requirement for Object Safety. Thus, Object Safety remains invariant after Step C.
• (Step LA) Imprecision is accumulated in Step LA_2 if the compatibility function evaluation in Step LA_1 for method invocations m_1 and m_2 evaluates to TRUE and a ∈ WriteAffected(m_1) ∩ WriteAffected(m_2). In this case, the imprecision after Step LA_2 is a.ImpAmt_new = a.ImpAmt_old + |z_1 − z_2|, where z_1 and z_2 are the values written to a by m_1 and m_2, respectively. From Restriction R1 we have that |z_1 − z_2| ≤ data ε-spec_a − a.ImpAmt_old. This inequality can be rewritten as a.ImpAmt_old + |z_1 − z_2| ≤ data ε-spec_a. Combining this relation with the above relation involving a.ImpAmt_new yields: a.ImpAmt_new ≤ data ε-spec_a, which is the requirement for Object Safety. Thus, Object Safety remains invariant after Step LA. □
Lemma 2 If the restricted semantic locking technique is used, then OT -Safety t;o is invariant for
every transaction t with respect to every object o.
Proof:
Let o be an object, t be a transaction, m be a method invocation on invoked by t,
r be a return argument of m, and i be an input argument of m. We assume that t
is initially safe with respect to o and that the restricted semantic locking technique is
used. We show that r:ImpAmt - import limit r first for the case when a semantic lock
for m is requested by t and then for the case when t holds the semantic lock for m.
Case 1. Transaction t requests a semantic lock for m and a semantic lock is held for
another method invocation m 1
. Consider the situations in which r:ImpAmt is
updated by the semantic locking mechanism:
• (Step C) Imprecision is accumulated if the preconditions for m hold. Since the preconditions hold, Step C_2 ensures r.ImpAmt = m.ImportImp(r), and from Precondition (c): m.ImportImp(r) ≤ import_limit_r. Combining these two relations we have that r.ImpAmt ≤ import_limit_r, which is the requirement for OT Safety. Thus, OT Safety remains invariant after Step C.
• (Step LA) Imprecision is accumulated in Step LA_2 if the compatibility function evaluation in Step LA_1 for CF(m_1, m) evaluates to TRUE and ReadAffected(m) ∩ WriteAffected(m_1) ≠ ∅. In this case, the imprecision after Step LA_2 is r.ImpAmt_new = r.ImpAmt_old + |z − w|, where z is the value of r using the current value of a, and w is the value of r using the value written by m_1 to a. From Restriction R2a we have that |z − w| ≤ import_limit_r − r.ImpAmt_old. This inequality can be rewritten as r.ImpAmt_old + |z − w| ≤ import_limit_r. Combining this relation with the above relation involving r.ImpAmt_new yields: r.ImpAmt_new ≤ import_limit_r, which is the requirement for OT Safety. Thus, OT Safety remains invariant after Step LA. □
Case 2. Transaction t holds the semantic lock for m and a semantic lock is requested for m_1. In this case, r.ImpAmt can only be updated in Step LA of the semantic locking mechanism and only when the compatibility function evaluation in Step LA_1 for CF(m, m_1) evaluates to TRUE and ReadAffected(m) ∩ WriteAffected(m_1) ≠ ∅. In this case, the imprecision after Step LA_2 is r.ImpAmt_new = r.ImpAmt_old + |z − w| + x.ImpAmt, where x is the value written to a by m_1, z is the value of r using a's current value, and w is the value of r using x. From Restriction R2b we have that |z − w| ≤ import_limit_r − (r.ImpAmt_old + x.ImpAmt). This inequality can be rewritten as r.ImpAmt_old + |z − w| + x.ImpAmt ≤ import_limit_r. Combining this relation with the above relation involving r.ImpAmt_new yields: r.ImpAmt_new ≤ import_limit_r, which is the requirement for OT Safety. Thus, OT Safety remains invariant after Step LA. □
The other OT safety property, i.ImpAmt ≤ export_limit_i, is trivially met because the semantic locking technique does not limit the amount of imprecision that is exported by a transaction to other transactions or to objects. As stated in [18], if transactions execute simple operations, the export limit can be omitted and the transaction can rely completely on data ε-specs for imprecision control. The simple model of transactions of Section 2.2 allows us to define, for all input arguments i, an unbounded export_limit_i. Thus, regardless of the value of i.ImpAmt, OT safety is invariant. □
Theorem 1 If the restricted semantic locking technique is used, then OESR is guaranteed.
Proof: Follows from Definition 2, Lemma 1, and Lemma 2. □
Theorem 1 shows that if the restricted semantic locking technique is used, the imprecision
that is introduced into the data and transactions is bounded. Because OESR is guaranteed across
all objects and all transactions, this result shows that the restricted semantic locking technique
maintains a single, global correctness criterion that bounds imprecision in the database.
4.5 Example
We use an example of a Submarine object, which is an instance of the object type in Figure 1
of Section 2, to illustrate how the semantic locking technique maintains the imprecision limits of
a data object and therefore guarantees OESR. The object's method UpdateSpeed(S) writes the
value S to the value field of the object's Speed attribute. We assume that the Speed attribute is
initially precise, that the only active lock is for a simultaneous invocation of UpdateSpeed(10.0), and that the object's priority queue is empty. Let a transaction request a lock for a simultaneous invocation of UpdateSpeed(10.6), where the value 10.6 has 0.3 units of imprecision in it. As indicated in Figure 1, the imprecision limit on Speed is data ε-spec_Speed = 1.0.
When the Submarine object receives the request for the UpdateSpeed(10:6) method invocation
it executes the semantic locking mechanism of Figure 4. First it computes the initial imprecision
procedure (Step A). Speed is the only attribute in the write affected set of UpdateSpeed and
UpdateSpeed has no return arguments, so the initial imprecision procedure computes
UpdateSpeed.ExportImp(Speed). Because the invocation UpdateSpeed(10.6) writes 10.6 to Speed with 0.3 units of imprecision, UpdateSpeed.ExportImp(Speed) = 0.3.
The preconditions for the requested UpdateSpeed(10.6) method invocation are tested next (Step B). Precondition (a) trivially holds because ReadAffected(UpdateSpeed) = ∅. Precondition (b) also holds since UpdateSpeed.ExportImp(Speed) = 0.3 ≤ 1.0 = data ε-spec_Speed. Because UpdateSpeed has no return arguments, Precondition (c) holds as well.
Step C of the semantic locking mechanism then initializes the imprecision amount for the Speed attribute to the value of UpdateSpeed.ExportImp(Speed), so Speed.ImpAmt = 0.3.
Because the semantic lock was requested for a simultaneous method invocation, the condition in
Step D is TRUE and a semantic lock request is performed (Step E). In Step LA_1, the object's semantic locking mechanism checks the compatibility of the requested invocation of UpdateSpeed(10.6) with the currently locked invocation of UpdateSpeed(10.0). Recall from Figure 2 and the example in Section 4.3 that CF(UpdateSpeed_1(S_1), UpdateSpeed_2(S_2)) may return TRUE only if |S_1 − S_2| ≤ data ε-spec_Speed − Speed.ImpAmt. The test of the compatibility function uses the imprecision amount for Speed that was stored in Step C; since |S_1 − S_2| = |10.0 − 10.6| = 0.6 and data ε-spec_Speed − Speed.ImpAmt = 1.0 − 0.3 = 0.7, the method invocations are compatible in Step LA_1.
Now the object's semantic locking mechanism executes Step LA_2 to accumulate imprecision for the Speed attribute into the imprecision amount for Speed stored in Step C. Recall from Figure 2 that the imprecision accumulation associated with CF(UpdateSpeed_1(S_1), UpdateSpeed_2(S_2)) specifies Speed.ImpAmt := Speed.ImpAmt + |S_1 − S_2|. Thus, the mechanism computes a new value for the imprecision amount for the Speed attribute as: Speed.ImpAmt := 0.3 + |10.0 − 10.6| = 0.9.
Because there are no other active locks to check for compatibility, the compatibility function evaluates to TRUE. The object's mechanism grants a semantic lock for the invocation of UpdateSpeed(10.6) and adds the lock to the object's active lock set (Step LC). Finally the semantic locking mechanism executes UpdateSpeed(10.6) (Step H). Note that the imprecision amount for the Speed attribute is now 0.9. Both UpdateSpeed method invocations execute concurrently and the imprecision limits are maintained.
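The arithmetic of this walk-through can be retraced mechanically; the following small Python sketch (illustrative only) reproduces Steps C, LA_1, and LA_2 for the two UpdateSpeed invocations.

# Retracing the example: active UpdateSpeed(10.0), requested UpdateSpeed(10.6)
# carrying 0.3 units of imprecision, data eps-spec_Speed = 1.0.

eps_spec_speed = 1.0
s1, s2 = 10.0, 10.6
export_imp = 0.3                       # UpdateSpeed.ExportImp(Speed) for the request

speed_imp = export_imp                 # Step C: initialize Speed.ImpAmt
compatible = abs(s1 - s2) <= eps_spec_speed - speed_imp   # Step LA_1: 0.6 <= 0.7
if compatible:
    speed_imp += abs(s1 - s2)          # Step LA_2: accumulate to 0.9

print(compatible, round(speed_imp, 1))  # True 0.9  (<= eps-spec, so OESR is preserved)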
Although we have only demonstrated relatively simple method interleavings in this example
(essentially two writes to a single attribute), the use of read affected and write affected sets in the
semantic locking technique allows it to perform in a similar fashion for more complicated object
methods.
5 Conclusion
This paper has presented a model and an object-based semantic real-time concurrency control
technique capable of enforcing both temporal and logical consistency constraints within real-time
database objects. Moreover, it demonstrated that the technique can bound the imprecision that is
introduced when one constraint is traded off for another. This was done by showing that, under
certain general restrictions, the technique guarantees a global correctness criterion - a specialization
of epsilon serializability for object-oriented databases.
Although our technique is designed for soft real-time databases and therefore offers no guarantees
of meeting timing constraints, the support that it provides for real-time is in its treatment
of temporal consistency requirements. The user-defined compatibility function provides support
for maintaining data temporal consistency by allowing the specification of the trade-off between
temporal and logical consistency. Because our technique allows for relaxing serializability among
transactions, the likelihood that the real-time scheduler will be able to determine a schedule that
maintains transaction timing constraints is increased.
Our technique differs from most previous real-time concurrency control work [9] and semantic
concurrency control work in [2, 5, 6] because it is based on an object-oriented data model. It differs
from the object-based concurrency control work in [7, 8, 12, 17, 21] because it incorporates temporal
consistency requirements. It differs from all of these approaches and the real-time concurrency
control work in [10] in that it can manage and bound imprecision that can be accumulated due to
trading off logical consistency for temporal consistency. It differs from other ESR-based techniques
because it can limit logical imprecision to be introduced only if temporal consistency of data
or transactions is threatened.
Our semantic locking technique is closest to the concurrency control protocol presented in [8].
This protocol uses commutativity with bounded imprecision to define operation conflicts. It is
similar to our protocol in that the user defines the allowed amount of imprecision for a given
operation invocation and the protocol uses a modified commutativity table to determine if the
operation can execute concurrently with the active operations. However, the protocol in [8] does
not take temporal considerations into account. Furthermore, our restrictions on the compatibility
function, defined in Section 4.3, provide the user with a guide for defining the compatibility function
to maintain correctness. There is no similar guide in [8] for defining commutativity with bounded
imprecision.
Two drawbacks of our technique are the complexity posed to the system designer and the
additional overhead required for the run-time system to grant locks. One reason for the complexity
is that applications that require real-time database management, such as submarine command and
control, are generally more complex than those that can be supported by traditional databases.
Adding support for imprecision maintenance, while providing a potential increase in concurrency,
also adds to the complexity of the technique. We are currently developing a tool to ease some
of the burden on system designers. The tool computes read and write affected sets of methods,
along with other static characteristics, and proposes default object compatibility functions and
imprecision accumulation. The designer can then interactively modify the compatibility function
or the constraints of objects or transactions based on application-specific semantic information.
Although the performance measurements for our technique in our prototype system indicate
that it takes on the order of hundreds of microseconds (depending on the number of current locks
and requests) to execute semantic locking, the extra overhead is not prohibitive. It does indicate
that semantic locking is not appropriate for applications with short method executions and lock
durations. For longer-lived method executions and transactions, the increased concurrency of semantic
locking will easily justify the increased overhead. We are currently performing simulation
studies to more accurately specify under what circumstances semantic locking is superior. Unfor-
tunately, bounding the overhead and the blocking time that are introduced by the semantic locking
technique is not feasible due to the complexity of the technique; this limits its usefulness in hard
real-time databases.
We believe that the generality of our technique (a conditional compatibility function and semantic
locking mechanism distributed to each object), the treatment of temporal consistency, the
definition of restrictions that are sufficient to bound imprecision, and the definition of an object-oriented
version of ESR, are valuable contributions towards expressing and enforcing imprecision in
object databases as well as providing support for maintaining both temporal and logical consistency
found in real-time databases.
Acknowledgements
We thank Joan Peckham, Janet Prichard, Paul Fortier, and Krithi Ramamritham
for their helpful comments and suggestions. We thank John Black for his work in
implementing the prototype system and his feedback along the way.
--R
"Real-time databases,"
"Using semantic knowledge for transaction processing in a distributed database system,"
"Object-based semantic real-time concurrency control,"
"Concurrency control in advanced database applications,"
"Multilevel concurrency - a new correctness criterion for database concurrency control,"
"Using semantic knowledge of transactions to increase concurrency,"
"Synchronizing shared abstract types,"
"Tolerating bounded inconsistency for increasing concurrency in database systems,"
"On real-time databases: Concurrency control and scheduling,"
"SSP: A semantics-based protocol for real-time data access,"
"Semantic locking in object-oriented database sys- tems,"
"A formal characterization of epsilon serializability,"
"RTSORAC: A real-time object-oriented database model,"
"Real-time considerations in submarine target motion analysis,"
Concurrency Control and Recovery in Database Systems.
"Synchronizing transactions on objects,"
"Asynchronous consistency restoration under epsilon serializability,"
"Architechture of an open object-oriented database management system,"
"RTSORAC: Incorporating real-time into object-oriented database management,"
"Commutativity-based concurrency control for abstract data types,"
"Divergence control for epsilon-serializability,"
--TR
--CTR
Victor Fay Wolfe , Lisa Cingiser Dipippo , Roman Ginis , Michael Squadrito , Steven Wohlever , Igor Zykh , Russell Johnston, Expressing and Enforcing Timing Constraints in a DynamicReal-Time CORBA System, Real-Time Systems, v.16 n.2-3, p.253-280, May 1999
Kwok-wa Lam , Sang H. Son , Victor C. S. Lee , Sheung-lun Hung, Relaxing consistency requirement for read-only transactions, Information SciencesInformatics and Computer Science: An International Journal, v.143 n.1-4, p.115-146, June 2002
Salvatore T. March , Charles A. Wood , Gove N. Allen, Research Frontiers in Object Technology, Information Systems Frontiers, v.1 n.1, p.51-74, July 1999 | semantic concurrency control;bounded imprecision;real-time object-oriented databases |
627837 | Reusing Analogous Components. | Using formal specifications to represent software components facilitates the determination of reusability because they more precisely characterize the functionality of the software, and the well-defined syntax makes processing amenable to automation.
The major objectives of a reuse system are to classify the reusable components, to retrieve
them from an existing library, and to modify the retrieved components to satisfy the query
specification [1, 2]. In previous investigations, the construction and retrieval processes have been
formally specified and implemented [3, 4, 5, 6]. From a set of reusable software components
formally specified, a two-tiered hierarchy of software components is constructed. The formal
specifications represent software that has been implemented and verified for correctness. The
lower-level hierarchy is created by a subsumption test algorithm that determines whether one
component is more general than another; this level facilitates the application of logical reasoning
techniques for a fine-grained, exact determination of reusable candidates. The higher-level hierarchy
provides a coarse-grained determination of reusable candidates and is constructed by applying a
hierarchical clustering algorithm to the most general components from the lower-level hierarchy.
The hierarchical structure provides a means for representing, storing, browsing, and retrieving
reusable components. Furthermore, the formal specifications provide a means for verifying that
a given software component correctly satisfies the current problem. Figure 1 shows the two-tiered
hierarchy of a set of container-based software components that are formally specified, where
rectangular nodes represent the specifications of individual components and oval nodes represent a
collection of specifications that have been clustered according to syntactic similarities.
Once the reusable components are retrieved, they typically cannot be used directly for the
implementation of the query specification.
Numerous software reuse projects have explored the use of analogy and other similarity-based
techniques to determine software reuse. Due to space constraints, the descriptions of the projects
are not given here, but may be found in [22]. This paper describes a new approach to retrieving and modifying components based on analogies between existing and query specifications. Analogical relationships between the query specification and the specification of
the existing component can be used to guide the changes to the program code for the existing
specification. Analogical reasoning has long been recognized as an important tool to overcome the
search complexity of finding solutions to novel problems or inducing generalized knowledge from
experience [7]. Analogy presents a basic and challenging question: when are two specifications
(problem representations), for a given purpose, alike? [8].
The development of programs based on a series of transformations has been extensively
investigated [9, 10, 11, 12]. Program modification is different from traditional program
transformation because a program transformation is typically correctness preserving with respect to
Figure
1. The two-tiered hierarchy of ADT software components.
the original specification, but the program modification approach needs a program that satisfies its
original input-output specification along with the specification for a new program. Dershowitz [13]
developed an approach to program construction by modification based on the observation that
programmers only devote a limited amount of time and effort to newly develop code for a given
specification. Programmers often apply their knowledge about earlier programs to the development
of similar problems. Our work focuses on augmenting Dershowitz's methods in order to make it
amenable to automatic applications and facilitate software reuse.
The remainder of this paper is organized as follows. Section 2 presents the formal specification
notation used to describe a reusable software component. Section 3 presents the analogical matching
process, that is, how to find a set of analogical matches between an existing specification and the
query specification. Section 4 describes our program modification model based on analogy. Section 5
gives an example of modifying an analogous component based on the analogical matches between
the existing and query specifications. Section 6 describes related projects that have used analogy
or similarity-based techniques to determine software reuse. Section 7 gives concluding remarks and
briefly overviews future work.
2 Formal Specifications of Software Components
First-order predicate logic (FOPL) has been commonly used to specify programs [14, 15, 16, 17]. In
order to specify and reason about programs with data types other than arrays and simple variables,
sorts (types) are added to FOPL to obtain order-sorted predicate logic (OSPL). Moreover, order-sorted
specifications have been shown to be a useful tool for describing partially defined functions
and error handling in the specification of abstract data types [18, 19]. Order-sorted predicate logic
(OSPL) based on order-sorted specifications can be used to represent typed knowledge, where a
hierarchy gives the relationships among different types. A sort refers to the data types of a
given system. The sort hierarchy begins with primitive data types, such as int, float, and addr,
and is recursively built using structures, arrays, and sets. We use order-sorted predicate logic to
specify software components. The relationship between two components, that is, the reusability of
one component with respect to another, is based on the sort information and a logical subsumption
test applied to the specification body. For further details regarding the syntax and the semantics
of OSPL, the reader is referred to the Appendix.
In general, a software component can consist of requirements, design knowledge, code segments,
or test plans. A component can be used as the vehicle for encapsulation and data hiding, and it also
provides the basic unit of reusability. We define a component explicitly to be a user-defined type
whose behavior is described by a formal specification. The skeleton of a component specification is
shown as:
component component name identifier
{
inherit: component name identifier*
(method: method name identifier)*
}
The key word inherit indicates that the current component inherits the properties from the
components of previously defined components. The specifications in the method section define
the behavior of methods in this component. The format of a method specification is:
method method name((Var : DomainSort)*) : RangeSort
requires pre-expression
modifies variables
ensures post-expression
The expressions used to specify a method of a given component, including pre-expression and
post-expression, are based on OSPL. For each method, the interface specifies both the domain sorts
and the range sort. The requires clause describes restrictions on the arguments, which defines how
the method may be invoked. Although equality is not defined in OSPL, the expressions containing
equality can always be transformed into pure OSPL expressions [20]. Variables that have a prime (e.g., stack') refer to the latest value of a given variable. We interpret an omitted requires
clause as equivalent to "requires true." The ensures clause places constraints on the behavior of
the method. The requires and ensures clauses relate two states of the program: the state when
the method is called, which we call a precondition, and the state when it terminates, which we
call a postcondition. A requires clause only refers to the values in the precondition. An ensures
clause may refer to values in the pre- and the postconditions. A modifies clause describes which
variables can be changed. An omitted modifies clause is equivalent to the assertion modifies
nothing, meaning no objects are allowed to change in value during the execution of the method.
An example method specification for the component Stack is shown in Figure 2, which consists of
a function prototype followed by a body specified in terms of pre- and postconditions.
component Stack
{
method create: (stack : Stack)
modifies stack;
ensures ...;
method destroy: (stack : Stack)
modifies stack;
ensures trashed(stack);
method push: (stack : Stack, newElement)
requires ¬full(stack);
modifies stack;
ensures top(stack', newElement);
method detach: (stack : Stack, topElement)
requires ¬empty(stack);
modifies stack;
ensures top(stack, topElement) ∧ ...;
method topElement: (stack : Stack)
requires ¬empty(stack);
ensures top(stack, topElement);
}
Figure
2. Component specification for Stack.
Figure
2 asserts that the method create is a constructor of this component, the method destroy
is a destructor of this component, the method push adds an element to a stack, the method detach
deletes an element from a stack, and the method topElement returns a top element belongs to some
stack. Moreover, the variables in the expressions without quantifiers are assumed to be universally
quantified.
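A reuse tool must hold such specifications in some internal form before any matching can be done. The sketch below shows one possible (assumed) Python representation of a method specification; it is not the representation used by the prototype described later, which is written in a Prolog dialect.

# Hypothetical internal representation of a method specification (requires/modifies/ensures).

from dataclasses import dataclass, field
from typing import List

@dataclass
class MethodSpec:
    name: str
    params: List[str]                  # e.g. ["stack: Stack", "newElement"]
    requires: str = "true"             # an omitted requires clause defaults to true
    modifies: List[str] = field(default_factory=list)   # omitted modifies = nothing
    ensures: str = "true"

push = MethodSpec(
    name="push",
    params=["stack: Stack", "newElement"],
    requires="not full(stack)",
    modifies=["stack"],
    ensures="top(stack', newElement)")

print(push.name, push.requires)        # push not full(stack)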
3 Analogical Matching
An analogical match is defined to be a group of pairings between symbols in terms of candidate
and query specifications, where the pairings are based on some type of similarity. Consider
the following two expressions from the theory of abstract data types: { top(push(stack, α)) = α, head(enque(queue, β)) = β }. An analogical matching process may generate the following set of analogical matches: {top ↦ head, push ↦ enque, stack ↦ queue, α ↦ β, = ↦ =}. The above example exhibits a bijective mapping between the terms top(push(stack, α)) = α and head(enque(queue, β)) = β.
However, some features are needed in order to increase the flexibility of an analogical match.
For example,
ffl The variable, predicate, function, and constant symbols may be matched with different
variable, predicate, function, and constant symbols, respectively.
ffl The arguments for predicates and functions that are matched may be permuted by the
matching process. Since the argument order of functions and predicates is often arbitrary,
it is obviously unreasonable to insist that matches preserve argument order. Therefore, we
allow for permutations of arguments in order to increase the scope of applicability.
ffl Semantic information, such as the sort hierarchy and equivalence classes should be
incorporated into the analogical matching process.
ffl Techniques that seek syntactical similarities can be used to reduce the computational
complexity of the analogical matching process.
ffl Some symbols and terms may be left unmatched after the analogical matching process, i.e.,
loosening the restriction of bijective mapping.
In general, there is no universally accepted or recognized algorithm for determining software
reuse based on analogy. Furthermore, there is no formal theory or rule that rigorously describes
a process that will guarantee the generation of a useful analogical match [21]. Therefore, most
analogical matching algorithms use heuristics to direct searches for useful analogical matches. A
given heuristic captures system-defined criteria as to what constitutes a reasonable analogy.
3.1 Heuristics
Using the same example as in the previous section, a matching is an association between the two terms; i.e., a subset of the Cartesian product of the sets of symbol occurrences in the terms. In this example, each term contains 6 symbols, so the Cartesian product contains 6 × 6 = 36 pairs, and hence has 2^36 subsets. Clearly, some heuristics are needed to prune the search space.
When a heuristic is used in analogical reasoning systems, it must be determined as to what kind
of information the system should have to enhance the applicability of the heuristics, that is, what
contextual knowledge should be included in the heuristics. In order to develop a reuse system based
on analogy, it should support both the use of domain-specific knowledge and domain-independent
techniques in the search and the modification processes. It is assumed that the majority of the
domain-specific information is supplied interactively by the user, guided by a framework provided
by the domain-independent techniques.
Identical associations are often believed to make good analogues; similarly, matches containing high proportions of identical associations make good analogies. Here, identical
associations refer to those that are purely based on syntactic information. We call this approach
the identity heuristic. However, most interesting analogies involve a significant proportion of non-identical
associations. The similarities are incorporated to determine which analogical match
is more promising. The similarity of a match can be defined by the distance between the
associated terms. (The definition of distance is defined in Section 3.2.) We call this heuristic the
similarity-based heuristic. This heuristic has already been incorporated into our system to classify
a set of software components and to retrieve a set of candidate components from a component
library [3, 4, 5, 6].
Another promising analogical approach is to consider matches that take into account the
structure of the terms. We call this approach a structure-based heuristic. Most analogical reasoning
systems use some form of structural mapping to find the analogies between two problems. One
approach is to make use of a sort hierarchy (i.e., primitive types as well as those types developed
constructively) or other type information to make a similarity judgement, thus earning the name
sort-based heuristic. We define two terms to be analogous if they have common ancestors in the sort
hierarchy. An analogical matching process should favor the association of two analogous terms.
There are some heuristic criteria that prefer matches between items of the same or similar types
according to an equivalence class partition of symbols (predicates, function symbols, or constants).
We call this approach the equivalence-based heuristic, which requires the system designers to define
the equivalence classes for the predicate and function symbols that specify the software components.
In our system [3, 4, 5, 6], the construction process assesses the equivalence class for each of the
predicates and functions and constructs a unified hierarchy of software components. For example,
both the function length(queue) that gives the length of a queue and the function size(stack)
that gives the size of a stack belong to the equivalence class cardinality(container) that gives
the cardinality of an entity container. In all the analogical matches considered in this reuse
framework, predicates are only mapped to predicates, operators are matched only to the operators
with the same number of arguments, and propositional connectives are mapped only to propositional
connectives.
Therefore, the analogical matching process begins by retrieving a set of components that have
some type of syntactic similarity as a means to filter the components considered for reuse. Based
on this set, analogies are sought between the query specification and the retrieved specifications.
The analogies found between specifications are then used to guide the changes to the corresponding
source code.
3.2 Computing Similarity
In this section, a simple evaluation method for computing similarity is given. A set of candidate
components that are similar to the query specification are retrieved from a software library based
on the degree of similarity between the existing specifications and the query specification. In this
paper, similarity is quantified by a nonnegative magnitude value called distance. Distances are
computed by several evaluation functions based on the knowledge available from a sort hierarchy
and the concept of an equivalence class. Conceptual distance between two terms is evaluated by
the distance of the shortest path between their corresponding sorts in the sort hierarchy, which
is used in turn to evaluate the similarity between the query specification and the specification of
existing components.
Definition 1 (Distance between two sorts)
Let the distance between two sorts s_1 and s_2 be denoted by D_s(s_1, s_2), where the subscript s refers to sorts. The distance from sort s_1 to sort s_2 is defined as the distance of the shortest path from s_1 to s_2 in the sort hierarchy. If no such path exists, then the path value is set to +∞. If s_1 and s_2 are the same, then the distance is zero.
Figure
3 gives a simple sort hierarchy, in which Set is a subsort of Container, and
Partial-Order-Set is a subsort of Set and so on. The distance between the sort Stack and
the sort DoubleList is 3 according to Definition 1, i.e., D_s(Stack, DoubleList) = 3.
Container
Stack Set List Queue
Partial-Order-Set Integer-Set DoubleList SingleList
Figure
3. A simple sort hierarchy.
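Definition 1 amounts to a shortest-path computation over the subsort graph. The following Python sketch is illustrative only: the edge list is transcribed from Figure 3, and treating subsort links as undirected edges is an assumption consistent with the Stack/DoubleList example above.

# Sketch of D_s over the sort hierarchy of Figure 3, with unreachable pairs infinite.

from collections import deque
from math import inf

EDGES = [("Container", "Stack"), ("Container", "Set"), ("Container", "List"),
         ("Container", "Queue"), ("Set", "Partial-Order-Set"), ("Set", "Integer-Set"),
         ("List", "DoubleList"), ("List", "SingleList")]

graph = {}
for a, b in EDGES:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def sort_distance(s1, s2):
    if s1 == s2:
        return 0
    seen, frontier = {s1}, deque([(s1, 0)])
    while frontier:
        node, d = frontier.popleft()
        for nxt in graph.get(node, ()):   # breadth-first search for the shortest path
            if nxt == s2:
                return d + 1
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return inf

print(sort_distance("Stack", "DoubleList"))   # 3, as in the example above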
Definition 2 (Distance between terms)
Let the distance between two terms t_1 and t_2 be denoted by D_t(t_1, t_2), where the subscript t refers to terms. For some operator with at least one argument, let w_1 be the weight associated with the operator and w_2 be the weight associated with its arguments. Assume 0 ≤ w_i ≤ 1 for all i, and w_1 + w_2 = 1.0. The distance between two terms is defined as follows.
(1) if t_1 and t_2 belong to the same equivalence class, then D_t(t_1, t_2) ≡ 0. % Terms in the same equivalence class are considered similar
(2) if either t_1 or t_2 is a variable, then the distance is determined from the sorts of the two terms and weighted by w_2.
(3) if t_1 and t_2 are terms with arguments, calculate recursively for functions: the distance between the two operators is weighted by w_1, and the pairwise distances between their arguments, averaged over the m × n argument pairs, are weighted by w_2.
Here the weights w_1 and w_2 represent how the distances between the operators and their corresponding arguments contribute to the distance between the terms t_1 and t_2. These weights can be provided either by the domain analysts or the component specifiers to reflect design decisions or domain-specific information.
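Because parts of Definition 2 are only partially legible in this form, the following Python sketch should be read as one plausible instantiation: the unit penalty for unrelated symbols and the averaging over m × n argument pairs are explicit assumptions, as are the equivalence classes and weights used.

# Illustrative sketch of a term distance in the spirit of Definition 2.
# EQUIV maps a symbol to its equivalence class; terms are ("op", [args]) tuples,
# and a bare string stands for a variable or constant.

EQUIV = {"size": "cardinality", "length": "cardinality"}
W1, W2 = 0.6, 0.4                      # operator and argument weights, w1 + w2 = 1.0

def symbol_distance(f, g):
    if f == g or EQUIV.get(f, f) == EQUIV.get(g, g):
        return 0.0
    return 1.0                          # assumed unit penalty for unrelated symbols

def term_distance(t1, t2):
    if isinstance(t1, str) or isinstance(t2, str):
        return symbol_distance(str(t1), str(t2)) * W2
    (f, a), (g, b) = t1, t2
    # average the pairwise argument distances over the m * n argument pairs
    arg_d = sum(term_distance(x, y) for x in a for y in b) / (len(a) * len(b))
    return symbol_distance(f, g) * W1 + arg_d * W2

print(term_distance(("size", ["stack"]), ("length", ["queue"])))   # approximately 0.16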
Definition 3 (Distance between expressions)
Let the distance between two expressions α and β be denoted by D_e(α, β). The distance between two expressions is defined as follows.
(1) if α ≃_pred β (same equivalence class) then D_e(α, β) ≡ 0.
(2) if α ≄_pred β then D_e(α, β) ≡ D_t(α, β).
(3) D_e(α op γ, β op δ) ≡ (D_e(α, β) + D_e(γ, δ)) × 0.5,
where the binary operator op represents the predicate connectives: ∧ (and), ∨ (or), ⇒ (implication), and ⇔ (iff). Before computing the distance between two expressions, the input expressions should
be skolemized, hence, all variables are assumed to be universally quantified. Based on the distances
between terms and expressions, the distance between two methods can be defined as follows.
Definition 4 (Distance between two methods)
Let the distance between two methods m_1 and m_2 be denoted by D_m(m_1, m_2). If pre(m_1) (pre(m_2)) is the precondition of m_1 (m_2), and post(m_1) (post(m_2)) is the postcondition of m_1 (m_2), then the distance between the two methods is defined in terms of the distances between the corresponding pre- and postconditions, D_e(pre(m_1), pre(m_2)) and D_e(post(m_1), post(m_2)).
Finally, the distance between two components can be defined in terms of the distances of their
corresponding methods.
Definition 5 (Distance between two components)
Let the distance of two components C_1 and C_2 be denoted by D_c(C_1, C_2). Provided that C_1 is a candidate component and C_2 is a query component (specification), the distance between C_1 and C_2 is obtained by pairing each method m_2 of C_2 with the method m_1 of C_1 at minimum distance, i.e., by combining the values min_{m_1 ∈ C_1} D_m(m_1, m_2) over the methods m_2 of C_2.
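One plausible reading of Definition 5, with summation over the query methods assumed as the aggregation and a pre-computed toy method distance used purely for illustration, is sketched below.

# Sketch: distance from a candidate component C1 to a query component C2,
# pairing every query method with its closest candidate method.

def component_distance(c1_methods, c2_methods, method_distance):
    return sum(min(method_distance(m1, m2) for m1 in c1_methods)
               for m2 in c2_methods)

# Toy method distances for illustration only.
toy = {("push", "addAtTail"): 0.2, ("push", "addAtHead"): 0.3,
       ("detach", "addAtTail"): 0.8, ("detach", "addAtHead"): 0.7}
d = lambda m1, m2: toy.get((m1, m2), 1.0)
print(component_distance(["push", "detach"], ["addAtTail", "addAtHead"], d))  # 0.5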
3.3 Top-Down Matching Approach
The second step in the analogical matching process is to assess the analogical relationships
between the query specification and the specifications retrieved based on similarity. In the context
of recognizing reusable candidate specifications for solving query specifications, the analogical
matching process should have a flexible notion of an analogy-based match rather than imposing a
design bias towards any single heuristic. It should be easily tailorable to any particular domain-specific
strategies. The heuristics mentioned in Section 3.1 should be refined to be more specific.
That is, some numerical metric should be created to measure the usefulness of a given analogy.
Once a precise definition for the goodness of an analogy match is given, the analogy problem can
be regarded as an optimization problem. For an optimization problem, finding a global optimum
is not typically feasible, therefore we might want to generate a set of local optimal analogies. At
this point, the analogical matching process can be regarded as a recursive problem solving process.
The initial problem is to match the specifications of two terms. As the matching process continues,
new subproblems are produced and recursively solved. No new subproblems will be produced in
two cases:
(1) one of the subproblem's terms is a constant or variable;
(2) no new analogical match is applicable for this subproblem.
An analogical matching process generates a set of analogical matches between two input expressions.
Let a list of terms be of the form [t_1, ..., t_n], where the t_i are terms. The empty list is denoted by [ ]. Similar to PROLOG's definition of lists, if L = [head | Tail] is a list, then head is the first element of L and Tail is a list that is the same as L except that the first element head is deleted from the list. We define the analogies of a pair of lists with the same length. The analogies of two lists with different cardinality will be discussed later.
Definition 6 (The analogies between a pair of lists)
For two lists [a_1, ..., a_n] and [b_1, ..., b_n] of the same length, A([a_1, ..., a_n], [b_1, ..., b_n]) = A(a_1, b_1) ∪ ... ∪ A(a_n, b_n), where A denotes the analogical matching process.
The above definition says that the result of applying an analogical matching to a pair of lists
is equal to the union of the results of matching the corresponding terms in the two input lists.
Since OSPL is used to express the behavior of software components, the object language of our
analogy system is based upon first-order logic with sort hierarchies. The sort- and equivalence-based
heuristics suggest the matches of two terms but the analogical matches are not limited to the terms
in the same equivalence class or the terms with the same sort. We explore how to incorporate
the properties of commutativity into the matching process, because it is unsatisfactory that the
analogical matches from a pair of terms be based on an arbitrary preservation of argument order.
The generation of subproblems from the matching process can be classified into two kinds of
branches: or-branch and and-branch. When we want to match terms containing a commutative
operator, the set of derived subproblems may suggest more than one way of solving the problem.
Therefore, the or-case applies, that is, the current problem should branch into a set of new
subproblems, each generating a new group of analogical matches. For example, consider matching two terms whose outermost operators are max and min. Since max and min are commutative operators, the matching process produces a set of partial matches: {max ↦ min}. Respectively, the matching process generates two or-branch subproblems, one for each way of pairing the arguments of the first term with the (possibly permuted) arguments of the second.
Thus, the current state of matching between a pair of terms involves a set of partial matches and
two sets of or-branch subproblems. We only need to solve one of the or-branch subproblems in
order to proceed to the subsequent stages of the matching process. The or-branch subproblems are
generated by permuting the order of arguments to obtain new sets of argument mappings.
Let us consider another example in which the outermost operators of the two terms, f and h, are not commutative. The matching process generates a set of partial matches {f ↦ h}, and two and-branch subproblems are generated, one for each pair of corresponding arguments, such that the current problem is split into two new sets of subproblems, with each represented as an and-branch together with a partial matching. The newly generated matches from these two subproblems should not conflict with each other, that is, no inconsistent analogical matches will be generated. We will define consistency shortly.
From the above examples, we may conclude that and-branch subproblems are generated
whenever the argument matching within an identical argument mapping is performed; or-branch
subproblems are generated whenever the matching process encounters commutative terms and
attempts to perform argument matching of permuted argument mappings of the terms. The terms
in the latter case include ordinary predicative connectives, for example, ∧ and ∨.
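The distinction between the two kinds of branches can be made concrete: a commutative operator yields one or-branch per argument permutation, while a fixed argument order yields a single and-branch pairing. The sketch below is illustrative only; the set of commutative operators is an assumption.

# Sketch of subproblem generation. Terms are ("op", [args]) tuples.

from itertools import permutations

COMMUTATIVE = {"and", "or", "max", "min", "+", "*"}

def subproblems(t1, t2):
    (f, a), (g, b) = t1, t2
    if len(a) != len(b):
        return []                                  # would require a rewrite rule first
    if f in COMMUTATIVE and g in COMMUTATIVE:
        # or-branches: one candidate pairing per permutation of t2's arguments
        return [list(zip(a, p)) for p in permutations(b)]
    return [list(zip(a, b))]                       # single and-branch pairing

print(subproblems(("max", ["x", "y"]), ("min", ["u", "v"])))
# [[('x', 'u'), ('y', 'v')], [('x', 'v'), ('y', 'u')]]  -- two or-branch alternatives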
According to the previous discussion, we define the matchability of two expressions, i.e., as an
attempt to answer the question "when are two expressions matchable?" The following definition
checks the matchability of two expressions recursively, where the simplest notion of matchability
is that two terms have a common ancestor in the sort hierarchy or the two terms are in the same
equivalence class.
Definition 7 (Matchable)
Two expressions t_1 and t_2 are matchable, denoted by matchable(t_1, t_2), iff one of the following conditions holds.
(1) t_1 and t_2 are analogous, i.e., have a common ancestor in the sort hierarchy.
(2) either t_1 ≃_term t_2 or t_1 ≃_pred t_2 (t_1 and t_2 are in the same term or predicate equivalence class, respectively).
For an analogical matching process A and two inputs t_1, t_2, if t_1 is not matchable with t_2 then A(t_1, t_2) returns an empty set. The definition of consistency for analogical matches is defined in
terms of conflicts, where a conflict occurs when a given term has more than one match in the set of
matches \Theta. Three separate cases are enumerated to indicate simple variables, functions, and lists
of expressions.
Definition 8 Conflict.
Let σ_1 be the match (f ↦ g), where f and g can both be simple identifiers, functions, or lists of terms. σ_1 has a conflict with an existing set of matches Θ, denoted by conflict(σ_1, Θ), iff one of the following conditions holds.
(1) f or g is a simple identifier that already has a different match in Θ.
(2) f and g are functions and a permuted pairing of their arguments conflicts with Θ.
(3) f and g are lists of terms and the match of some pair of corresponding elements conflicts with Θ.
Therefore, a given match oe is consistent with a set of matches \Theta, when there are no conflicts
within \Theta.
Definition 9 Consistent.
Some match σ is consistent with an existing set of matches Θ if σ has no conflicts with Θ, denoted by consistent(σ, Θ). That is, consistent(σ, Θ) ≡ ¬conflict(σ, Θ).
If the matching algorithm is restricted to the preservation of argument order, then the second
requirement of Definition 8 will never be applicable in the matching process.
Definition 10 Scoring function.
Let Φ be a set of analogical matches of the form {x_1 ↦ y_1, ..., x_n ↦ y_n}. Then the score of Φ is computed by a scoring function that accumulates the distances of the matched pairs, score(Φ) = Σ_{i=1}^{n} D(x_i, y_i), where the distance function D is overloaded and applied to identifiers, functions, or lists of terms.
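Conflict detection, consistency checking, and scoring can be combined into a small bookkeeping layer. The sketch below is illustrative; it assumes matches are kept in a dictionary and that a distance function is supplied externally.

# Sketch of conflict / consistency checks and the scoring function over a match set.

def conflicts(match, theta):
    x, y = match
    # a conflict: x already mapped to a different y, or y already the image of another x
    return (x in theta and theta[x] != y) or (y in theta.values() and theta.get(x) != y)

def consistent(match, theta):
    return not conflicts(match, theta)

def score(theta, distance):
    return sum(distance(x, y) for x, y in theta.items())

theta = {"top": "head", "push": "enque", "stack": "queue"}
print(consistent(("push", "enque"), theta))      # True  (already present, no conflict)
print(consistent(("push", "addAtTail"), theta))  # False (push already mapped to enque)
print(score(theta, lambda x, y: 0.0 if x == y else 1.0))  # 3.0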
3.4 Matching Algorithm
An analogical matching process generates a set of analogical matches of two formally specified
input expressions. We define the application of an analogical matching process M to expressions
as follows. Given a matching subproblem that consists of a pair of specifications ff and fi as well
as an existing set of matches \Phi, the matching algorithm attempts to find a new set of consistent
matches and returns it to the user. We assume all variables of the pair of inputs have been either
skolemized or universally quantified. The algorithm, Match Expr, matches two expressions and is
given in Figure 4. The algorithm for matching two terms, Match Term, is given in Figure 5. The
algorithms are based on the analogical matching approach presented in Section 3.3.
Considering the algorithm for matching two expressions, if both inputs are terms then the
algorithm for matching two terms (Match Term) is invoked under the condition that these two
terms are matchable. Case 3 in Algorithm Match Expr gives an example of generating and-branch
subproblems, in which the consistency of the analogical matches between the corresponding items
in two input lists should hold. Cases 5, 6, and 7 in Algorithm Match Expr give the examples of
generating or-branch subproblems. The set of analogical matches with the smaller score is returned
to the calling function.
Let us consider the algorithm for matching two terms (Algorithm Match Term). Case 1 matches
two terms with either one of them being a variable, and the algorithm returns these two terms as
a new match if they are matchable. Similar to Case 3 in Algorithm 1, Case 3 in Algorithm
Match Expr matches two lists. Case 4 matches two operators and at least one of their arguments is
non-commutative. Case 5 matches two operators and both of the input operators have commutative
arguments. Case 6 matches two operators with different numbers of arguments.
Several heuristics are exploited in the matching algorithms. These two algorithms use a top-down
scheme to compare two input expressions or terms. The predicate connectives of two input
expressions or the functor symbols of two input terms should be matched before their arguments'
matchings are performed, hence a structure-based heuristic is applied. However, the commutativity
of the arguments is incorporated into the matching algorithms. If some term is a commutative
Algorithm 1 Match_Expr
Input: Two Σ-expressions and current partial matches Φ.
Let e_1, ..., e_4 be Σ-expressions; t, t_1, and t_2 be Σ-terms. Let ⊗ and ⊕ be the predicate connectives.
Output: A new set of partial matches.
Procedure:
begin
switch
case matching two terms
then return(Match
else return(\Phi);
case matching two empty lists
case 3 ([e 1
matching two lists of expressions
return(Match Expr (T ail
case matching two expressions
return(Match Expr
case 5 ((e
matching two expressions
Match
Match
case 6 ((e
matching an expression with a term
case 7 (t; (e
matching a term with an expression
end.
Figure
4. Matching two expressions.
Algorithm 2 Match_Term
Input: Two terms T_1 and T_2, and a set of partial matches Φ.
Output: A new set of partial matches.
Procedure:
begin
case 1 either T 1 or T 2 is a variable:
g) else return(\Phi);
case
matching two empty lists
case 3 T matching two lists
return(Match
case
matching two operators either f or g is non-commutative
if consistent(ff 7! gg; \Phi) (f ' term g) (f ' pred g) then
return (Match
else return(;);
case
matching two commutative operators
if consistent(ff 7! gg; \Phi) (f ' term g) (f ' pred g) then
Match
Match
else return(;);
case 6 T
matching two operators with different numbers of arguments
return(Match
end.
Figure
5. Matching two terms.
operator, then several variations of the term with permuted arguments are created to generate the
or-branch subproblems, otherwise and-branch subproblems are generated for each pair of the input
arguments. The scoring function based on the distance notions is used to determine which
set of matches is returned, therefore, the similarity-based heuristic is incorporated into the matching
algorithms. The notion of equivalence class is used to determine if two functors can be matched, for
example Cases 4 and 5 in Algorithm Match Term, hence the equivalence-based heuristic is applied
in the algorithm for matching two terms.
The definition of predicate consistency is given in Definition 9. Therefore, implicitly, we
exclude the possibility of a mismatched analogical match that allows a mapping that is not one-
to-one, even though it may be a useful analogy in the real world. If mismatches are allowed
in the analogical matching algorithms, then a considerably large amount of domain knowledge
would need to be encoded in the system's knowledge base. Case 6 in Algorithm Match Term
deals with the condition when the arguments of two input terms have different sizes (m ≠ n).
In this case, we need some transformation rules to "rephrase" the input terms to make their
arguments have the same cardinality. The transformation rules require domain knowledge. For
example, suppose we want to match √a and c/d; since the functions square-root and division have different numbers of arguments, the input terms need to be transformed. If the system has the rule √a = √a / 1, then a match can be easily found: {/ ↦ /, √a ↦ c, 1 ↦ d}. If the system only has the rule √a = √a × 1, then the resulting match must instead map the multiplication operator introduced by the rule to the division operator in c/d.
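The consistency test that guards such matches essentially enforces a one-to-one correspondence between matched symbols. A minimal sketch of such a test is shown below; representing a match set as (source, target) pairs is an assumption made here for illustration and is not the paper's Definition 9 verbatim.

def consistent(new_pairs, current_matches):
    """Return True if adding new_pairs keeps the overall mapping one-to-one.

    Both arguments are collections of (source_symbol, target_symbol) pairs.
    """
    combined = set(current_matches) | set(new_pairs)
    sources = [a for a, _ in combined]
    targets = [b for _, b in combined]
    # One-to-one: no source maps to two targets, and no target is hit twice.
    return len(sources) == len(set(sources)) and len(targets) == len(set(targets))

# Example: the mapping {sqrt(a) -> c, 1 -> d} is consistent,
# but additionally mapping 1 -> c would violate injectivity.
assert consistent([("sqrt(a)", "c"), ("1", "d")], [])
assert not consistent([("1", "c")], [("sqrt(a)", "c"), ("1", "d")])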
Our algorithm provides a framework for a domain-independent matching process, but the domain knowledge can be tailored to more specific types of information. The complexity of this algorithm increases when the matching process encounters a commutative operator. Let level(α) denote the "depth" of an operator α. For example, the depth of a constant or a variable is 0, and level(op(arg1, ..., argn)) = 1 + max{level(arg1), ..., level(argn)}. For each pair of commutative operators α and β, the matching process generates two subproblems. Hence, this algorithm's upper bound is min{level(α), level(β)} × 2^min{level(α), level(β)}. For further details regarding the complexity analysis of the algorithms, please see [22]. In Algorithm Match Term, Case 5 always generates two sets of analogical matches because, currently, the operators that are able to generate or-branch subproblems are commutative and have only two arguments. We assume that Case 6 of Algorithm Match Term transforms the input terms to at most one pair of operators with the same number of arguments.
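As a rough illustration of this bound, the short sketch below computes the level (depth) of a term and the resulting upper bound on the number of or-branch subproblems. The tuple-based term encoding and the example terms are invented for illustration; the bound expression follows the formula quoted above.

def level(term):
    # A term is either a leaf (constant/variable, depth 0) or a tuple
    # (operator, [argument terms]); depth grows by one per operator level.
    if isinstance(term, str):
        return 0
    _, args = term
    return 1 + max(level(a) for a in args)

def or_branch_bound(alpha, beta):
    # Upper bound quoted in the text: min{level(alpha), level(beta)} * 2**min{...}.
    m = min(level(alpha), level(beta))
    return m * 2 ** m

t1 = ("+", ["a", ("*", ["b", "c"])])   # level 2
t2 = ("+", ["x", "y"])                 # level 1
print(level(t1), level(t2), or_branch_bound(t1, t2))  # 2 1 2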
Implementation
This section presents an implementation of the matching algorithm. A prototype system for
facilitating software reuse has been implemented in the Quintus ProWindows language, a dialect of
Prolog that supports the object-oriented organization of graphical elements. Our system provides
the functions of constructing the hierarchical library [3], retrieving the existing components that
have a logic-based generality relationship with the query component [5], and assisting users in the
modification of more general and analogous existing components to satisfy a query specification [6].
Let the existing specification Old Spec be the specification of the Stack class given in Figure 2.
A query specification Query Spec, the specification of DoubleList class, is given in Figure 6.
In addition to the constructor and destructor operations, this component defines four methods:
addAtHead, addAtTail, detachAtHead, and detachAtTail. For the purpose of conciseness, we
only consider the relationship between two methods Stack::push and DoubleList::addAtTail.
In order to find an analogous existing component based on the query specification for DoubleList,
we apply our matching algorithm to the methods of DoubleList. Figure 7 shows the results of the
application of the matching algorithm to the method DoubleList::addAtTail and the method
Stack::push.
The left part of Figure 7 displays the two-tiered hierarchy of a group of components described
by formal specifications. The Compute Analogies window displays the query and existing
components and methods, respectively. The Candidate Analogies window displays the matches
found by the Match Expr matching algorithm. The matches are helpful in terms of modifying the
existing components for reuse because the users may discover inherent similarities between two
components that have no logical relationships that can be found by automated reasoning. Given
these candidate matches, the user can reuse or redesign the query component. In this example, the
system, using Match Expr, suggests several matches that may be useful in the modification process.
For example, the result suggests that, in order to satisfy the query specification, the input object
should be changed from stack to dbllist and the new element should be added at the tail of
dbllist instead of the top of stack.
4 Program Modification Model
We regard the modification process as a problem solving process, where Figure 8 contains a
framework for modifying components based on analogies found between two formal specifications.
The "problems" in this process are defined by the specifications that represent reusable software
components and the "solutions" become the executable implementations of the corresponding specifications.

component DoubleList {
   method DoubleList( ... )
      modifies dbllist: DoubleList;
      ensures ... ;
   method destroy( dbllist ... )
      modifies dbllist;
      ensures trashed(dbllist);
   method addAtHead( ... )
      modifies dbllist;
      ensures head(dbllist', element) ... ;
   method addAtTail( ... )
      modifies dbllist;
      ensures tail(dbllist', element) ... ;
   method detachAtHead( ... )
      requires head(dbllist, element);
      modifies dbllist;
      ensures ... ;
   method detachAtTail( ... )
      requires tail(dbllist, element);
      modifies dbllist;
      ensures ... ;
}

Figure 6. Component specification of DoubleList.

Figure 7. An implementation of the matching process.
Because formal specifications are used as an indexing mechanism, more of the traditional tasks such as classification, retrieval, search, and program modification are amenable to automated reasoning techniques, which should greatly facilitate scalability when compared to keyword-based or manual approaches to software reuse.
The objective of software development in our context is to "solve" the problem defined by
specification Query Spec by finding an appropriate implementation. Old Spec is referred to as
a candidate specification whose implementation Old Program is known. An analogical matching
process is used to guide the modification process, to be performed by the software developer. A
set of analogical matches found between the two specifications is used as the basis for potential
changes to the existing specification and corresponding software component.
The satisfies relationship defines the relationship between the specification and implementation
modules. Given the specification of two methods, Old Spec and Query Spec, assume Old Spec is a
specification of the method of a given component in the library and Query Spec is the specification of a method of a query component.

Figure 8. Analogical Reuse Modification Process.
Let Old Program be an implementation that satisfies Old Spec.
Then the suggested steps of modifying an existing program Old Program to a new implementation
Query Program based on the analogical matches between Old Spec and Query Spec are as given
in Figure 9.
For the method specification Old Spec in an existing component that is analogous to some
method specification Query Spec in a query component, a set of analogical matches Θ can be found by the analogical matching process, which is presented in Section 3. According to Θ, the
existing implementation Old Program can be modified to be program Old Program'. In this step,
we can also rewrite any unexecutable statements in the modified program Old Program' (e.g., those caused by type incompatibility). Finally, Old Program' and Query Spec are supplied to a program synthesizer,
which can be either a semi-automated program synthesis system [12, 23] or a programmer using a
formal approach for program derivation [17, 24, 25, 26], in order to obtain a new implementation
Query Program that satisfies Query Spec.
Input: Old Spec, Query Spec, and Old Program.
Output: Query Program.
Procedure:
(1) Analogical Matching: find Θ, which is a set of analogical matches between
Old Spec and Query Spec.
(2) Program Replacement: refine Old Program to Old Program' according to Θ.
(3) Program Adaptation: synthesize new segments for Old Program' to obtain/derive
Query Program that satisfies Query Spec.
Figure 9. Modification process based on analogy.
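A minimal sketch of how the three ARMP steps could be orchestrated is given below; the callables analogical_matching, program_replacement, and program_adaptation are hypothetical placeholders standing in for the matching algorithm of Section 3, a rewriting step, and an external synthesizer or programmer, respectively.

def armp(old_spec, query_spec, old_program,
         analogical_matching, program_replacement, program_adaptation):
    """Skeleton of the Analogical Reuse Modification Process (Figure 9).

    The three callables are supplied by the environment; this driver only
    fixes the order of the steps and threads the intermediate results through.
    """
    # (1) Analogical Matching: find the match set Theta between the two specs.
    theta = analogical_matching(old_spec, query_spec)
    # (2) Program Replacement: rewrite Old_Program according to Theta.
    old_program_prime = program_replacement(old_program, theta)
    # (3) Program Adaptation: synthesize the remaining pieces so that the
    #     result satisfies Query_Spec.
    return program_adaptation(old_program_prime, query_spec)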
If an analogical match between the candidate and query specifications can be found, then the
effort required to develop the appropriate implementation (Query Program) will be significantly
reduced since the needed changes at the specification level are clearly determined before any source
code is edited. We call this approach to the modification of an existing program based on an
analogical match between two specifications, the Analogical Reuse Modification Process (ARMP).
The problem of finding promising candidate specifications for some query specification from large
knowledge bases of known and implemented specifications is referred to as the base filtering
problem [27]. In our reuse system, a retrieval scheme based on the similarities among reusable
components finds a set of candidate specifications that are similar to the query specification. Our
retrieval process augments the ARMP model in the form of a pre-processing phase.
It is emphasized that because the reuse is fine-grained, that is, at the component level, a limited
amount of domain knowledge is needed in order to apply analogical reasoning. Furthermore, in comparison to current techniques for reuse, which are largely based on keyword searches, specification-based reuse can be at least as effective and, in addition, is more amenable to automated reasoning.
5 A Modification Example
Program modification is a combination of analogy, transformation, synthesis, and verification. In
this section, we give an example of program modification based on analogy. We show the matching
process plays an important role in the modification process. Consider the following specification of
a square root program, whose precondition Q1 and postcondition R1 state the following: we are
given two numbers a and e and the desired result is an approximation r, which is a real number, to
the square root of a with a tolerance value e. Assume we have an existing program that performs
real division as follows:
begin
   q := 0; s := 1;
   while s > e do
      s := s/2;
      if d * (q + s) ≤ c then q := q + s fi
   od.
end.
Then we apply the matching process (Algorithm Match Expr) to the pair of postconditions of
these two programs (case 1), i.e., R1 and R2, where R2 is the postcondition of the division program. If there is the transformation rule √a = √a / 1, then a match can be found as follows (using Algorithm Match Term, case 6): {/ ↦ /, √a ↦ c, 1 ↦ d, r ↦ q}.
The match is then applied to the real division program and it becomes
begin
   r := 0; s := 1;
   while s > e do
      s := s/2;
      if 1 * (r + s) ≤ √a then r := r + s fi
   od.
end.
However, the test 1 * (r + s) ≤ √a contains the expression √a, which is the objective of the implemented program, so we need to rewrite the statement 1 * (r + s) ≤ √a as (r + s) * (r + s) ≤ a, which preserves the semantics but eliminates √a from the program. (Squaring, addition, and comparison are regarded as elementary operations.) We obtain the following program:
begin
   r := 0; s := 1;
   while s > e do
      s := s/2;
      if (r + s) * (r + s) ≤ a then r := r + s fi
   od.
end.
In this algorithm, the result r falls in the range [0, s), i.e., 0 ≤ r < s, where s is its initial value. However, with s initialized to 1 we have 0 ≤ r < 1, so once a ≥ 1 the program will never find the desired answer. This problem can be solved by replacing the initialization command s := 1 by s := a + 1, because the square root of a is bounded from above by a + 1. Consequently, the desired program becomes
begin
   r := 0; s := a + 1;
   while s > e do
      s := s/2;
      if (r + s) * (r + s) ≤ a then r := r + s fi
   od.
end.
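As a quick sanity check, the derived algorithm can be transliterated directly into Python; the loop below follows the final program above, and the test values and the assertion are merely an illustration added here, not part of the original development.

import math

def approx_sqrt(a, e):
    # Binary refinement: maintain r <= sqrt(a) < r + s and halve s until s <= e.
    r, s = 0.0, a + 1.0
    while s > e:
        s = s / 2.0
        if (r + s) * (r + s) <= a:
            r = r + s
    return r

for a in (0.25, 2.0, 9.0, 144.0):
    r = approx_sqrt(a, 1e-6)
    # The answer should be within the tolerance e of the true square root.
    assert abs(math.sqrt(a) - r) < 1e-6 + 1e-9
    print(a, r)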
Despite the simplicity of this example, the potential benefit of programming by modification is apparent. In the example of this section, the programmer can save programming effort by reusing the modified program instead of having to program everything from scratch. Once an analogical match is found, the programmer has to develop only those parts of the program that cannot be reused from the old one, which, hopefully, requires much less effort than generating the entire program.
The reuse framework has been applied to the development of graphical user interfaces
based on the specification of existing graphical components [6]. Formal specifications for Motif
widgets were constructed and, by applying the classification scheme, organized into a hierarchical structure. Then the user query was structured in terms of high-level, implementation-independent specifications. At the specification level, the user is not concerned with setting specific attribute values for a given widget or how widgets can be combined to achieve a given behavior. Providing a means for a user to classify, browse, and retrieve graphical components based on behavior or functionality enabled users to focus on the high-level requirements of the graphical interface. The
search mechanism determined whether a given component should be modified or combined with
other components to satisfy a query specification.
6 Related Work
This section contains descriptions of software reuse projects that use techniques similar to those
presented in the paper. Two major categories of reuse techniques are described. First, those
techniques that involve the use of analogy are discussed. Next, techniques that calculate similarity
between software components based on some representation of software are described.
6.1 Analogies between Specifications for Software Reuse
Dershowitz [28] suggested the formulation of a program by using analogies as a basic tool in program
abstraction. An analogy is first sought between the specifications of the given programs; this process
yields an abstract specification that may be instantiated to any of the given concrete specifications.
The analogy is then used as a basis for transforming the existing program into abstract schemas
helping to complete the analogy. A given concrete specification of a new problem may then be
compared with the abstract specification of the schema to suggest an instantiation of the schema
that yields a correct program.
The ROSE-2 project [29] is based on the knowledge-based refinement paradigm, which is a
software development process in which user-supplied requirements are used to select and customize
a high-level design. The paradigm is supported by a knowledge base of high-level design abstractions
called design schemas and refinement rules. The schemas and rules are used to customize the user's
designs to satisfy the user's requirements and design decisions.
Bhansali [30] describes the derivation of a concrete program from a semi-formal specification of a
problem. He used a transformational approach based on a set of transformational rules that produce
a top-down decomposition of a problem statement down to the level of target language primitives.
The top-down decomposition process combines ideas from research in planning to generate programs
efficiently. The reuse of domain specific knowledge is emphasized in this approach. APU is a system
that uses the proposed paradigm to synthesize UNIX programs (shell scripts) from semi-formal
specifications of programs.
Maiden and Sutcliffe [31] investigated the potential of specification reuse by analogy and its
possible benefits for requirements analysis. They have developed two real-world examples to
determine the potential for specification reuse by analogy. The first example illustrates an analogy
between an air-traffic controller (ATC) and a flexible manufacturing system (FMS). The second
example identifies analogies between the ATC and a classroom administration system (CAS), and
the FMS and the CAS. They propose a software engineering analogy model based upon three types
of knowledge: solution knowledge, domain knowledge, and goal knowledge.
Lung and Urban [32] have proposed an analogy model for software reuse. In addition to the constraints proposed by Maiden and Sutcliffe, they added constraints to handle software analysis information, owing to the complexity of software systems. They have also proposed an analogy-based domain analysis method that can support high-level reuse across domains. The purpose is to help users better understand a domain and support potential future reuse in a different domain.
CAReT is an analogy-based retrieval system applied to software design reuse, where design
cases (actual designs) and design schemas (templates) are available for reuse. The knowledge base
for CAReT consists of background knowledge and a design library. The background knowledge
contains a basic object lattice, a data type lattice, and isa-part-of hierarchies for composite types.
The design library consists of a set of domain dictionaries (one per design family) and bookkeeping
information that facilitate retrieval. A set of design schemas (templates) exists for an application
domain, where a domain-specific design schema corresponds to a design family and more specialized
schemas correspond to design sub-families. CAReT uses a two-phased approach to retrieval. First,
the description of the query design case is used to determine if it belongs in one of the design
families, where the dictionary entry for that design family provides the similar design cases or the
corresponding design schema. If that search fails, then analogy-based search is pursued. Object-type
and data-type lattices are used to establish relationships between the query design and designs
in the library. A similarity value is calculated between the new design case and those retrieved
from the design library.
A computational model of similarity has been developed to support the software reuse based on
analogy [33]. The Telos language [34] is used to describe similarity between different artifacts (code,
design or requirement specifications). The language is a structurally object oriented data model
with multiple and meta classification, multiple generalization and typed attribution. Abstractions
used by the model are classification, generalization/specialization, and attribution. There are four
basic categories of Telos objects used to calculate similarity (entity tokens, attribute tokens, entity
classes, and attribute classes). The model is based on similarity and distance functions. Distances
are calculated between objects (classes) with respect to identification, classification (hierarchies),
generalization, and attribution.
6.2 Similarity-Based Techniques for Software Reuse
A number of projects have used case-based reasoning (CBR)-like techniques to facilitate software
reuse. In general, CBR has five general characteristics [35]. First, CBR attempts to recall old
cases to help solve new problems. Second, CBR involves understanding a new situation in terms of old cases, where more old cases may become applicable as the problem becomes better understood. Third, CBR involves adapting old cases to fit new needs. Fourth, during the process of evaluating and adapting old cases, there is potentially a process of "learning" to solve a given problem in a new or novel way,
information that can be used in the future. Finally, there is a need to be able to integrate the new
experience into memory properly. A hierarchy of cases is built in a bottom-up fashion from existing
cases in order to facilitate recall and retrieval [35]. Cases are clustered or generalized according to
a set of common features. Types of indices used for cases include goals, constraints, and feature
combinations that describe how a given problem can be solved.
The remainder of this section overviews several projects that use CBR-like techniques for
addressing the problem of software reuse. Based on the major characteristics of CBR systems,
the reuse framework shares similar objectives, but the means to achieving the goals differ. The
major difference is that the formal specifications of the software components are used as the
means for classification, retrieval, and adaptation of the software, where automated reasoning
applied to the logical and analogical relationships between the specifications is used to determine
the set of candidate components for reuse. Also, the relationships between the specifications are
used to determine the type of adaptation needed to modify an existing component to satisfy the
requirements of a new component.
The AI-Reuse System (AIRS) [37] supports the browsing of a software library for components
that meet a user-specified requirement. The representation scheme is similar to a frame-based
system, and the search mechanism is based on similarity computations, much like those used for
case-based reasoning systems [35]. A component is defined by a set of (feature,term) pairs. A
feature represents information with respect to a given classification scheme and is defined by a
set of related terms. Candidate reuse components are retrieved based on a degree of similarity
between target and source descriptions. Similarity is described in terms of a distance factor, which
is proportional to the amount of effort needed to compose or modify the existing components
to satisfy the target component. The effort calculation is based on information obtained from
experienced software developers and domain specialists.
Prieto-Diaz [38] developed a faceted classification scheme to support the storage and retrieval
of reusable software components, where facets refer to important keywords obtained from program
descriptions and documentation. This approach makes use of a faceted scheme, thesaurus, and a
conceptual distance graph. Each software component has an associated descriptor that consists of
ordered terms for each facet. The thesaurus is used to help refine the definition of the component
and provide context information. The conceptual distance graph provides a means to measure
the similarity between facet terms, which is used in turn to evaluate the similarity between
required software specifications and available components [37]. The faceted approach requires
domain analysis in constructing the conceptual graph. Conceptual distances are assigned based on
experience, intuition, and common sense.
LaSSIE (Large Software System Information Environment) [39] uses a semantic-net based
approach to provide a structured representation of knowledge that can be reasoned with respect to
its semantic information. A knowledge-base is used to store different types of information about
large, complex software systems, focusing on the programmer's view. A semantic-based search
algorithm using logical inferencing is used to retrieve information about large software components.
A frame-based language is used to represent classes of objects and their actions. Domain analysis
is used to extract descriptions about objects and actions based on information gained from reading
large volumes of architecture documents and comments in source code files. A frame definition
contains super-frames (more general classes) and then a set of restrictions on the parent frames
to create the more specialized object. Slots are used to contain constraints or restrictions on the
frames. Using this type of representation, a hierarchy of frames is created. Information in the
knowledge-base is application-specific in order to achieve invisibility across the different types of
software artifacts (source code, documentation, error reports, etc.)
Case Assisted Reuse of Object Library (CAROL) [40] supports the reuse of class descriptions in
Object-Oriented programming. CAROL computes the similarity between existing class descriptions
and a target class specification. The most similar descriptions are returned. A case consists of a set
of Prolog facts that describe the class in terms of its name, attributes, and relationships. Classes
are stored in case format in terms of class names, class type, instance variables and others. Class
methods are classified according to the type of processing performed on the variables in the method,
such as modifying an instance variable, checking an instance variable, or returning the variable.
An attribute oriented approach is used to search for reusable classes. Attributes include the class
name (taking into consideration synonyms), position in the class hierarchy, or attribute importance
based on user-defined weights. Users specify a target class with the assistance of templates.
ReqColl (Requirements Collector) [41] is a tool that facilitates the requirements capture and
analysis processes. Conceptual graphs (CG) [42] are used to capture domain information. A graph
matching algorithm is used to determine whether an existing CG matches the CG for a new problem
description. In ReqColl, the nodes are either concepts or relations from the problem description,
where relations define how concepts are related to one another. Concepts may either be objects or
actions from the problem domain. Both concepts and relations have associated types, and thus,
have respective type hierarchies defined by the isa relationship. Directed arcs are used to connect
concepts and relations. ReqColl stores patterns of CGs for specific application domains. After the
user describes the problem in terms of a CG, the ReqColl invokes a matching process that determines
whether the new CG matches a CG stored in the system. The graph matching algorithm uses a
recursive approach to calculate similarities between pairs of arcs and the subarcs of the respective
CGs. Heuristics are used as means to attempt to reduce the number of computations necessary to
find the best possible permutation of arc pairs. In the case that the two CGs do not have the same
number of arcs, the matching algorithm has a constant weight factor to account for all unmatched
arcs. Depending on the amount of similarity between the stored CG and newly user-created CG,
the user may wish to add information to the new CG to make it more closely fit the stored CG or
pursue the requirements analysis process without the use of an existing CG.
7 Conclusion
This paper described an approach for applying analogical reasoning to reusing software components
that are described by formal specifications. Our studies have demonstrated that analogical
matching of specifications can be an effective means of software reuse. Our investigations also
show that supporting the understanding of candidate analogies is an important factor of successful
specification-level reuse. The reuse framework has been applied to the development of graphical
user interfaces (GUIs) from existing graphical user interface components [6].
In general, an ARMP needs didactic support for comprehension of candidate specifications,
which requires an explanation facility to help the software developer understand the target domain
and the base specifications. Since an automated ARMP is unlikely to achieve a perfect match,
explanation from systems or domain experts will also be necessary for evaluating the appropriate
target specifications.
Currently, we are investigating software reuse and program adaptation when existing
specifications are more general or abstract than the query specification [43]. In future investigations,
more sophisticated knowledge will be incorporated into the evaluation function in order to increase
the number of analogy candidates retrieved for a query specification. We are also investigating
the specification of design-level descriptions of systems in order to perform a more coarse-grained
determination of reuse. At this level, it is envisioned that more domain-specific information will
be incorporated in determining reuse. In order to facilitate the construction of specifications of
reusable components, complementary investigations are being pursued in reverse engineering in
order to abstract formal specifications from existing software [44, 45, 46] and in the area of formal
construction of specifications from requirements models [47, 48].
Acknowledgements
The authors are very grateful for the detailed comments given by the anonymous reviewers, which have helped to greatly improve the presentation of the paper. Also, the authors greatly appreciate
the assistance provided by David Leake, George Spanoudakis, and Igor Jurisica.
--R
"Identifying and Qualifying Reusable Software Components,"
"Reuse software: Issues and research directions,"
"Using Formal Methods to Construct a Software Component Library,"
"Formal methods applied to reuse,"
"Using Automated Reasoning to Determine Software Reuse,"
Applying Formal Methods to Software Reuse.
"A Paradigm for Reasoning by Analogy,"
"Computational Approaches to Analogical Reasoning: A Comparative Analysis,"
"Reusability through program transformations,"
"Automating the transformational development of software,"
"Reusing Software Developments,"
"KIDS: A semiautomatic program development system,"
The Evolution of Programs.
"An axiomatic basis for computer programming,"
A Discipline of Programming.
"Applying formal methods in automated software development,"
"Order-sorted algebra I: equational deduction for multiple inheritance, overloading, exceptions and partial operations,"
"Semantics of order-sorted specifications,"
"An order-sorted logic for knowledge representation systems,"
Automated Acquisition and Refinement of Reusable Software Design Components.
"Reusing analogous components,"
Synthesis of Procedural and Data Abstractions.
System Software Development Using VDM.
The Science of Programming.
"Fundamentals of deductive program synthesis,"
"Mechanisms of Analogical Reasoning,"
"Program Abstraction and Instantiation,"
"The ROSE-2 Strategies for Supporting High Level Software Design Reuse,"
"Domain-Based Program Synthesis Using Planning and Derivational Analogy,"
"Exploiting Reusable Specifications Through Analogy,"
"Analogical Approach for Software Reuse,"
"Similarity for analogical software reuse: A computational model,"
"Telos: Representing knowledge about information systems,"
Morgan Kaufman
"Representation and management issues for case-based reasoning systems,"
"Computing similarity in a reuse library system: An AI-based approach,"
"Implementing faceted classification for software reuse,"
"LaSSIE: a knowledge-based software information system,"
"Application of case-based reasoning (cbr) to software reuse,"
"Matching conceptual graphs as an aid to requirements re-use,"
information processing in mind and machine.
"A formal approach to reusing more general components,"
"Constructing formal specifications from program code,"
"A two-phase approach to reverse engineering using formal methods,"
"Strongest postcondition as the basis to reverse engineering,"
"A graphical environment for formally developing object-oriented software,"
"A formal semantics of object models,"
--TR
--CTR
Guifa Teng , Xiaodong Liu, Support software evolution with abstration rules and programming knowledge patterns, Focus on computational neurobiology, Nova Science Publishers, Inc., Commack, NY, 2004 | program modification;analogical reasoning;software reuse;formal methods |
627843 | Efficient Bulk-Loading of Gridfiles. | AbstractThis paper considers the problem of bulk-loading large data sets for the gridfile multiattribute indexing technique. We propose a rectilinear partitioning algorithm that heuristically seeks to minimize the size of the gridfile needed to ensure no bucket overflows. Empirical studies on both synthetic data sets and on data sets drawn from computational fluid dynamics applications demonstrate that our algorithm is very efficient, and is able to handle large data sets. In addition, we present an algorithm for bulk-loading data sets too large to fit in main memory. Utilizing a sort of the entire data set it creates a gridfile without incurring any overflows. | Introduction
We are developing a scientific database to support retrieval of subsets of Computational Fluid Dynamics
(CFD) data sets. Retrieval of subsets is required for visualization and data exploration. All of our data
is two or three-dimensional and thus requires multiattribute indexing. We are specifically interested in
partially qualified, fully qualified, and point queries. Gridfiles are a well known multi-attribute indexing
technique [5]. The basic idea is to partition each attribute range into subranges, thereby inducing a
multi-dimensional rectilinear partitioning on the entire multi-attribute space. Enough partitions are
chosen to ensure that all tuples sharing the same subrange in each dimension will fit on a disk page.
Any point query can be then be satisfied with two disk accesses, one to fetch a pointer to the data page,
and one to fetch the data page itself.
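To fix ideas, the following sketch shows how a two-dimensional grid directory resolves a point query: a search over each dimension's partition boundaries yields the directory cell, whose entry is (conceptually) the address of the data page. The in-memory lists stand in for the linear scales and directory of a real gridfile, which would keep the directory and buckets on disk so that only the final two page reads touch secondary storage; the boundary convention (a cut value is the inclusive upper end of its subrange) is an assumption made here.

from bisect import bisect_left

class GridDirectory:
    def __init__(self, x_cuts, y_cuts, directory):
        # x_cuts/y_cuts are the sorted partition boundaries (the "linear scales");
        # directory[i][j] holds the bucket (page) id for cell (i, j).
        self.x_cuts = x_cuts
        self.y_cuts = y_cuts
        self.directory = directory

    def cell(self, x, y):
        # bisect_left counts the cut values strictly below the coordinate,
        # which is the index of the subrange when each cut value is the
        # inclusive upper boundary of its subrange.
        return bisect_left(self.x_cuts, x), bisect_left(self.y_cuts, y)

    def lookup(self, x, y, read_page):
        i, j = self.cell(x, y)
        page_id = self.directory[i][j]   # first access: the directory entry
        return read_page(page_id)        # second access: the data page itself

# Tiny example with cuts at x = 2, 7 and y = 2, 6 (three subranges per dimension).
gd = GridDirectory([2, 7], [2, 6], [[0, 1, 2], [3, 4, 5], [6, 7, 8]])
print(gd.cell(5, 3))   # -> (1, 1): the middle cell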
The data we wish to store is contained in files created by CFD simulations. Both the size of the data
sets and anticipated extensive use of the data sets require that we provide fast organization of new data,
and fast retrieval of existing data. Our two-dimensional data is typically a large set of tuples of the form (x, y, q[]), where x and y are coordinates and q[] holds the associated physical quantities.
Current data sets measure only tens of megabytes, but are projected to be 2-3 orders of magnitude
larger soon. Although we are specifically concerned with CFD data sets, large physically oriented data
sets are common outputs to a wide spectrum of scientific computations.
In this paper we show how to quickly load entire data files into a gridfile indexing structure. This
is termed bulk loading. Note similar functionality is required from relational databases for reloading
relations when changing platforms, during recovery, or during reorganization. In a relational database
the relation is analogous to the data set in our work.
The main contributions of this paper are:
1. A partitioning algorithm which requires two to four orders of magnitude less CPU time than the only known algorithm for partitioning data into gridfile blocks. We provide experimental results for our partitioning algorithm.
2. An efficient algorithm to aggregate under-utilized logical grid-buckets to achieve better disk utilization. We provide experimental results which demonstrate the utility of the aggregation phase.
3. A complete algorithm for bulk-loading of large data sets (significantly larger than main memory)
that guarantees no bucket overflows.
The rest of this paper is organized as follows: In the next section we relate our work to prior
efforts. In section 3 we present the general problem in more detail and provide an example. In section
4 we present the existing partitioning algorithm, our new algorithm, and our aggregation algorithm. In
section 5 we experimentally compare the execution times of the two algorithms, on a variety of data
sets including highly skewed CFD data sets. We also demonstrate the effectiveness of our aggregation
technique. In section 6 we present our two phase bulk-loading algorithm. We end with our conclusions
and plans for future work.
Previous Work
Bulk-loading of B trees [6] has been investigated, but only recently have bulk-loaded grid files been
considered. The single paper on this of which we are aware is that of Li, Rotem, and Srivastava [2].
Their main emphasis is bulk-loading of Parallel Grid Files, i.e. grid files that are distributed across
multiple sites in a shared nothing environment. They define logical partitioning as that of the gridfile
among the sites in the database system, and physical partitioning as that of the portion of a gridfile
located at one site, into the buckets that compose that portion of the gridfile. Their solution is based on
dynamic programming, for both the logical partitioning and physical partitioning of parallel gridfiles.
For physical partitioning their objective function is to minimize bucket overflow. We are concerned only
with physical partitioning at a single site, although a modified version of our algorithm could be used
for logical partitioning. The Li et al. algorithm optimally partitions one dimension, given a specific
number of partitions and a fixed partitioning in the other dimension (which is likely equally spaced,
but details on this fixed partition are lacking in the Li et al. paper). Our algorithm dynamically finds
the number of partitions, finds a partitioning much more quickly, and directly addresses the issue of
selecting the fixed partition. For uniformly distributed data it may be sufficient to assume an equally
spaced partitioning, but this is not the case when data is skewed.
We show that the dynamic programming approach is too inefficient to be considered for large
grid files. Li et al. recognize this problem themselves, and suggest sampling [7, 8] to accelerate their
algorithm. However, sampling may introduce overflows, the handling of which may be significant. For
each bucket that overflows an additional bucket must be created and the grid directory split. If the
number of overflows within a bucket is larger than the bucket capacity, multiple new buckets will need
to be created and the grid directory will be split multiple times. The earlier work inadequately assesses
the risks of sampling, focusing as it does on the probability that some block overflows rather than, say,
the average number of blocks which overflow and the average total number of overflow tuples.
For the problem specification given in Li et al. , i.e. given a fixed partitioning and fixed number
of partitions, the dynamic programming formulation is an excellent approach, but we propose that it
is better to reformulate the problem and find the smallest number of partitions for which the total
overflow is zero. The freedom introduced by allowing an arbitrary number of partitions enables us to
use a fast heuristic algorithm instead of an expensive dynamic programming algorithm. The possibly
larger number of buckets resulting from an increased number of partitions is reduced via a low cost
aggregation algorithm. Thus, our partitioning algorithm is capable of handling much larger grid files
and still guarantee no overflows while achieving good bucket utilization, although if the data set is too
large to fit into main memory the data must first be sorted. Furthermore, we consider more extensive
data sets than the earlier work, to better understand the effects of positionally skewed and clustered
data which is typical of CFD data sets.
Our partitioning algorithm is a modification of the rectilinear partitioning algorithm developed
by Nicol[4] for the purposes of load-balancing irregular data-parallel computations. The two principle
differences between our algorithm and this earlier one are that the number of subranges in each dimension
are not considered fixed in the present context, and that there is an upper limit on the number of tuples
in a bucket.
3 General Problem Description
Before considering algorithmic issues, let us first examine the general problem. Our exposition is of the
two-dimensional case; all the algorithms generalize immediately to higher dimensions. We also assume
that each attributed range is partitioned into the same number of subranges. This is not rigorously
necessary, but we have not addressed how one would choose the desired relationship between number
of subranges in each dimension.
Let S be a set of tuples (a 1 ; a 2 ; q[]) where attributes a 1 and a 2 are the indexed attributes and q[] is
the rest of the tuple. In our specific data sets a 1 and a 2 are x and y coordinates, and q[] is an array of
3-5 floating point values representing physical quantities such as pressure, density, directional derivative
information, chemical composition, and so on. For ease of exposition assume the domain of both a 1 and
a 2 are integers in [1, n]; the algorithms extend in a straightforward fashion to real-valued attributes and generalized ranges. The empirical results we report are based on these extensions. Let F be an n × n frequency matrix which for each entry contains the number of tuples with that coordinate, i.e., F[i][j] = |{(a 1, a 2, q[]) ∈ S : a 1 = i and a 2 = j}|.
We use the following notation:
T = the number of tuples in data set S,
P = the number of partitions in each dimension,
B = the maximum number of tuples a bucket can hold,
Ui = the number of unique coordinate values in dimension i,
Ci = the vector of P − 1 cuts in dimension i; specifically, C1 is the vector of horizontal cuts and C2 is the vector of vertical cuts,
[O i,j] = the P × P occupancy matrix resulting from applying the cut vectors C1 and C2 to S,
totalOverflow = Σ i,j max(0, O i,j − B).
We seek a pair (C1, C2) whose total overflow equals zero, and whose number of cuts is minimized.
To make these concepts more intuitive, in the left hand side of figure 1 we have a partitioned data set S with T = 14 tuples, P = 3, and cut vectors C1 = (2, 7) (the horizontal cuts) and C2 = (2, 6) (the vertical cuts). The partitioning (C1, C2) divides the domain of S into 9 bins. Note, the dashed lines of (C1, C2) are slightly offset to clearly show the occupancy of the bins. In this case bin 1 contains points (1,1) and (2,2); bin 2 contains (1,3) and (1,4); bin 3 contains (2,8); bin 4 contains (4,2), (5,1) and (7,2); bin 5 contains (4,3), (5,3), and (7,4); bin 6 contains (3,9); bin 7 is empty; bin 8 contains (9,3); and bin 9 contains (8,8). Thus, the occupancy matrix, [O i,j], is:

   | 2  2  1 |
   | 3  3  1 |
   | 0  1  1 |
Figure 1: Partitioning Example; Left: total overflow equals 2; Right: total overflow equals 0
If we assume B = 2, then the total overflow for this partitioning is 2 because bins 4 and 5 each contain 3 points. If we move the position of the second cut of C1 to position 6, i.e. let C1 = (2, 6), as shown in the right hand side of figure 1, then the total overflow would be zero.
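This bookkeeping is easy to reproduce in a few lines of code. The sketch below tabulates the occupancy matrix and the total overflow for the example points, under the convention (assumed here) that a cut value is the inclusive upper boundary of its subrange; the function and parameter names are ours.

from bisect import bisect_left

def occupancy_and_overflow(points, c1, c2, bucket_capacity):
    """Return the P x P occupancy matrix O and the total overflow.

    c1 holds the cuts on the first coordinate, c2 the cuts on the second;
    a point whose coordinate equals a cut value belongs to the lower subrange.
    """
    p = len(c1) + 1
    occ = [[0] * p for _ in range(p)]
    for x, y in points:
        occ[bisect_left(c1, x)][bisect_left(c2, y)] += 1
    overflow = sum(max(0, o - bucket_capacity) for row in occ for o in row)
    return occ, overflow

points = [(1, 1), (2, 2), (1, 3), (1, 4), (2, 8), (4, 2), (5, 1), (7, 2),
          (4, 3), (5, 3), (7, 4), (3, 9), (9, 3), (8, 8)]

print(occupancy_and_overflow(points, [2, 7], [2, 6], 2))  # total overflow 2
print(occupancy_and_overflow(points, [2, 6], [2, 6], 2))  # total overflow 0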
4 Algorithm Descriptions
We now describe the algorithm of Li et. al., and our own. Our implementation of the earlier algorithm
is presented in Section 4.1 in some detail. We provide the detail because it is lacking in the Li et al. paper,
and we wish to show that we've made every effort to optimize the performance of their dynamic programming
solution. Section 4.2 gives our own partitioning algorithm, while 4.3 describes our method
for aggregating under-utilized buckets.
4.1 Dynamic Programming Solution
The dynamic programming equation to be described is precisely the one given in Li et al. [2]. We reword
that formulation and describe specifics of an optimized algorithm for solving that equation.
It is assumed that S is already partitioned in the horizontal dimension, i.e. C 1 is fixed. Our task
is to find a vector C2 that minimizes the total overflow. Let R(i, j) be the n × (j − i + 1) submatrix of F obtained by restricting a 2 to i ≤ a 2 ≤ j. Now consider the column of bins resulting from partitioning R(i, j) horizontally by C1. Let OV1(i, j) be the sum, over each member of this column, of the bin overflow. For example, with B = 2 and the matrix on the left of figure 1, OV1(2, 3) = 2, since the middle bin has 4 tuples, and no overflow is observed in the other two bins. To reduce overflows we might consider partitioning R(i, j) vertically with l − 1 cuts, thereby creating a P × l submatrix of bins with an attendant total overflow value. There may be many ways of partitioning columns i through j of R(i, j) into l vertical groups; let TOV1(i, j, l) be the minimum possible total overflow cost among all these possibilities. The principle of optimality [1] then asserts that

   TOV1(1, j, l) = min { TOV1(1, k, l − 1) + OV1(k + 1, j) : l − 1 ≤ k ≤ j − 1 },     (1)

with the base case TOV1(1, j, 1) = OV1(1, j). Of particular interest is the value TOV1(1, n, P) and the partition that achieves this cost.
Solution of this equation is aided by precomputing values from which each OV 1 (i; j) can be derived
in O(P) time, as follows. C1 partitions F into P submatrices, S1, ..., SP. For each Sk and column index j, define rk(1, j) to be the sum of entries in Sk between column indices 1 and j, inclusive. Then, for any pair of column indices i and j we have rk(i, j) = rk(1, j) − rk(1, i − 1). Since each rk(i, j) is computed with a single subtraction, OV1(i, j) is computed in O(P) time. The set of all rk(1, j) values can be computed in time proportional to n log(P). With only slightly more computation (a sort in each dimension) we can accommodate tuple sets that are sparse relative to n × n. We project the data set onto each coordinate axis and sort it, essentially working with a T × T array containing only T non-zeros. The indices we describe in this paper may be thought of as the ordinal positions of the data projections on each axis. We take advantage of the sparse structure and still compute all rk(1, j) values in time proportional to T log(P).
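This precomputation amounts to per-slice prefix sums, as the following sketch illustrates; the dense column indexing and the names build_prefix_sums and ov1 are simplifications chosen here for clarity.

from bisect import bisect_left

def build_prefix_sums(points, c1, num_columns):
    """rk[k][j] = number of tuples in C1-slice k with second coordinate <= j."""
    p = len(c1) + 1
    rk = [[0] * (num_columns + 1) for _ in range(p)]
    for x, y in points:
        rk[bisect_left(c1, x)][y] += 1
    for k in range(p):
        for j in range(1, num_columns + 1):
            rk[k][j] += rk[k][j - 1]
    return rk

def ov1(rk, i, j, bucket_capacity):
    # Overflow of the column of bins spanning second-coordinate values i..j:
    # one subtraction per slice, hence O(P) time overall.
    return sum(max(0, (row[j] - row[i - 1]) - bucket_capacity) for row in rk)

points = [(1, 1), (2, 2), (1, 3), (1, 4), (2, 8), (4, 2), (5, 1), (7, 2),
          (4, 3), (5, 3), (7, 4), (3, 9), (9, 3), (8, 8)]
rk = build_prefix_sums(points, c1=[2, 7], num_columns=9)
print(ov1(rk, 2, 3, bucket_capacity=2))   # -> 2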
The dynamic programming equation expresses a recursion in both column index j, and number of
cuts, l. Our approach is to unravel the recursion with j being the inner index, and l the outer one.
Specifically, we start by solving TOV 1 (1; j; 1) for all j; given these we solve TOV 1 (1; j; 2) for all j, and so
on. For l > 1, when solving TOV1(1, j, l) we must make up to j − l + 1 comparisons (actually, we must make one comparison for every non-zero column of F between columns l − 1 and j − 1). If the tuple sets are not sparse relative to n × n, the complexity of the inner loop of the recursion is O(P n^2), and the outer loop is executed P − 1 times, giving a complexity of O(P^2 n^2). In addition, the complexity of the initial precalculation of the rk(1, j) values is O(n log(P)); thus the total complexity is O(P^2 n^2). If the data sets are sparse relative to n × n, then the complexity can be reduced to O(P^2 U2^2 + T log(T)), where U2 is the number of unique attribute values in dimension 2, and the additional T log(T) is for sorting the tuples, which is needed to maintain the sparse representation. In the rest of this paper we will assume the data sets are sparse relative to n × n. Sparse data sets are especially relevant since the coordinates of our unstructured CFD data sets are reals. The asymptotic complexity is O(max{P^2 U2^2, T log(T)}). We will henceforth call this algorithm the DP algorithm.
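Putting the pieces together, a compact, dense, and unoptimized rendering of recurrence (1) is shown below; the function and parameter names (min_total_overflow, num_cuts_plus_1, and so on) are ours, and the prefix-sum helper from the previous sketch is inlined so the block is self-contained.

from bisect import bisect_left

def min_total_overflow(points, c1, num_columns, num_cuts_plus_1, capacity):
    """Solve TOV1(1, j, l) of equation (1) by dynamic programming.

    num_cuts_plus_1 is P, the number of vertical groups; returns TOV1(1, n, P).
    """
    p = len(c1) + 1
    # Prefix sums per C1 slice, so that OV1(i, j) costs O(P).
    rk = [[0] * (num_columns + 1) for _ in range(p)]
    for x, y in points:
        rk[bisect_left(c1, x)][y] += 1
    for k in range(p):
        for j in range(1, num_columns + 1):
            rk[k][j] += rk[k][j - 1]

    def ov1(i, j):
        return sum(max(0, row[j] - row[i - 1] - capacity) for row in rk)

    INF = float("inf")
    n = num_columns
    # tov[j] holds TOV1(1, j, l) for the current l; start with l = 1.
    tov = [ov1(1, j) if j >= 1 else 0 for j in range(n + 1)]
    for l in range(2, num_cuts_plus_1 + 1):
        new = [INF] * (n + 1)
        for j in range(l, n + 1):
            new[j] = min(tov[k] + ov1(k + 1, j) for k in range(l - 1, j))
        tov = new
    return tov[n]

points = [(1, 1), (2, 2), (1, 3), (1, 4), (2, 8), (4, 2), (5, 1), (7, 2),
          (4, 3), (5, 3), (7, 4), (3, 9), (9, 3), (8, 8)]
print(min_total_overflow(points, c1=[2, 7], num_columns=9,
                         num_cuts_plus_1=3, capacity=2))   # -> 1

Note that with C1 frozen at (2, 7) the middle slice holds 7 tuples, so no choice of C2 can avoid at least one overflowing tuple; the DP dutifully returns this minimum of 1, which illustrates why a nonzero optimum for a fixed C1 still leaves work to do.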
The speed of the algorithm can be further increased by precalculating and storing all the values
OV1(i, j), ∀ i, j. The complexity is then O(max{P U2^2, T log(T)}). The precalculation of the OV1(i, j) values takes time proportional to O(P U2^2), and is thus included in that term. This storage
cost can be very significant and hence limits the applicability of this optimization. For example, if U 2 is
5000, the space required for storing the OV 1 (i; j) is 95 megabytes. We will henceforth call this algorithm
the DP2 algorithm.
We have now described how to calculate the optimal overflow cost and partitioning of S given fixed
partitioning C 1 . So far the only difference from our work and that of Li et al. is that we have provided
the details of our implementation of the dynamic programming problem. We now come to the first
contribution of this paper, how to determine the fixed partitionings and how to determine the number
of partitions.
We assume that the number of partitions in each dimension is the same, thus resulting in square
gridfile directories. We presume the existence of an algorithm which, given a fixed set of cuts in one
dimension finds a "good" set of cuts in the other dimension. The paper by Li et al. provides one such,
but neglects to specify the origin of the fixed cut set. We follow Nicol [4] by using such an algorithm
as the basis for an iterative method: Given fixed cuts in one dimension, find good cuts in the other.
Treat the new cuts as fixed, and find better ones in the previously fixed dimension. The iterations are
maintained until some termination mechanism triggers. The initial fixed cut is uniformly spaced. In
the gridfile application of this idea, each application of the cut-finding algorithm attempts to find cuts
that yield zero overflow at all buckets. Termination of such a partitioning session is defined when either
an overflow-free cut-set is discovered, or after some specified number of iterations (we use 20) no such
cut-set is discovered. The sole parameter to a partitioning session is the number of partitions, P , in each
dimension. A partitioning session may be viewed as a probe that determines whether we can quickly
discover an overflow-free partitioning using P − 1 cuts in each dimension. Our overall strategy is to
do an intelligent search on P to find the smallest value for which we can quickly determine a desirable
partitioning.
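The alternating strategy of a partitioning session can be sketched as follows; find_cuts stands for either cut-assignment algorithm (it is assumed to return the best cuts it found for the free dimension together with a flag saying whether they are overflow-free), and the uniform initial cuts and the 20-iteration cap follow the description above. The function and parameter names are ours.

def partitioning_session(points, p, capacity, find_cuts, domain_max, max_iters=20):
    """Probe whether an overflow-free P x P partitioning can be found quickly.

    find_cuts(points, fixed_cuts, fixed_dim, p, capacity) is assumed to return
    a pair (cuts, ok): the best cuts it found for the free dimension and a flag
    telling whether they leave every bucket within capacity.
    """
    step = domain_max / p
    cuts = [[step * k for k in range(1, p)], None]   # uniform start in dimension 0
    fixed_dim = 0
    for _ in range(max_iters):
        free_dim = 1 - fixed_dim
        cuts[free_dim], ok = find_cuts(points, cuts[fixed_dim], fixed_dim, p, capacity)
        if ok:
            return cuts                 # overflow-free partitioning discovered
        fixed_dim = free_dim            # freeze the new cuts, redo the other side
    return None                         # give up; the caller will try a larger P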
Any cut assignment might be used in the approach above. The results we later report use both
the dynamic programming solution of Li et al., and our own algorithm (to be reported) within this
same framework. For skewed data sets it may be advantageous to have the number of partitions in
each dimension differ, but our aggregation phase described later minimizes the poor efficiency of using
square regions. In the future we intend to investigate non-square regions. Given a square region, strict
lower and upper bounds, lowerBound and upperBound, on the number of partitions needed in each dimension can be derived (for instance, at least ⌈√(T/B)⌉ partitions per dimension are needed just to provide enough total bucket capacity). We thus can do a binary search to find the minimal number of partitions P, lowerBound ≤ P ≤
upperBound, for which the total overflow is equal to zero. In practice, we have found it is faster to start with the number of partitions equal to 2 × lowerBound. Then, while the total overflow is greater than
zero keep doubling the number of partitions. Once a partition value has been found for which the total
overflow is zero, conduct a binary search with that value as the upper bound, and the previous value
as the lower bound.
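This search over P can be written as a thin driver around the session probe; session(P) is assumed to behave like the partitioning_session sketch above, returning an overflow-free partitioning or None, and the function name find_smallest_p is ours.

def find_smallest_p(session, lower_bound, upper_bound):
    """Return the smallest P (and its partitioning) for which session(P)
    reports an overflow-free result; session(P) returns that result or None."""
    # Phase 1: start at twice the lower bound and keep doubling while the
    # session fails, remembering the last P that failed.
    last_failed = lower_bound - 1
    p = min(2 * lower_bound, upper_bound)
    result = session(p)
    while result is None:
        if p >= upper_bound:
            return None            # not even the largest allowed P worked
        last_failed = p
        p = min(2 * p, upper_bound)
        result = session(p)
    # Phase 2: binary search in (last_failed, p] for the smallest success.
    lo, hi, best_p, best = last_failed + 1, p, p, result
    while lo < hi:
        mid = (lo + hi) // 2
        r = session(mid)
        if r is not None:
            best_p, best, hi = mid, r, mid
        else:
            lo = mid + 1
    return best_p, best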
4.2 Rectilinear Partitioning
We now come to the second contribution of our work, an alternative rectilinear partitioning algorithm.
Like that of Li et al., it optimizes the cuts in one dimension given a fixed set of cuts in the other. In
the discussion to follow we take C 1 as fixed.
At each step of the algorithm we seek to define a column of buckets whose width is as wide as
possible without any bucket in the column being assigned more than B tuples. To define the first column
we seek the largest index j1 for which OV1(1, j1) = 0. Since OV1(1, j) is non-decreasing in j, we may identify j1 with a binary search. Using the precalculated rk(1, j), each candidate j requires O(P) time to compute OV1(1, j), hence O(P log U2) time is required to define the first column. The second column is computed exactly as the first, only taking index j1 + 1 as the starting point, i.e., identify the largest j2 for which OV1(j1 + 1, j2) = 0. This process continues until either P or fewer adjacent overflow-free columns are discovered, or all P − 1 cuts are placed and the last column suffers overflow. In the former case the partitioning session terminates; in the latter case we may freeze the newly discovered cuts and choose new cuts in the other dimension. The complexity of one partitioning session has several components. First there is an O(T log T) cost for sorting the tuples in each dimension. Then, each time we optimize in one dimension we first compute new rk(1, j) values, which takes O(T log(P)) time. This is followed by an O(P^2 log U2) cost for allocating cuts. Since any partitioning session iterates a bounded number of times, the asymptotic complexity is O(T log(T) + P^2 log U2).
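A sketch of this column-by-column cut assignment is given below; it reuses per-slice prefix sums like those described for the DP algorithm, and the names assign_cuts, fixed_cuts, and num_columns are ours rather than the paper's.

from bisect import bisect_left

def assign_cuts(points, fixed_cuts, p, capacity, num_columns):
    """Greedy cut assignment for the free dimension, given fixed cuts.

    Returns (cuts, ok): at most p - 1 column boundaries and a flag telling
    whether every resulting column of buckets stayed within capacity.
    """
    slices = len(fixed_cuts) + 1
    prefix = [[0] * (num_columns + 1) for _ in range(slices)]
    for x, y in points:
        prefix[bisect_left(fixed_cuts, x)][y] += 1
    for k in range(slices):
        for j in range(1, num_columns + 1):
            prefix[k][j] += prefix[k][j - 1]

    def overflow(i, j):          # OV1(i, j), one subtraction per slice
        return sum(max(0, row[j] - row[i - 1] - capacity) for row in prefix)

    cuts, start = [], 1
    for _ in range(p - 1):
        # Binary search for the largest j with overflow(start, j) == 0.
        lo, hi = start, num_columns
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if overflow(start, mid) == 0:
                lo = mid
            else:
                hi = mid - 1
        if overflow(start, lo) != 0:     # even a single column overflows here
            return cuts, False
        cuts.append(lo)
        start = lo + 1
        if start > num_columns:
            return cuts, True            # everything placed early
    return cuts, overflow(start, num_columns) == 0

points = [(1, 1), (2, 2), (1, 3), (1, 4), (2, 8), (4, 2), (5, 1), (7, 2),
          (4, 3), (5, 3), (7, 4), (3, 9), (9, 3), (8, 8)]
print(assign_cuts(points, fixed_cuts=[2, 6], p=3, capacity=2, num_columns=9))
# -> ([2, 7], True): an overflow-free cut set for the free dimension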
The original rectilinear application [4] was shown to converge to unchanging cut sets (given sufficiently
many iterations). Our algorithm too would converge, but we have found it more prudent to back
away to a larger number of partitions when a small number of iterations fails to find a suitable partition.
The original rectilinear partitioning problem was shown to be NP-hard in three dimensions; the same
proof suffices to show the intractability of finding minimal P for which a square overflow-free partition
exists. The tractability of the rectilinear partitioning problem in two dimensions is still unknown.
It is informative to consider an essential difference between our partitioning algorithm and that
of Li et al. We are uninterested in any partition that has overflow, and so expend no computational
energy on minimizing non-zero overflows. If, given C 1 it is possible to find C 2 yielding an overflow-free
partition, our algorithm will find it. If none exists, our algorithm determines that quickly. By contrast,
the previous algorithm seeks to find C 2 that minimizes overflow. We are uninterested in whether the
minimal overflow is two or three, only whether it is zero or non-zero. This distinction permits us to find
overflow-free partitions with substantially less work than the previous algorithm, as will be seen in the
empirical results.
4.3 Aggregation
Our third contribution is an algorithm for aggregating adjacent buckets with low utilization. After the
partitioning phase some of the buckets may have low utilization. If two adjacent buckets both have 50%
utilization or smaller we may combine them into a single bucket (even though the gridfile directory will
contain two pointers-they will be identical). Following partitioning, we apply an aggregation scheme
based on this observation.
Let B equal the bucket capacity. First assume the grid directory is of size 2^i × 2^i, and view it as four equal quadrants labeled NW, NE, SE, SW. Define a procedure CanMerge(A, B, j) that returns logical true if neither A nor B has already been merged into some group at level j and their sum of utilization is less than 100%. Define a procedure Merge(A, B, j) to merge A and B into one bucket at level j. Using CanMerge and Merge we define a recursive function Aggregate(A, j) as follows.
Figure 2: Aggregation Examples (left: a small directory with its bucket loads; right: decomposition of a 13 × 20 directory into G1-A, G1-B, G1-C, and G2).
1. If A consists of a 1 × 1 gridfile or if A has already been merged into some group at a previous level, then return.
2. Partition A into four quadrants, NW,NE,SE,SW.
3. If the sum of utilizations of all four quadrants is less than 100%, aggregate them all into one
bucket, return.
4. if CanMerge(NW,NE,j) AND CanMerge(SW,SE,j) then:
call Merge(NW,NE,j), Merge(SW,SE,j)
5. if CanMerge(NW,SW,j) AND CanMerge(NE,SE,j) then:
call Merge(NW,SW,j), Merge(NE,SE,j)
6. if CanMerge(NW,NE,j) then: call Merge(NW,NE,j)
7. if CanMerge(SW,SE,j) then: call Merge(SW,SE,j)
8. if CanMerge(NW,SW,j) then: call Merge(NW,SW,j)
9. if CanMerge(NE,SE,j) then: call Merge(NE,SE,j)
10. call Aggregate(NW,j+1), Aggregate(NE,j+1), Aggregate(SW,j+1), Aggregate(SE,j+1)
Assuming the grid file directory D is initially 2^i × 2^i, the aggregation is accomplished with the call Aggregate(D, i).
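For a power-of-two directory the recursion can be sketched as below. Buckets are represented only by their tuple counts, merges are recorded as groups of cell coordinates, and steps 4 through 9 are collapsed into one loop over the two pairing orientations, so this is a simplification of the procedure above rather than a line-by-line transcription; the 2 × 2 example at the end is made up, since the loads shown in Figure 2 are not reproduced here.

def aggregate(counts, capacity):
    """Greedy buddy-style aggregation of a 2^i x 2^i grid of bucket loads.

    Returns a list of groups; each group is a list of (row, col) cells whose
    buckets share one physical page (their loads sum to at most `capacity`).
    """
    n = len(counts)
    merged = [[False] * n for _ in range(n)]
    groups = []

    def load(r, c, size):
        return sum(counts[r + dr][c + dc] for dr in range(size) for dc in range(size))

    def cells(r, c, size):
        return [(r + dr, c + dc) for dr in range(size) for dc in range(size)]

    def free(r, c, size):
        return not any(merged[x][y] for x, y in cells(r, c, size))

    def mark(group):
        for x, y in group:
            merged[x][y] = True
        groups.append(group)

    def recurse(r, c, size):
        if size == 1 or not free(r, c, size):
            return
        # Step 3: do all four quadrants together fit in one bucket?
        if load(r, c, size) <= capacity:
            mark(cells(r, c, size))
            return
        h = size // 2
        quads = [(r, c), (r, c + h), (r + h, c), (r + h, c + h)]   # NW, NE, SW, SE
        # Steps 4-9, simplified: try horizontal halves, then vertical halves.
        for a, b in [((r, c), (r, c + h)), ((r + h, c), (r + h, c + h)),
                     ((r, c), (r + h, c)), ((r, c + h), (r + h, c + h))]:
            if (free(*a, h) and free(*b, h)
                    and load(*a, h) + load(*b, h) <= capacity):
                mark(cells(*a, h) + cells(*b, h))
        for qr, qc in quads:
            recurse(qr, qc, h)           # step 10

    recurse(0, 0, n)
    # Any cell never merged keeps its own bucket.
    for r in range(n):
        for c in range(n):
            if not merged[r][c]:
                groups.append([(r, c)])
    return groups

print(aggregate([[3, 8], [4, 9]], capacity=10))
# -> NW and SW are grouped; NE and SE stay alone, mirroring the example below.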
As an example consider the grid directory in the left hand side of figure 2 and a bucket capacity of
10. Entries in the directory are the number of tuples in the bucket. We cannot merge the whole directory into one bucket, nor can we merge it as two halves, but we can merge the NW and SW quadrants and then
call the aggregation strategy on the two remaining quadrants.
In practice there is no restriction to powers of two. Although our current partitioning algorithm
assumes the grid has an equal number of partitions in each dimension we present our aggregation
algorithm in the most general case. Without loss of generality assume the shape of the grid directory
is N rows by M columns, where N ≤ M. We find the largest i such that 2^i < N. Let G1 be the 2^i × M subdirectory of the grid directory composed of the first 2^i rows, and let G2 be the (N − 2^i) × M subdirectory composed of the complement of the original directory. We first aggregate G1. Let K = M div 2^i; this is the number of square 2^i × 2^i subdirectories that can fit in G1. For each one of these square subdirectories we apply the square region aggregation algorithm above. We are then left with a 2^i × (M mod 2^i) region of G1 and with G2. We apply the algorithm recursively on these two regions.
In the right hand side of figure 2 we show an example for a 13 \Theta 20 grid directory. Subdirectory G1
is composed of G1-A, G1-B, and G1-C. The square power of two region aggregation policy above is
applied to G1-A and G1-B, while the entire aggregation policy is called recursively on G1-C and G2.
This algorithm could be improved to yield slightly better bucket utilizations, but it is very fast and has proved to be sufficient for our needs so far.
Depending on the use of the gridfile, different aggregation strategies can be used. If the gridfile
is read only, as in our CFD database, then the buddy-system pairing approach needed to facilitate
splits for future insertion of tuples is not necessary. In this case regions of aggregated buckets need
not be rectangular and hence could allow for more aggregation resulting in improved bucket utilization.
We have not yet developed any algorithms to calculate this aggregation since the above algorithm has
been sufficient for our needs to date. On the other hand, if the gridfile is being used in a transaction
processing environment and tuples might later be inserted, the buddy pairing must be preserved.
5 Experimental Comparison
In this section we present experimental results for the two partitioning algorithms. We present both
run times and bucket utilization results. In all of our experiments we do not make any attempt to get
smooth curves or collect confidence intervals. The figures are the result of one experimental run and
thus often have some noise, presumably from use of the workstation by other jobs. All experiments
were run on a Sparc 10 workstation.
Sanity checks on the code were made by running both algorithms through a profiler to make sure
time was being spent in sections of the code where expected. The run time of the RP algorithm is
dominated by the startup costs of creating the pre-calculated r k (1; j) and sorting the records. For most
of the data sets in this paper over 40% of the run time is spent creating the r k (1; j) and over 20% of
the time sorting the data points. Note that even with this high cost of creating r k (1; j), the overall
algorithm significantly faster than when the r k (1; j) are not precalculated. In contrast, the run time of
the DP algorithm is dominated by the actually partitioning since it is O(P 2 U 2
In section 5.1 we present results for a single partitioning given a fixed partitioning in the other
dimension. In the following sections we present results assuming the number of partions and the
initial partitioning is not known. In section 5.2 we present results when the from uniformly distributed
synthetic data sets, while in section 5.3 we present results for highly skewed CFD data sets. In section
5.4 we present the bucket utilization results from our experiments and demonstrate the utility of the
aggregation phase.
5.1 Fixed Partitioning Given
We first compare the DP, DP2, and RP algorithms assuming that a fixed partitioning exists in one
dimension. We conduct these experiments since this is the exact scenario for which Li et al. proposed
their algorithm. Note again that how this fixed partitioning is obtained is not specified in Li et al. [2].
We consider a data set of 5,000 tuples where the x and y coordinates of each tuple are each chosen
from a uniform distribution from 1 to 2000. We obtain the initial horizontal partitioning by equally
spacing the cuts within the domain. In table 1 we present results for the number of partitions in each
dimension varied from 12 to 5 assuming a bucket capacity of 50 tuples. The columns headed "seconds"
record the amount of CPU time used for the partitioning, columns headed "overflow" are the total
number of tuples that did not fit within the bucket capacity, and the columns headed "BlocksOver"
are the number of blocks which overflowed. The overflow and BlocksOver numbers are identical for the
DP and DP2 algorithms since the algorithms find the exact same partitioning and only differ in run
time. First note that the RP algorithm is one to two orders of magnitude faster than the DP and DP2
algorithms for all values of P. Conversely, the dynamic programming algorithms minimize total overflow
better when there is a large number of partitions. Thus, for the specific problem and objective function
as formulated by Li et al. the dynamic programming algorithm proposed satisfies the objective function
better than our rectilinear partitioning algorithm, but at the expense of significantly more computation.
A premise of our work is that it is better to partition with a sufficiently large number of partitions to
ensure no overflows.
Note that although the DP algorithm does have a smaller number of tuples overflowed, it results
in a larger number of buckets which overflow when the number of partitions is less than 11. The blocks
which overflow when the RP algorithm is used are all in the last column of the partitioning, whereas
when the DP algorithm is used the overflow blocks are spread out in the partitioning space. Consider
the case where the number of partitions is 10. When the RP algorithm is used there are 10 overflow
blocks. These 10 blocks have 106, 106, 101, 111, 94, 94, 108, 112, 113, and 106 tuples allocated to them.
Since only 50 tuples fit per block, new blocks will need to be created. On the other hand, when the
DP algorithm is used there are 40 overflow blocks, each of which has at most 68 tuples, requiring 40 new
blocks to be created. Hence, total overflow is not a good indicator of the optimality of a partitioning.
We propose that a better metric would be the number of new blocks needed to hold the overflows. We
will continue to use total tuple overflow in this paper since our algorithms dynamically find the number
of partitions needed to make the overflow zero.
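One way such a metric could be computed is sketched below (our code, not the paper's); block_loads is assumed to map each block of a partitioning to the number of tuples assigned to it, and capacity is the bucket capacity.

import math

def extra_blocks_needed(block_loads, capacity):
    # Number of additional blocks required to hold all overflowing tuples,
    # assuming each overflowing block spills into freshly allocated blocks.
    return sum(math.ceil(load / capacity) - 1
               for load in block_loads if load > capacity)

By this measure the RP partitioning of the example above needs far fewer extra blocks than the DP partitioning, even though its total tuple overflow is larger.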
5.2 Number of Partitions Not Given: Uniformly Distributed Data
We now assume that the number of partitions is not known and that no initial fixed partitioning is
given. We first consider the run time of the algorithm for uniformly distributed data. The x and y
coordinates of each tuple are each chosen from a uniform distribution from 1 to N, where N depends
on the experiment. In all reported experiments we do not allow any duplicate data set points since our
CFD data does not have any duplicate points. We have verified that inclusion of duplicates results in
similar relative performance. We first consider the relative performance of the algorithms as the number
of tuples is varied.
                   RP Algorithm                          DP         DP2        DP/DP2
Partitions   seconds    overflow   BlocksOver    seconds    seconds    overflow   BlocksOver
9            6.20e-01   1672       9             1.59e+02   1.13e+02   950        76
7            6.30e-01   2958       7             1.22e+02   8.94e+01   2550       43

Table 1: CPU Times and Overflow, Fixed Partitioning Given
In figures 3 and 4 we plot the computation time in seconds versus the number of tuples in the
relation assuming coordinate values are uniformly distributed from 1 to 2000. Note that the y-axis is
logarithmic. From top to bottom we plot the computation time of the DP, DP2, and RP algorithms.
Remember that the DP2 algorithm is the same as the DP algorithm except it precomputes and stores
the OV 1 (i; j) for all i, j. The plot in figure 3 assumes 50 tuples fit per page, the plot in figure 4 assumes 300
tuples per page. If the page size is 8192 bytes then the tuple sizes would be 164 and 27 bytes respectively. A
tuple size of 164 bytes may be a typical size for transaction processing systems, and tuples in our data
sets are usually around 24-32 bytes. As the number of tuples increases the run time of the DP algorithm
becomes too long to be of practical use. A relation of 40,000 164 byte tuples is only 6.4 mega-bytes, and for
the smaller tuples it is only 1.2 mega-bytes, hence it is reasonable to expect there to be sufficient memory
to partition data sets of at least 40,000 tuples.
For 40,000 164 byte tuples, figure 3, the DP algorithm requires 26600 seconds (about 7.4 hours),
and 100,000 tuples require 77200 seconds (21.4 hours). These times are clearly prohibitive. The DP2
algorithm requires 3000 seconds (50 minutes) and 6070 seconds (101 minutes) for 40,000 and 100,000
tuples respectively, but it requires 15 mega-bytes of space to hold the precomputed OV 1 (i; j). The RP
algorithm only requires 12 and 40 seconds for 40,000 and 100,000 tuples respectively. Thus, the RP
algorithm is a practical algorithm. The RP algorithm is about 2000 (250) times faster than the DP (DP2)
algorithm for 40,000 tuples. The difference in solution times is not unexpected given the complexities
of the DP, DP2, and RP algorithms: each is of the form O(max{·, T log T}), with the T log T term
accounting for sorting, and with a leading term that grows as P^2 U^2 for DP but
only as P^2 log(U max) for RP.
We now consider how the number of unique attribute values in the data set impacts the relative
performance of the policies. In figure 5 we plot the computation time in seconds versus the maximum
of the attribute domain for a data set with 40,000 tuples and assuming 300 tuples fit per page. Note
that the y-axis is logarithmic. The curves from top to bottom are for the DP, DP2, and RP algorithms.
We did not run the DP2 algorithm when the storage space required for the precalculated OV 1 (i; j) exceeded the
available memory, thus there are no points plotted for maximum domain values of 5,000 and higher. Increasing
the maximum domain value increases the number of unique attribute values in the data set. The DP
and DP2 algorithms are highly sensitive to the number of unique values in the data set. Conversely,
the RP algorithm is relatively insensitive to the number of unique values. When the maximum domain
value is 2,000, the RP algorithm is 450 (110) times faster than the DP (DP2) algorithm. When the
maximum domain value is 10,000, the RP algorithm is 17,000 times faster than the DP algorithm. All
other experiments in this section assume a maximum domain value of 2000. For many of our CFD data
sets the number of unique values is almost equal to the number of tuples, thus even 10,000 is a very
small value.
We now consider how the tuple size affects the relative performance of the two algorithms. In figure
6 we plot the computation time in seconds versus the number of tuples per page assuming 40,000 tuples
with an attribute domain maximum of 2000. Once again the y-axis is logarithmic. As the number
of tuples per page decreases, and hence the tuple size increases, the DP algorithm requires significantly
more computation. Conversely, the RP algorithm is relatively insensitive to the size of the tuples.
Thus, the RP algorithm remains a viable algorithm for a wide range of tuple sizes. The degradation
of the DP algorithm as tuple size increases is easy to predict from the complexity of the algorithm:
as tuple size increases the number of tuples per bucket decreases
and hence the number of partitions, P , increases. We would expect the runtime of the RP algorithm
to increase also since the complexity of the RP algorithm is O(P 2 log(U max )), but the majority of the
run time of the RP algorithm is spent sorting the tuples and creating the r k (1; j), thus obscuring the
sensitivity to tuple size.
In figure 7 we plot the ratios of the computation time of the DP and DP2 algorithms relative to
the RP algorithm. As the tuple size increases the ratio increases.
5.3 Number of Partitions Not Given: Unstructured CFD Data
We now consider the run time of the algorithm for highly skewed data. We use actual data sets from
unstructured grid CFD simulations. Here the term grid is used to describe the way the coordinates in
the data set are connected. The data set is composed of x,y real-valued coordinates. The data sets
are from computational models of cross sections of airflows around aircraft wings [3]. In figure 8 we
plot the data set for a set with 1034 points. We restrict the range of x and y values plotted
since the majority of the data is in the central region and plotting the whole range would
make it difficult to distinguish the points in areas of high concentration. Only 94 of the 1034 points are
not plotted. The vertical and horizontal lines are the partitioning lines resulting from running the RP
algorithm on the data set. Note that there is one vertical partitioning line that falls outside the plotted range.
As can be seen from the partitioning, a fixed equal space partitioning would be a bad choice.
In figure 9 we plot the partitioning computation time versus the number of tuples for three different
data sets. For the smallest data set, 1034 tuples, the DP (DP2) algorithm required 2370 (650) times
more computation than the RP algorithm for partitioning. For the data set with 3959 tuples, the DP
(DP2) algorithm required 38,817 (5629) times more computation than the RP algorithm. Thus, the DP
algorithm is especially impractical for highly skewed data. Since the DP algorithm required 42 hours
for the 3959 tuple data set, we did not run it on the 15,895 tuple data set. The RP algorithm required 66
seconds to partition a 15,895 tuple data set.
The four orders of magnitude difference in computation time is not surprising in light of the results
from the experiment plotted in figure 5. For unstructured grid data sets the number of unique attribute
values is almost equal to the number of tuples, hence as the number of tuples in the set increases not
only does the number of partitions needed increase, but so does the number of unique attribute values.

                      RP Algorithm                                DP Algorithm
Tuples     Partitions   pre-aggregation   post-aggregation   Partitions   pre-aggregation   post-aggregation
1000       5            8.00e-01          8.70e-01           5            8.00e-01          8.00e-01
5000       12           6.94e-01          7.94e-01           12           6.94e-01          7.81e-01
10000
20000
80000      48           6.94e-01          7.31e-01           47           7.24e-01          7.37e-01
100000     53           7.12e-01          7.27e-01           52           7.40e-01          7.43e-01

Table 2: Average Bucket Utilizations, Number of Tuples Varied
The RP algorithm does not experience as much of an increase in computation time as the data sets get
larger since the majority of its time is spent in the precalculation of the r k (1; j) and the initial sort of
the data.
5.4 Bucket Utilizations and Aggregation Effectiveness
We now present the average bucket utilizations for some of the previous experiments both before and
after our aggregation phase is completed. In table 2 we present the utilizations for the uniformly
distributed data experiment in figure 3. The column label "Partitions" is the number of partitions in
each direction. This was the smallest number for which the algorithm returned a total overflow of zero.
Overall the average bucket utilization is quite good, about the same as would result from inserting the
tuples one at a time. There is little difference between the utilization for the DP and RP algorithms. In
addition, the aggregation phase does not significantly improve the bucket utilization. This is because
the bucket utilization is already good. For most experiments, the run time of the aggregation phase is
minimal, less than 2% of the RP runtime, hence it is worth aggregating even for a modest improvement.
In table 3 we present the utilizations for the uniformly distributed data experiment in figure 6.
Once again there is little difference in bucket utilization for the two algorithms. The average bucket
utilization tends to decrease as the number of tuples per page decreases. When only 5 tuples fit per
page the bucket utilization is only 28%, but after the aggregation it is better than 70%. Thus, the
aggregation phase can considerably improve the utilization for cases where the utilization is poor. The
runtime of the DP algorithm for 5 and 10 tuples per page was excessive and hence we do not present
aggregation results for those parameters.
For skewed data the aggregation phase results in substantial savings of disk space. In table 4 we
present the utilizations for the unstructured grid CFD data set for three different grids. The average
bucket utilization without aggregation is very poor but improves significantly with aggregation. Thus,
for highly skewed data aggregation is essential for achieving good bucket utilizations. Note that there is no
15,895 tuple data for the DP algorithm, since its computation time on the 3959 tuple data set already required
42 hours.
                      RP Algorithm                                DP Algorithm
Tuples     Partitions   pre-aggregation   post-aggregation   Partitions   pre-aggregation   post-aggregation
per page
200        15           8.89e-01          8.89e-01           15           8.89e-01          8.89e-01
100        22           8.26e-01          8.26e-01           22           8.26e-01          8.26e-01

Table 3: Average Bucket Utilizations, Tuples Per Page Varied
                      RP Algorithm                                DP Algorithm
           Partitions   pre-aggregation   post-aggregation   Partitions   pre-aggregation   post-aggregation

Table 4: Average Bucket Utilizations, Unstructured Grid CFD Data
6 Two-Phase Bulk Loading Algorithm Description
In this section we describe a two phase algorithm for bulk loading of data sets significantly larger than
available buffer space. Suppose the data set contains S tuples, and suppose that a maximum of A
tuples can be contained in memory at a time when applying the RP algorithm. Our approach has two
steps. First we partition the set into groups of size A or fewer. Each set will contain all points within
a rectangle in the x-y plane; however the collection of sets need not be rectilinear. In the second step
we apply RP to each individual set, and merge the individual grid files created. These steps are now
elaborated upon.
Given S and A we find the smallest perfect square integer R such that R ≥ S/A. We will partition
the data set into R groups, as follows. By sorting the data set on the x-coordinate value we may easily
divide the set into √R groups of S/√R successive elements in the sorted order. This serves to partition
the data set along the x-axis into "strips" of tuples. Each such strip may be sorted along the y-axis,
after which its points may be separated into groups of successive points, none containing more than the
permitted number A of points. This effectively divides a strip
into rectangles, with no rectangle containing more than the permitted number of points.
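A minimal Python sketch of this first phase, under the assumption that it is enough to guarantee that no group exceeds the in-memory capacity A (the names points, A and phase_one_groups are ours):

import math

def phase_one_groups(points, A):
    # points: list of (x, y) pairs; A: maximum group size that fits in memory.
    if not points:
        return []
    S = len(points)
    r = math.ceil(math.sqrt(S / A))        # smallest perfect square R = r*r with R >= S/A
    by_x = sorted(points, key=lambda p: p[0])
    strip_size = math.ceil(S / r)
    groups = []
    for i in range(r):
        strip = by_x[i * strip_size:(i + 1) * strip_size]
        strip.sort(key=lambda p: p[1])     # within a strip, cut along y
        for j in range(0, len(strip), A):
            groups.append(strip[j:j + A])  # each group holds at most A points
    return groups

Each returned group covers a rectangle in the x-y plane, and no group exceeds A points, which is the property the second phase relies on.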
It remains to apply RP to each group, and write the buckets of data to disk. One possibility is
to partition each group separately, and define the final grid file as the union of all separately defined
gridfiles. Recognizing that a cut which is defined for a group on one side of the data domain must
propagate through √R − 1 other groups (and cause splitting of grid directories in each), we consider a
different approach. As the groups are partitioned we build up a global grid file, initially empty. Upon
reading in a group we identify the set of cuts in the global grid file which affect this group, treat them
as immutable, and seek to find the minimum number of additional cuts needed to avoid overflow. This
requires a simple modification to the RP algorithm.
Another optimization is to first strip the attributes being indexed from the data set. Then the
two phase algorithm is applied to the coordinates without requiring I/O of the whole tuple. After
partitioning the set of coordinates and creating the overall grid directory, the buckets could be filled by
making a second pass over the data set. This may result in a faster load time if the tuple size is large.
If the data set (and hence the grid directory) is extremely large, another optimization uses a two
level directory scheme as suggested in [5] where the top level directory has one entry for each of the R
sub-directories. Note, this would mean that a point access could require three disk accesses instead of
two.
7 Conclusions and Future Work
We have proposed and implemented a new rectilinear partitioning (RP) algorithm for physical partitioning
of gridfiles. Our proposed RP algorithm is significantly faster than the recently proposed
dynamic partitioning (DP) algorithm of Li et al. [2]. The number of overflows RP permits is necessarily
larger than that of the DP algorithm (which minimizes them); however, we argue that minimizing the number
of additional blocks created due to overflow is actually a better measure, and it is one for which the RP
algorithm finds better solutions than the DP algorithm.
We considered the use of our greedy algorithm and the DP algorithm as kernels in a loop that
seeks to minimize the size of the grid file needed to achieve no overflows. For synthetic data sets of
uniformly distributed integers the RP algorithm is two to three orders of magnitude faster than the DP
algorithm. For actual CFD data sets, whose indexed attributes are highly skewed reals, the RP-based
algorithm is three to four orders of magnitude faster than the DP-based algorithm.
We have also developed an efficient aggregation algorithm for improving bucket utilizations of grid-
files resulting from bulk loading using the RP or DP partitioning algorithms. The algorithm has minimal
overhead, and can yield substantial improvements in bucket utilization when the bucket utilization after
partitioning is poor. This aggregation phase is necessary to achieve reasonable bucket utilizations when
the indexed data is highly skewed.
We have also proposed a two phase bulk load algorithm and several optimizations for loading
data sets that are significantly larger than the available buffer space. This algorithm guarantees no
bucket overflows and is proposed as a possible alternative to sampling based methods. We have not yet
investigated the performance of the algorithm.
In the future we plan to experimentally compare our two phase algorithm with inserting one tuple
at a time and sampling based methods. We also intend to consider more sophisticated aggregation
techniques and partitioning with differing numbers of partitions for each attribute.
References
[1] Fundamentals of Computer Algorithms.
[2] "Algorithms for Loading Parallel Grid Files."
[3] "Algebraic Turbulence Modeling for Unstructured and Adaptive Meshes."
[4] "Rectilinear Partitioning of Irregular Data Parallel Computations."
[5] "The Grid File: An Adaptable, Symmetric Multikey File Structure."
[6] "Time and Space Optimality in B-Trees."
[7] "Probabilistic Method in Query Processing."
[8] "Sampling Issues in Parallel Database Systems."
Logic as a Query Language

Abstract: Research in nonmonotonic reasoning has focused largely on the idea of representing knowledge about the world via rules that are generally true but can be defeated. Even if relational databases are nowadays the main tool for storing very large sets of data, the approach of using nonmonotonic AI formalisms as relational database query languages has been investigated to a much smaller extent. In this work, we propose a novel application of Reiter's default logic by introducing a default query language (DQL) for finite relational databases, which is based on default rules. The main result of this paper is that DQL is as expressive as SO∃∀, the existential-universal fragment of second-order logic. This result is not only of theoretical importance: we exhibit queries, which are useful in practice, that can be expressed with DQL and cannot with other query languages based on nonmonotonic logics such as DATALOG with negation under the stable model semantics. In particular, we show that DQL is well-suited for diagnostic reasoning.

I. Introduction
FOR the purpose of Knowledge Representation, non-monotonic
reasoning (NMR henceforth) formalisms
can be used in two different ways:
as languages for representing knowledge about the
world, via rules that are generally true but can be de-
feated. Retrieving information from a non-monotonic
knowledge base of this kind amounts to proving a theorem.
As an example, we can use default logic to state that
"birds generally fly". In order to prove that the bird
Tweety flies, we try to prove that a specific formula
follows (in the default logic semantics) from the set of
general rules plus a set of specific facts;
as relational database query languages. Retrieving information
amounts to computing the set of tuples belonging
to an intensional relation, starting from some
extensional relations. As an example, we can query
a relational database by means of a DATALOG¬ program
(i.e., a DATALOG program with negated literals
in the body of the rules) equipped with a specific semantics
for negation.
Research in NMR has focused largely on the former idea,
and remarkable results about the computational complexity
of several formalisms have been obtained by many authors
(cf. [1] for a survey on this topic).

A preliminary and partial version of this paper appears in the
Proceedings of the Fourth International Conference on Principles of
Knowledge Representation and Reasoning (KR-94), Bonn, Germany,
May 1994. Morgan Kaufmann Publishers, Inc., San Francisco, CA.
M. Cadoli is with the Dipartimento di Informatica e Sistemistica,
Università di Roma "La Sapienza", Via Salaria 113, I-00198 Roma,
Italy. E-mail: [email protected]
T. Eiter and G. Gottlob are with the Information Systems Department,
Technical University of Vienna, Paniglgasse 16, A-1040 Wien,
Austria. E-mail: (eiter|gottlob)@dbai.tuwien.ac.at
Even if relational databases are nowadays the main tool
for storing very large sets of data, the latter approach has
been investigated to a much smaller extent.
One of the most important aspects of a query language
for relational databases is its expressive power, i.e., the set
of relations that we can compute by querying. The expressive
power of relational database query languages has
been studied for some twenty years now (cf. [2]). Research
has focused mainly on monotonic query languages, i.e., languages
such that if the extensional relations grow then the
intensional ones grow as well.
Recently some interesting works investigating the expressive
power of non-monotonic query languages appeared.
Kolaitis and Papadimitriou study in [3] the expressive
power of two semantics for DATALOG¬ programs. In particular
they prove that DATALOG¬ with fixed-point semantics
is as expressive as SO∃, the existential fragment
of second-order logic. Schlipf proves in [4] an analogous
result for DATALOG¬ with stable model semantics for
logic programs [5] (stable models are called default models
in [6], [7]); in the following, we refer to this variant
of DATALOG as DATALOG¬stable. Saccà gives in [8] further
insight on the expressive power of interesting variants
of DATALOG¬stable. Van Gelder analyzes in [9] the expressive
power of DATALOG¬ with well-founded semantics
[10]. In all these papers, databases are modeled as
finite structures, i.e., finite interpretations of theories.
In this work we are concerned with default logic as a query
language. Default logic [11] is one of the most popular
NMR formalisms and has been extensively investigated
both from the semantical and the computational point of
view. It has also been proposed in [7] as a tool for inferencing
in logical databases (i.e., databases which are theories).
Anyway, the behavior of default logic on finite structures
(i.e., on relational databases) has not been analyzed so far.
Here we propose a novel application of default logic by
introducing a default query language (DQL) for finite relational
databases, which is based on default rules. The
main result of this paper is that DQL is more expressive
than DATALOG¬stable. In particular DQL is as expressive
as SO∃∀, the existential-universal fragment of second-order
logic. This result is not only of theoretical importance: we
exhibit queries (which are useful in practice) that can be
expressed with DQL and can not with DATALOG¬stable.
Two of the queries are taken from the realm of economics,
while another one deals with diagnostic reasoning on a cir-
cuit; it appears that DQL allows for an easy formalization
of this process.
An alternative way of describing our main result is to
say that DQL "captures" the complexity class Σp2 of the
polynomial hierarchy, while DATALOG¬stable "just" captures
the class NP. Therefore DQL is more expressive than
DATALOG¬stable, provided that Σp2 ≠ NP, i.e., that the polynomial
hierarchy does not collapse (a property that has
been widely conjectured and that will be assumed throughout
this work).
We recall that Σp2-completeness of credulous propositional
default reasoning has been recently proven [12],
[13]. It is therefore important to remark that the expressive
power of a language is not necessarily the same as
its complexity. Several languages with this property are
known, cf. [14], [15]. As an example, a language which does
not capture NP (even if it has an underlying NP-complete
problem) has been shown by Stewart in [16].
To show that DQL is well-suited for formulating useful
and practical queries in different domains, we present
several examples. One of the examples deals with troubleshooting
for electric circuits. It appears that DQL allows
for an elegant implementation of advanced diagnostic
reasoning principles (in particular, abductive model-based
diagnosis [17], [18]). Other examples deal with relevant
queries in business administration.
The structure of the paper is the following. Section II
provides some motivating examples. In Section III we state
necessary preliminaries on query languages for relational
databases and provide a brief introduction to default logic.
In Section IV we give the definition of the query language
DQL, providing syntax and semantics. In Section V we give
a formal proof of the fact that DQL captures Σp2. Moreover,
we briefly address the computational complexity of
DQL, and we consider the sublanguage of normal DQL
queries, which is analogous to normal default theories. In
Section VI we show how complex problems like those presented
in Section II can be solved in DQL. We draw some
conclusions and compare DQL with some query languages
arising from logic programming in the final Section VII. In
order to increase the readability, the proofs of all results
have been moved to an appendix.
II. Motivating examples
Before diving into the technical part of the paper, let us
motivate with some examples why query languages with
higher expressiveness than DATALOG¬stable are needed.
These examples will be dealt with in a much more detailed
way in Section VI.
Our first example here is from the domain of model-based
diagnosis (MBD) of electric devices. This very promising
field is an active research area in AI, cf. [19], [20]. It would
therefore be interesting to integrate the paradigm of MBD
into relational databases, as the descriptions of large electric
systems (e.g., of power networks) should beneficially
be stored in a database.
Example 1: (Electric circuit troubleshooting) An
electric circuit has several components. All of them might
be in some way malfunctioning, or even misdesigned (e.g.,
the resistance value of a resistor could be too small).
In the (simplified) electric circuit represented in Figure 1,
we have a battery, two resistors and two fuses, and a control
light with a bulb. The higher the battery's voltage the
higher the current's amperage. Similarly, the smaller the
resistor's resistance the higher the amperage. If the amperage
in the circuit exceeds a certain amount, one of the
fuses melts; the circuit is interrupted, and the control light
is off. It is not predictable which one of the fuses will melt,
or even if both of them will.

Fig. 1. Electric circuit
In the kind of MBD known as abductive diagnosis, it is
customary to say that when a component does not work
properly, one or more effects can show up. Moreover, an
effect is originated by a set of causes, and a set of causes
does not originate univocally an effect. In the above circuit,
melting of a fuse is an effect, and the battery's voltage being too
high is a cause. Similarly, a melted fuse (as well as a broken
bulb) is a cause for the effect of the control light being off
(cf. [21], [17], [18], [22] for a background and overview on
results on abductive diagnosis).
In abductive diagnosis, knowledge about a circuit such as
the above one is typically described by a diagnostic theory
that consists of rules of the form

    C1 ∧ · · · ∧ Ck → E1 ∨ · · · ∨ Em    (1)

where the Ci's are causes and the Ej's are effects. As an
example, the following rule states that melting of one of the
fuses will occur whenever both resistance values are low:

    R1-low ∧ R2-low → F1-melted ∨ F2-melted.

Not necessarily all of the antecedents of a rule of the kind
(1) are possible causes. As an example, we might be sure
that the first resistor is not underdimensioned; in such a
case, R1-low would not be a possible cause. The possible
causes are called hypotheses. Finally, in each specific situation
we make a set of observations, e.g., the control light
is off.
Given a situation, described by a set of rules of the kind
(1), a set of hypotheses and a set of observations, abductive
diagnosis looks for an explanation of the observations we
make. An explanation is a minimal set of hypotheses (wrt
inclusion), such that their validity implies through the rules
all observations. In a general situation it is possible that
one, many, or no explanations exist. The hypotheses that
belong to at least one explanation of the diagnostic problem
are called relevant for the problem; computing the relevant
facts of a diagnostic problem is an important subtask in
troubleshooting, for being able to focus fault localization
on a subset of the possible causes.
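For illustration only, the following Python sketch spells out these definitions for the special case of definite rules (a single effect per rule); it enumerates candidate hypothesis sets by brute force, so it is exponential and only meant to clarify the notions of explanation and relevance, not to be a general algorithm for this Σp2-hard task. All names are ours.

from itertools import combinations

def derived(hypotheses, rules):
    # Forward chaining over definite rules, given as (set_of_causes, effect).
    facts = set(hypotheses)
    changed = True
    while changed:
        changed = False
        for causes, effect in rules:
            if causes <= facts and effect not in facts:
                facts.add(effect)
                changed = True
    return facts

def explanations(hypotheses, rules, observations):
    # All inclusion-minimal sets of hypotheses that imply every observation.
    sols = []
    for k in range(len(hypotheses) + 1):
        for cand in combinations(sorted(hypotheses), k):
            if not observations <= derived(cand, rules):
                continue
            if any(set(s) <= set(cand) for s in sols):
                continue                      # a proper subset already explains
            sols.append(cand)
    return sols

def relevant(hypotheses, rules, observations):
    return {h for sol in explanations(hypotheses, rules, observations) for h in sol}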
It is easy to conceive a database that stores rules on
causes and effects, as well as hypotheses and observations.
Anyway, from results on the complexity of propositional
abduction in [22], it follows easily that computing the relevant
facts for troubleshooting problems is a problem that
is hard for the complexity class Σp2 of the polynomial hierarchy
(cf. [23]); in fact, already deciding if a specific fact
is relevant is hard. This means that it is not possible to
compute the relevant facts in DATALOG¬stable (the reason
why is discussed in Section III-A). Similarly, it is not possible
to formulate a Boolean query in DATALOG¬stable that
decides if a specific fact is relevant. On the other hand, in
Section VI we will show how it is possible to write a DQL
query that computes all facts relevant for the problem. 2
The remaining examples in this section deal with business
applications in a somewhat simplified economic world.
Example 2: (Strategic companies) Suppose a holding
owns some companies. Each company produces a set of
products. Each product is produced by at most two com-
panies. As an example, pasta is produced by Barilla and
Saiwa, while wine is produced only by Barilla. (Any resemblance to existing companies is purely coincidental.)
Suppose the holding experiences a crisis and has to sell
one company. The holding's policy is to keep on producing
all products. This clearly makes it impossible to sell
some companies (as an example, the Barilla company in the
above situation), because it would be impossible to produce
wine. Anyway the managers are even more cautious: They
know that in the future it may be necessary to sell more
companies, and they do not want to get into a situation in
which they will not be able to produce all products. More
formally, they are interested in the minimal sets of companies
that produce all products. A company is strategic if it
is in at least one of such minimal sets. Therefore a query
which is very relevant to the managers is whether a company
is strategic or not: They prefer to sell a non-strategic
company first, because after the transaction the minimal
sets of companies that produce all products remain the
same.
Now let us consider a slightly more complex situation,
in which up to three (say) companies can control another
company. As an example, companies Barilla and Saiwa
together have control over company Frutto.
Assume now that the following constraint is imposed for
determining strategic companies: a company A which is
controlled by other companies can be sold only
if at least one of its controlling companies is also sold. This constraint
completely changes the minimal sets of companies
that produce all products.
In the former case (no controlled companies) the problem
of deciding whether a company is strategic is in the
complexity class NP (cf. [24]), while in the latter case the
same problem is easily shown to be complete for the class
Σp2 of the polynomial hierarchy (cf. [25]). As a consequence
(cf. Section III-A), the former query is expressible
in DATALOG¬stable, while the latter is not. In Section VI
we will show how it is possible to write two DQL queries
that compute the sets of the strategic companies in the two
cases. 2
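As an illustration of the first variant (no control relationships), the following brute-force Python sketch computes the strategic companies directly from the definition; produces is assumed to map each company to the set of products it produces, and products is the set of all products. This is only meant to clarify the definition, not to suggest an efficient evaluation method.

from itertools import combinations

def strategic_companies(produces, products):
    companies = sorted(produces)
    minimal_sets = []
    for k in range(len(companies) + 1):
        for cand in combinations(companies, k):
            covered = set().union(*(produces[c] for c in cand)) if cand else set()
            if not products <= covered:
                continue
            if any(set(s) <= set(cand) for s in minimal_sets):
                continue                       # a proper subset already suffices
            minimal_sets.append(cand)
    # a company is strategic iff it appears in some minimal covering set
    return {c for s in minimal_sets for c in s}

The second variant would additionally discard candidate sets that violate the control constraint, which is exactly what raises the complexity to Σp2.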
Example 3: (Maximal trust) In a set of companies it
is possible to make agreements. A company may have an
agreement with another company, and of course it can have
simultaneously several agreements with different compa-
nies. A trust is a set T of companies such that each one
has an agreement with each other company in T ; a trust T
is maximal if there is no trust T 0 that has more companies
than T, i.e., |T′| > |T|. Now, the following query arises:
which companies belong to a maximal trust?
Computing this query is an NP-hard problem, and also a
co-NP-hard problem. As a consequence (cf. Section III-A)
the query can not be written in DATALOG¬stable (unless
some unexpected collapse of complexity classes). On the
other hand, the query can be expressed in DQL. 2
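Again for illustration, a brute-force Python sketch (our names): agreements is assumed to be a set of frozensets {a, b} recording pairwise agreements, and the routine returns the companies that belong to some maximum-cardinality trust.

from itertools import combinations

def companies_in_maximal_trusts(companies, agreements):
    def is_trust(group):
        # every pair in the group must have an agreement
        return all(frozenset((a, b)) in agreements
                   for a, b in combinations(group, 2))
    best, members = 0, set()
    for k in range(len(companies), 0, -1):
        for cand in combinations(sorted(companies), k):
            if is_trust(cand):
                best = k
                members |= set(cand)
        if best:                    # no larger trust exists beyond this size
            break
    return members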
III. Preliminaries
A. Relational databases
For a background on relational databases and query lan-
guages, the reader is referred to [26], [27]. A database
schema R is a finite set {R 1 , . . . , R n } of relation schemata.
A relation schema R i has a name N i and a finite list of attributes
A 1 , . . . , A l i . It will sometimes be denoted
as R i (A 1 , . . . , A l i ). The number l i is the arity of the relation
R i .
We assume that there is an underlying set
U of objects that can be used in relations (the so-called do-
main). The domain U is arbitrarily large but finite. Given
a relation schema R i , a relation instance is a set of tuples
of the form ⟨a 1 , . . . , a l i ⟩, where each a j ∈ U . A
database instance W is a set of relation instances. The
set of objects occurring in a database (the so-called active
domain) is a subset (possibly not strict) of U .
A database query (or often, simply query) is a mapping
from the set of all database instances of a fixed database
schema R (intuitively, the input relations) to the database
instances of a fixed database schema S (the output rela-
tions). It is assumed that the mapping is computable and
generic (i.e., invariant under renamings of the constants in
U).
A query language, with its syntax and semantics, evaluates
a well-formed formula (the query) over any database
instance and returns an answer, which is a database in-
stance; thus, each query defines a database query. The
answer of an I/O query is a set of relation instances, while
the answer of a boolean query is either yes or no (in this
case, the schema S consists of a single propositional letter,
or technically, of a 0-ary relation).
The expressive power of relational database query languages
is one of the most studied topics in database theory.
Basically one is interested in knowing which relations can
be expressed by a query language and which relations can
not. A relation is expressible in a query language if there
is a query in the language that, on any input database instance,
returns precisely the desired relation as the answer.
For example, the transitive closure of a graph is expressible
in DATALOG [26].
One way of presenting a result in this area is to say that a
query language can/can not express a specific relation. For
example, it is well-known that relational calculus can not
express the transitive closure of a relation [28]; the relation
of satisfiable propositional clause sets can be computed by
a fixed program in DATALOG¬stable but not in DATALOG
(unless P=NP), cf. [4].
Measuring the complexity of Boolean queries is straight-
forward. Usually, this is defined by referring to the following
problem: Given a database instance, decide if the query
evaluates to yes. The complexity of a query language is
the complexity of this problem for the queries of this lan-
guage. This is also termed the data complexity. If the query
is not fixed, i.e., the query and the database are given as
input, we analogously obtain the combined complexity of
the query language, which is often much higher.
In order to deal with the complexity and expressive
power of I/O queries, we use the key concept of query recognizability.
Definition III.1: (cf. [29]) Let C be a complexity class.
A query mapping database instances over R into database
instances over S is C-recognizable if deciding whether a
tuple t belongs to a certain output relation S i ∈ S is in C.
Then, the data and combined complexity of an I/O query
language can be defined in terms of query recognizability
similarly as for Boolean queries.
Typically, the expressive power of a query language is
represented as a set of logical sentences. As an example,
the results in [4] entail that the expressive power of
DATALOG¬stable under brave semantics is SO∃, the existential
fragment of second-order logic, i.e., the set of sentences
of the form ∃S Φ(S),
where S is a list of predicate symbols and Φ(S) is a function-free
first-order formula in which (among possibly others)
the predicates in S occur.
The traditional notion of a query language capturing a
complexity class C is that the queries definable in this language
are precisely those which are recognizable in C. For
example, Fagin's celebrated theorem is that the class of SO∃
queries captures the class NP [30].
Figure 2 describes well-known relations between the expressive
powers of several query languages and correspondences
to complexity classes, cf. [31], [32], [2]. Each
edge denotes inclusion, i.e., less expressive power, which
(assuming that complexity classes do not collapse) is always
strict.
As well-known, DATALOG does not express all queries
computable in polynomial time, and is incomparable to
first-order logic. Stratified DATALOG and inflationary
DATALOG have increasing expressive power; they can express
all first-order queries, but still not all polynomial time
computable queries. The latter is possible (already
for DATALOG with negation of input relations) if a linear
ordering < of the universe is given, cf. [33].
First-order logic is equivalent to relational algebra [28].
Note that DATALOG can express queries whose computation
is P-complete, and hence, unless P = NC,
is not included in the class of queries computable by
NC-circuits. The queries definable in first-order logic with
a linear ordering < are the same as the AC 0 -queries [31]
(AC 0 is the class of problems solvable by uniform families
of constant depth circuits of polynomial size, where the fan-in
of gates is unbounded). The AC 0 -queries are a proper
subclass of the NC-queries (NC is the class of problems
solvable in polylog parallel time with polynomial amount
of total work), which are a (most likely proper) subclass
of the queries computable in polynomial time.
The While-queries constitute a class of queries definable
in a programming-style query language, which provides assignments
of first-order expressions to tuple variables, sequencing
of statements, and while statements. This class
appears to be quite powerful, since PSPACE-hard queries
can be expressed. On the other hand, not all queries computable
in PSPACE can be expressed (e.g., if the universe
has even size); however, this is possible if a linear ordering
< of the universe is provided.
As already mentioned above, DATALOG¬stable under
brave semantics is equivalent to the SO∃ fragment of
second-order logic and thus captures NP. The queries definable
in the existential-universal second-order logic, SO∃∀,
i.e., the set of sentences of the form ∃S ∀T Φ(S, T)
on a relational function-free vocabulary, where S, T are disjoint
lists of predicate symbols and Φ(S, T) is first-order,
are of particular interest in this paper. The query language
based on default logic we present in Section IV, DQL, can
express exactly SO∃∀. This fragment of second-order logic
captures the class Σp2 of the polynomial hierarchy, which
consists of all decision problems that can be solved by a
non-deterministic Turing machine in polynomial time with
use of an oracle for a problem (i.e., a subroutine for solving
this problem in unit time) in NP (cf. [23]). Consequently,
unless Σp2 = NP (which is widely conjectured to be false),
DQL is much more expressive than DATALOG¬stable. Finally,
the queries definable in full second-order logic are
those computable within the polynomial hierarchy [34],
[35].
B. Default logic
Default logic has been introduced by Reiter in [11]; it
is one of the most extensively studied non-monotonic for-
malisms. For a detailed treatment of this formal system,
the reader is referred to [36]. Interesting relations between
default logic and database theory have been shown by
Bidoit and Froidevaux in [37]. They used default logic for
defining a semantics for negation in deductive databases.
Fig. 2. Query languages (noncollapsing complexity classes)

In default logic the knowledge about the world is divided
into two parts, representing certain knowledge and defeasible
rules, respectively. The first part (denoted with W )
is a set of closed first-order formulae, while the second one
(denoted with D) is a collection of special inference rules
called defaults. A default is a rule of the form
α(x) : β 1 (x), . . . , β m (x) / γ(x),
where α(x), β 1 (x), . . . , β m (x), γ(x) are well-formed formulas
whose free variables are among those of x = x 1 , . . . , x n .
α(x) is called the prerequisite of the default, β 1 (x), . . . , β m (x)
are called justifications and γ(x) is the consequence.
When m = 0, then the propositional constant ⊤
(true) is implicitly assumed as the justification of the default.
For convenience, we will omit writing α(x) if the prerequisite
is ⊤. A default is closed if none of α, β 1 , . . . , β m , γ
contains free variables. A default theory ⟨D, W⟩ is closed
iff all the defaults in D are closed. A default or default
theory which is not closed is called open.
The semantics of a closed default theory ⟨D, W⟩ is based
on the notion of extension, which is a possible state of the
world according to the knowledge base. Formally, an extension
can be defined using a quasi-inductive construction
as follows. Define that, for any set E of first-order formulae,
E 0 = W and, for i ≥ 0,
E i+1 = Cons(E i ) ∪ { γ | α : β 1 , . . . , β m / γ ∈ D, α ∈ E i , ¬β 1 ∉ E, . . . , ¬β m ∉ E },
where Cons() denotes classical deductive closure. Then, E
is an extension of ⟨D, W⟩ iff E = ∪ i≥0 E i .
E is deductively closed, and hence an infinite object.
Each extension E of ⟨D, W⟩ is identified by its generating
defaults, which allow for a compact representation
of E. The generating defaults of E, which we denote by
GD(E, ⟨D, W⟩), are the defaults α : β 1 , . . . , β m / γ from D such
that α ∈ E and ¬β j ∉ E, for all j = 1, . . . , m. E is
constructible from W and GD(E, ⟨D, W⟩) as follows:
Lemma 1: ([11, Theorem 2.5]) Let E be an extension
of the closed default theory ⟨D, W⟩. Then
E = Cons( W ∪ { γ | α : β 1 , . . . , β m / γ ∈ GD(E, ⟨D, W⟩) } ).
The definition of extension is extended to open default
theories by assuming that the defaults with free variables
implicitly stand for the infinite set of closed defaults obtained
by replacing the free variables with terms of the
Herbrand Universe of the default theory.
A default theory can have one, multiple or no extensions
in general. Therefore, defining entailment of a formula φ
from a default theory ⟨D, W⟩ is not straightforward. The
standard variants are credulous entailment, under which
φ is entailed if φ belongs to at least one extension of ⟨D, W⟩,
and skeptical entailment, under which φ follows if φ belongs
to all extensions of ⟨D, W⟩. From the computational
side, credulous and skeptical reasoning have been extensively
studied in the literature [38], [12], [13].

2 Actually, this is an equivalent characterization of extensions
rather than the original definition in terms of the fixed points of an
operator Γ.
We conclude this section with a well-known example for
a default theory.
Example 4: (Nixon diamond) Assume that
D = { Q : P / P, R : ¬P / ¬P } and W = { Q, R },
where P, Q, and R are propositional atoms with the meaning
that someone is a pacifist, a quaker, and a republican, respectively.
The default theory ⟨D, W⟩ has two extensions:
Cons({Q, R, P}) and Cons({Q, R, ¬P}). In the
first extension, a republican that is also a quaker is a pacifist,
while in the second he is not; the former extension has
the generating default Q : P / P, and the latter R : ¬P / ¬P.
Thus, under credulous semantics, we can conclude that the
person is a pacifist, as well as that (s)he is not a pacifist;
under cautious semantics, we can conclude neither. 2
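Although default logic works with general first-order theories, the extension check can be made concrete for the simple case in which W and all default components are ground literals. The Python sketch below (our code; it assumes a consistent W and is exponential in the number of defaults) implements the quasi-inductive characterization above by guess-and-check; on the Nixon diamond it returns exactly the two extensions just given.

from itertools import combinations

def neg(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def is_extension(E, W, defaults):
    # Rebuild the candidate E from W, applying a default when its prerequisite
    # is already derived and no justification is contradicted by E itself.
    F = set(W)
    while True:
        new = {c for (pre, just, c) in defaults
               if pre <= F and all(neg(j) not in E for j in just)}
        if new <= F:
            break
        F |= new
    return F == E and not any(neg(l) in E for l in E)

def extensions(W, defaults):
    # defaults: list of (prerequisites, justifications, consequent),
    # with prerequisites and justifications as sets of literals.
    conseqs = [c for (_, _, c) in defaults]
    found = []
    for k in range(len(defaults) + 1):
        for chosen in combinations(range(len(defaults)), k):
            E = set(W) | {conseqs[i] for i in chosen}
            if is_extension(E, W, defaults) and E not in found:
                found.append(E)
    return found

For example, extensions({'Q', 'R'}, [({'Q'}, {'P'}, 'P'), ({'R'}, {'~P'}, '~P')]) returns the two literal sets {Q, R, P} and {Q, R, ~P}.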
IV. Default Query Language (DQL)
In this section we give syntax and semantics of the default
query language DQL, and we consider a simple example;
more examples and applications will be
considered in Section VI.
A. Syntax
A DQL Input/Output query Q is a pair (B, D) of a set B of
first-order formulas (the background knowledge) and a set
D of open default rules, where the first-order language is
function-free and quantifier-free, plus a set of output relation
schemata S = {S 1 , . . . , S m }. The set of predicate symbols
occurring in the defaults of Q contains all the names
of the relation schemata of the database (the extensional
relations) and possibly other symbols (the intensional re-
lations). Output relations are intensional. The intuitive
meaning of the query is the following: We want to compute
all tuples in the S i relations which can be inferred under
the credulous default semantics. (See the next section for
a formal definition.) In particular we apply the credulous
default semantics to the propositional instantiation of the
open defaults and the background knowledge in the query,
plus the database.
A DQL Boolean query is a set of open default rules plus
a ground formula φ. The intuitive meaning of the query is
the following: we want to know whether φ follows (under
the credulous default semantics) from the propositional instantiation
of the defaults in the query plus the database.
Example 5: (Mary's office) We have two relation
schemes PROG (for "programmer") and MGR (for "manager"),
which both have a single attribute NAME. The
database instance W is as in the following table:

    PROG  NAME        MGR  NAME
          Peter            Mary
          Paul
          Mary

The I/O query Q has (B, D) as follows: the background
knowledge B is empty, and D consists of the open defaults

    prog(x) : ¬ex(x) / small-office(x)
    : ¬ex(x) / ¬ex(x)
    prog(x) ∧ mgr(x) : / ex(x)

S consists of the single relation scheme SMALL-
OFFICE(NAME).
The relational database states that Peter, Paul, and
Mary are programmers and that Mary is a manager. The
query is made out of three open defaults. The first one
states that a person who is provably a programmer (and
that can not be proven to be exceptional) has a small office
by default. The second default states that people who can
not be proven to be exceptional should be regarded as not
exceptional. The third default states the rule that people
who are provably both programmers and managers are exceptional.
The intuitive meaning of the query is that we
want to know the set of people having small offices.
For an example of a Boolean query, consider the same set
of defaults, plus the ground formula small-office(Peter).
The intuitive meaning of the query is that we want to know
whether Peter has a small office or not. 2
B. Semantics
The semantics of a DQL query Q is defined in terms of a default
theory obtained by instantiating the query over the domain
of the database, cf. [2], [14].
Let W be a database instance over the set of relation
schemata R = {R 1 , . . . , R n }. For each relation instance R i , let
R i |W be the set of tuples in W belonging to R i . We denote
as COMP(W) the completion of the database, i.e., the set
of the following ground literals:
R i (a 1 , . . . , a l i ), for each tuple ⟨a 1 , . . . , a l i ⟩ ∈ R i |W ;
¬R i (a 1 , . . . , a l i ), for each tuple ⟨a 1 , . . . , a l i ⟩ ∈ U^l i \ R i |W .
(This is the standard translation from databases-as-models
into databases-as-complete-theories (essentially), as shown
for example in [39]).
Let φ(x) be a formula whose free variables are among
x = x 1 , . . . , x n ; let σ = σ 1 , . . . , σ n be a list of objects from
U. Then, we denote by φ[x/σ] the result of simultaneously
substituting σ i for x i in φ, for all i = 1, . . . , n.
Let (B, D) be the pair of background knowledge and open
defaults of a (Boolean or I/O) query Q. We denote by
INST(B) the instantiation of B, which is the set of ground
formulas { φ[x/σ] | φ(x) ∈ B, σ a list of objects from U }.
Similarly, we denote by INST(D) the instantiation of D,
which is the set of ground defaults { d[x/σ] | d(x) ∈ D, σ a list of objects from U }.
Then, for any database W for R over domain U , we
denote by Q+W the default theory with defaults INST (D)
and first-order formulas COMP(W) ∪ INST(B).
Now, the credulous semantics of query Q is defined as follows.
If Q is an I/O query, i.e., the pair (B, D) plus a set
S of output relations, then the answer to
Q is the database instance W′ for S over domain U defined
as follows: for each S i ∈ S, S i |W′ is the set of all ground
tuples t over U such that S i (t) follows from Q+W under
credulous semantics, i.e., it is in at least one extension of
Q+W .
If Q is a Boolean query, then the answer is yes if the
distinguished ground formula φ in Q follows from Q+W
under the credulous default semantics; otherwise, the answer
is no.
The skeptical semantics of query Q is defined analogously.
(That is, if Q is an I/O query, S i |W′ contains
all tuples t such that S i (t) follows from Q+W under the
skeptical semantics; if Q is Boolean, the answer is yes if
φ follows from Q+W under the skeptical semantics and no
otherwise.)
Remark: We notice that, in the semantics for DQL queries,
two sorts of non-monotonic reasoning are involved: first of all, the
database is completed (COMP(W)); secondly, default rules are applied
(INST(D)). Completion of the database prohibits that new
positive facts are concluded about input predicates. This is a usual
requirement in a setting of strict relational databases.
In fact, the whole mechanism could be made homogeneous by using
default rules for obtaining completion of the database as well. One
way to achieve this is to use the following method:
for each extensional relation R i , introduce a new predicate R i ′
(of the same arity) that does not occur elsewhere;
build a set D 0 consisting of the following defaults (1 ≤ i ≤ n):
: ¬R i ′(x) / ¬R i ′(x),   R i ′(x) : / R i (x),   ¬R i ′(x) : / ¬R i (x),
and build a set W 0 as the set of all ground atoms R i ′(t), for
each tuple t ∈ R i |W and R i ∈ R.
It can be shown that the default theory Δ = ⟨D 0 ∪ INST(D), W 0 ∪ INST(B)⟩
provides the same answers (to both Boolean and I/O queries) as
Q+W. Intuitively, the R i ′ predicates serve to transfer the extension
of the input relations to the respective predicate letters R i . In fact,
in any extension of Δ, R i ′ is complete (i.e., every ground atom is true
or false) by the first default, and R i ′ must coincide with R i by the
second and third default.
Note that if one does not allow occurrence of extensional relations
in the conclusions of user defaults (a similar restriction is often made
in logical query languages, e.g. in DATALOG) and in B, then also
the default theory ⟨INST(D), W ∪ INST(B)⟩,
where W is seen as the set of ground atoms R i (t) for each tuple t ∈ R i |W
and R i ∈ R, provides the same answers as Q+W.
Note that the background theory can be used to state integrity constraints
on the input. This is e.g. possible by using a designated atom
\invalid", whose derivability indicates that the integrity constraints
are violated. E.g., a functional dependency constraint on relation R
can be implemented by the formula r(x; y)^r(x; z)^y 6= z ! invalid.
It appears that the credulous and the skeptical variant
of DQL are dual with respect to complements in their expressive
capability. In fact, all forthcoming results about
complexity and expressiveness of credulous DQL apply to
skeptical DQL in the dualized form, i.e., if each complexity
class is replaced by its complementary complexity class.
For instance, forthcoming Theorem 1 for credulous DQL
can be rephrased for skeptical DQL by saying that the
Boolean (skeptical) DQL queries precisely capture the class
Πp2. We will not carry out such rephrasings,
but leave this (as well as adaptations of proofs) to the interested
reader. Moreover, we will for brevity mainly use
credulous DQL in our examples and applications; notice
that credulous semantics and skeptical semantics coincide
if the Q + W has only one extension, which is often the
case.
Let us see how the semantics of DQL works in the example
shown in the previous subsection.
Example 5 (Mary's office, continued) We assume that
the domain is {Peter, Paul, Mary}, i.e., that the domain
is the same as the active domain. Then COMP(W) =
{prog(Peter), ¬mgr(Peter), prog(Paul), ¬mgr(Paul), prog(Mary), mgr(Mary)},
and INST(D) consists of the ground defaults

    prog(c) : ¬ex(c) / small-office(c)
    : ¬ex(c) / ¬ex(c)
    prog(c) ∧ mgr(c) : / ex(c)

for each c ∈ {Peter, Paul, Mary}.
The default theory Q + W has one extension, whose generating
defaults are:

    prog(Peter) : ¬ex(Peter) / small-office(Peter)
    prog(Paul) : ¬ex(Paul) / small-office(Paul)
    : ¬ex(Peter) / ¬ex(Peter)
    : ¬ex(Paul) / ¬ex(Paul)
    prog(Mary) ∧ mgr(Mary) : / ex(Mary)

The answer to the I/O query (under credulous as well as
skeptical semantics) is the relation instance:

    SMALL-OFFICE  NAME
                  Peter
                  Paul
In other words, Peter and Paul have small offices, and Mary
does not have a small office. The answer to the Boolean
query is yes. 2
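Continuing the literal-level sketch from Section III-B, the grounded theory Q + W of this example can be fed to the extensions routine directly; the encoding below (our names, with '~' standing for negation) reproduces the credulous answer {Peter, Paul}.

people = ['Peter', 'Paul', 'Mary']
COMP_W = {'prog(Peter)', '~mgr(Peter)', 'prog(Paul)', '~mgr(Paul)',
          'prog(Mary)', 'mgr(Mary)'}
INST_D = []
for c in people:
    INST_D.append(({'prog(%s)' % c}, {'~ex(%s)' % c}, 'small-office(%s)' % c))
    INST_D.append((set(), {'~ex(%s)' % c}, '~ex(%s)' % c))
    INST_D.append(({'prog(%s)' % c, 'mgr(%s)' % c}, set(), 'ex(%s)' % c))

exts = extensions(COMP_W, INST_D)          # exactly one extension here
answer = {c for c in people
          if any('small-office(%s)' % c in E for E in exts)}   # credulous
# answer == {'Peter', 'Paul'}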
V. Expressive power of DQL
In the previous section we have seen that default logic
is suitable as a query language, i.e., as a language for manipulating
relations. A very interesting question is the following:
which relations can be computed by DQL? Which
relations can not? In other words, what is the expressive
power of DQL?
In this section we show that the expressive power of DQL
is SO∃∀, the existential-universal fragment of second-order
logic. This result is derived for Boolean queries as well as
for general I/O queries. In order to do so, we first derive the
result on Boolean DQL queries and, by using this result,
then derive the result for general DQL I/O queries.
Having established the expressive power of DQL, we
will then analyze the expressive power of two natural sub-languages
of DQL, namely normal DQL and semi-normal
DQL, which allow only for normal and semi-normal defaults
in the query, respectively. These restrictions of DQL
correspond to the classes of normal and semi-normal default
theories, which constitute important fragments of default
logic.
A. Boolean DQL queries
We are now ready to prove our main result, which concerns
DQL Boolean queries. The following theorem says
that DQL is capable of all and only those Boolean queries
whose complexity is in the class Σp2. For the practitioner,
this provides very helpful information about what can be
expressed in DQL and what not. As soon as we know
the complexity of a query that we want to implement, we
can often immediately tell whether this query is feasible in
DQL or not. For example, all queries in Section II have
complexity in Σp2. As a consequence, they all can be expressed
in DQL (cf. Section VI). On the other hand, if
the complexity is higher than Σp2, then it is impossible (or
extremely unlikely) that the query can be implemented in
DQL. As a consequence, a query that tells whether a player
has a win in a given situation of the GO-GAME (stored in
the database) can not be written in DQL, since this is a
PSPACE-complete problem (cf. e.g., [40]).
Theorem 1: The Boolean DQL queries precisely capture
the class Σp2.
Notice that there are query languages that can express
part of the queries in Σp2 (and even Σp2-hard queries) but
fail to express very simple queries [15]. For such languages,
it is in general not easy to tell whether a certain (even
simple) query can be expressed, and leaves the programmer
with the uncertainty whether the query can be implemented
at all.
From the result on the expressiveness of DQL, we can
immediately derive a result on the complexity of DQL. In
particular, we obtain the following characterization of the
data complexity, i.e., of evaluating a xed query Q over
varying databases. The following theorem tells us that
this is a p
-complete problem in general. Roughly speak-
ing, this means that query evaluation is still NP-complete,
even if we have an oracle (i.e., a subprogram) for solving
NP-complete problems available and do not account
for calls to this oracle. Practically speaking, this means
that we can not reduce query evaluation to solving an NP-complete
problem (e.g., checking classical satisability of
a set of propositional clauses) eciently. In particular, an
ecient reduction to integer programming methods, which
have been investigated in the context of logic programming
[41], is not feasible.
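For reference, the oracle characterization behind this informal statement is the standard identity (general complexity-theoretic background, not a claim specific to the paper):

\[ \Sigma_2^p \;=\; \mathrm{NP}^{\mathrm{NP}} . \]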
Theorem 2: The data complexity of Boolean DQL is Σ_2^p-complete.
We remark that complexity grows a lot if the query
is not fixed, i.e., in the case of evaluating a given query on
a given database (combined complexity). In fact, there
is an exponential increase in complexity, and the combined
complexity can be shown to be complete for the
class NEXPTIME^NP, which is the exponential analogue
of Σ_2^p.
B. DQL I/O queries
Now we consider DQL queries that compute output rela-
tions. Using Theorem 1, we show the following more general
theorem, which tells that all and only the I/O queries
with complexity in Σ_2^p can be expressed in DQL. As in the
case of Boolean queries, this gives again valuable information
about what can be implemented and what not. For
example, we immediately know that a query computing the
transitive closure of a graph can be implemented in DQL,
since its complexity is polynomial.
Theorem 3: A database query is Σ_2^p-recognizable if and
only if it is definable as a DQL I/O query.
C. Normal DQL queries
Fragments of default logic resulting by imposing syntactical
restrictions on the default rules, have been considered
already by Reiter in his seminal paper [11].
Common restrictions are rules of the form α(x) : β(x) / β(x),
which are called normal default rules, and rules of the form
α(x) : β(x) ∧ γ(x) / γ(x), which
are called semi-normal default rules; a default theory is
called normal (resp. semi-normal) iff every default is normal
(resp. semi-normal).
The classes of normal and semi-normal default theories
are important and well-studied fragments of default logic.
In particular, normal default theories model jumping to a
conclusion if and only if it is possible. Semi-normal defaults
allow for expressing priorities between otherwise normal
defaults. In many applications default knowledge can be
represented by a normal or semi-normal default theory (cf.
e.g. [42] for an extensive treatment.)
This motivates considering the sublanguages of DQL
which correspond to normal and semi-normal default the-
ories, respectively.
Definition V.1: A default query Q is called normal (resp.
semi-normal) iff each default in D is normal (resp.
semi-normal), i.e., of the form displayed above.
Clearly, normal DQL queries are also semi-normal; normal DQL is
thus a proper syntactical restriction of semi-normal DQL.
Example (Students and employees) Assume that
R consists of the two relation schemes
MARRIED and STUDENT, which both have a single
attribute NAME. The I/O query specified by (B, D)
is used to compute instances of the output relations
ADULT(NAME) and EMPLOYEE(NAME).
The background knowledge is that married people are
adults. The first default in D states that students are typically
not employed, and the second that adults are usually
employed. Consider the following instance W of R, where
the domain is {John, Sue, Betty}:
MARRIED = {Sue},   STUDENT = {John, Sue}.
The query computes the following output:
ADULT = {Sue},   EMPLOYEE = {Sue}.
The question arises whether normal DQL and semi-normal DQL
are less powerful than general DQL. It turns out that normal
DQL is already as powerful as general DQL, and thus
can express all queries in Σ_2^p. This can be shown by slight
modifications of the default theories constructed in the
proofs of Theorems 1 and 3. Again, these theorems sharply
describe what can be implemented in this language and
what not; the advantage is that from the complexity of a
query, one can mostly immediately tell whether it can be
implemented or not (although this does not give us a clue
of what the query program may look like).
Theorem 4: The Boolean normal DQL queries precisely
capture the class Σ_2^p.
Theorem 5: The normal I/O DQL queries precisely capture
the class of Σ_2^p-recognizable queries.
As an immediate corollary, we obtain analogous results
for semi-normal DQL.
Corollary 1: The semi-normal Boolean (resp. I/O) DQL queries
precisely capture the class Σ_2^p (resp. the class
of Σ_2^p-recognizable queries).
We remark that Theorems 4 and 5 can also be proved, though
less instructively, by combining results in [37], [43] and recent
results of some of the authors [15].
VI. Applications of DQL
In this section we will show how to write DQL queries
solving some of the problems mentioned in Section II.
Example 1 (Electric circuit troubleshooting, continued)
Rules on causes and effects of the kind (1)
are stored in an appropriate relation.
Note that we can assume that all rules are of the uniform
type C1 ∧ C2 → D1 ∨ D2,
since we can encode rules with more than two causes and/or
effects easily by a small number of such rules using new
dummy facts. A conjunction C1 ∧ C2 (resp. a disjunction D1 ∨ D2)
can be represented by a new fact F (resp. G),
together with rules linking the new fact to its constituents.
Then, for instance, the rule C1 ∧ C2 ∧ C3 → D1 ∨ D2 ∨ D3 can be
rewritten to F ∧ C3 → G ∨ D3; it should be clear how other
rules can be rewritten. Note that a general clause set can
be transformed by this method into a uniform clause set in
polynomial time.
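As an illustration of this normalization step (ours, not taken from the paper; the fact names are only placeholders), the following sketch splits a rule with arbitrarily many causes and effects into rules with at most two causes and two effects by introducing fresh dummy facts:

import itertools
_fresh = itertools.count()

def normalize(causes, effects):
    # Split a rule  C1 & ... & Cm -> D1 | ... | Dn  into rules with at most
    # two causes and two effects, using fresh dummy facts F*/G*.
    causes, effects, out = list(causes), list(effects), []
    while len(causes) > 2:                       # C1 & C2 -> F, then continue with F
        f = "F%d" % next(_fresh)
        out.append(([causes[0], causes[1]], [f]))
        causes = [f] + causes[2:]
    while len(effects) > 2:                      # G -> D1 | D2, then continue with G
        g = "G%d" % next(_fresh)
        out.append(([g], [effects[0], effects[1]]))
        effects = [g] + effects[2:]
    out.append((causes, effects))
    return out

# C1 & C2 & C3 -> D1 | D2 | D3  becomes  C1 & C2 -> F0,  G1 -> D1 | D2,  F0 & C3 -> G1 | D3
print(normalize(["C1", "C2", "C3"], ["D1", "D2", "D3"]))

Each original rule of length r produces only O(r) binary rules, which is what makes the overall transformation polynomial.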
It is assumed that symbols T and F represent the empty
cause (which is intuitively always true) and the empty effect
(which is always false), respectively, and that there are
special relations TRUE(CAUSE) and FALSE(EFFECT)
which contain always only the single tuple T and F , respectively
In our example, the following relation instances describe
rules for causes and effects of the circuit in Figure 1:
RULE:
  B-high        T        F1-melts    F2-melts
  R1-low        R2-low   F1-melts    F2-melts
  F1-melts      T        Light-off   F
  F2-melts      T        Light-off   F
  Bulb-broken   T        Light-off   F
The meaning of the first tuple in RULE is: whenever the
battery's voltage is too high, one of the fuses or both of
them might melt, even if the resistors are OK. The second
tuple states that melting of the fuses will occur whenever both
resistors are low (cf. rule (2)). The other tuples state that
a melted fuse or a broken bulb causes the control light to
be off.
The set of possible causes is stored in another relation.
As already mentioned, not necessarily all of the names cited
in the first three columns of the above relation are possible
causes.
HYPOTHESIS:  FACT
             B-high
             Bulb-broken
This organization of knowledge is useful when one wants to
decouple static knowledge (such as the knowledge on the
circuit) and dynamic knowledge (such as knowing whether
a specific device is malfunctioning).
The set of observations we are making in a specific
situation is stored in a third relation, OBSERVATION(SYMPTOM):
OBSERVATION:  SYMPTOM
              Light-off
As already mentioned, an explanation is a minimal set S of
facts (wrt inclusion), taken from the relation HYPOTHESIS,
such that their validity implies, via the cause-effect
relationships in RULE, all facts in the relation OBSERVATION,
i.e., in every truth value assignment to the remaining
facts which is compatible with all cause-effect relationships,
the facts from OBSERVATION must be true.
In our example, the set {Bulb-broken} is an explanation:
all of its elements occur in the HYPOTHESIS relation, and
the fact Light-off is explained by the rule represented by
the last tuple in RULE. Note that the set {Bulb-broken,
R2-low} is not an explanation, since it is not minimal wrt
inclusion. {B-high} is the only alternative explanation
possible.
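To make this notion concrete, here is a small brute-force sketch (ours, not the paper's default-logic encoding) that enumerates the minimal explanations for rule, hypothesis and observation relations like the ones above; each rule is read as "if all causes hold then at least one effect holds", and the padding symbols T and F are simply omitted:

from itertools import combinations, product

RULES = [({"B-high"}, {"F1-melts", "F2-melts"}),
         ({"R1-low", "R2-low"}, {"F1-melts", "F2-melts"}),
         ({"F1-melts"}, {"Light-off"}),
         ({"F2-melts"}, {"Light-off"}),
         ({"Bulb-broken"}, {"Light-off"})]
HYPOTHESES = ["B-high", "Bulb-broken"]
OBSERVATIONS = {"Light-off"}
FACTS = sorted({f for cs, es in RULES for f in cs | es})

def entails(assumed):
    # True iff every assignment that satisfies all rules and makes the assumed
    # facts true also makes every observed fact true.
    for bits in product([False, True], repeat=len(FACTS)):
        val = dict(zip(FACTS, bits))
        if not all(val[f] for f in assumed):
            continue
        if any(all(val[c] for c in cs) and not any(val[e] for e in es)
               for cs, es in RULES):
            continue                              # assignment violates a rule
        if not all(val[o] for o in OBSERVATIONS):
            return False                          # counter-model found
    return True

explanations = []
for k in range(len(HYPOTHESES) + 1):              # smallest sets first
    for s in combinations(HYPOTHESES, k):
        if entails(set(s)) and not any(set(m) <= set(s) for m in explanations):
            explanations.append(s)
print(explanations)                               # [('B-high',), ('Bulb-broken',)]

This brute force is exponential in the number of facts; the point of the default-logic encoding described next is that the same computation can be expressed declaratively as a DQL query.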
The set of explanations of a diagnosis instance can be
computed by means of the following background knowledge
B and open defaults D:
The intuitive meaning of t(x) is \x is true", i.e., it can be
proven using knowledge on the circuit and observations,
and the intuitive meaning of exp(x) is \x is part of the
explanation". The other predicates exp 0 (x; y), t 0 (x; y), and
remove(x) are used for assuring the minimality of an explanation
(we will explain this in more detail below).
The first and the second default in D represent a choice
whether a fact F is considered to be in the explanation or
out; the first formula in B represents the constraint that in
the former case, F must appear in HYPOTHESIS and is
considered to be true (by assumption). The third default
in D clarifies how a fact can be inferred to be true using the
diagnostic rules; the fourth default states that each fact F
in OBSERVATION must be provable. (For if not, i.e., if it is
consistent to assume the negation ¬F, then we can infer a
contradiction.)
The remaining part of the default theory serves for assuring
that the explanation is indeed minimal wrt inclusion;
it exploits the simple fact that a set of facts S entailing
all observations is an explanation if, for any fact F from
S, after removal of F no longer all observed facts are entailed
(cf. [22] for the easy proof). Now, for each fact F,
the predicate exp'(·, F) represents, by the second formula in B, the set
S described by exp after removal of F; the last formula in
B states that the facts from this set are considered to be
true. The fifth default in D produces a copy of the diagnostic
rules for F, which is used to verify that, if F occurs
in S, S \ {F} does not entail all observations. This test is
implemented by the last two defaults in D: if F occurs in
S, then ¬remove(F) must be provable.
It can be shown that the extensions of the instantiated
default theory correspond one-to-one to the explanations,
such that a fact F belongs to an explanation exactly
if exp(F) belongs to the corresponding default extension.
Consequently, the credulous DQL I/O query with output
relation EXP(FACT) computes all the facts that belong to
at least one explanation of the diagnostic problem, i.e., the
relevant facts. Notice that the analogous cautious version of
this DQL I/O query computes all facts necessary for an explanation,
i.e., the facts that belong to every explanation.
As a final remark, we note that requiring minimality
of explanations is not a source of complexity. By the results
in [22], Σ_2^p-hardness of deciding relevance holds also if
non-minimal explanations are admissible; in fact, a default
query for this case would be even much simpler. 2
Example 2 (Strategic companies, continued) A possible
situation for products and companies is described by
the following database instance:
PRODUCERS
PRODUCT COMPANY #1 COMPANY #2
Pasta Barilla Saiwa
Tomatoes Frutto Barilla
Wine Barilla -
Bread Saiwa Panino
Here, the symbol "-" means that the entry is void (formally,
we represent this by a special object in the domain,
which is specified by a relation VOID(OBJECT)).
In such a situation, both Barilla and Saiwa are strategic,
because {Barilla, Saiwa} produces all products, while
neither {Barilla} nor {Saiwa} does so. On the other hand,
Frutto is not strategic.
The following relation instance describes control of companies
over other companies:
CONTROL
CONTROLLED CONT #1 CONT #2 CONT #3
Frutto Barilla Saiwa -
The meaning of the tuple is that companies Barilla and
Saiwa together have control over Frutto.
Taking into account that Frutto can be sold only if either
Barilla or Saiwa is also sold, the minimal sets of companies
that produce all products change completely. For
example, {Barilla, Saiwa} is no longer such a set, while
{Barilla, Saiwa, Frutto} is.
In the case where there are no controlled companies, a manager
can easily express a DQL Boolean query whose answer tells
whether a specific company C is strategic, by writing the set of
open defaults D:
   producers(x, y, z) : ...
   strat(y) ∨ strat(z)
and adding strat(C) as the query formula (B is empty). The
intuitive meaning of the defaults is that for each product
x at least one of the producers y, z is a strategic company,
and that companies are non-strategic by default. The
answer to the above Boolean query is yes precisely if the
company C is strategic.
If we add the default
strat(w)
to D, then the Boolean query having strat(C) as its formula gives
the desired answer for the case in which there are controlled
companies.
As a final remark, we note that in this example we could
allow unbounded numbers of producers for each product
and controllers for each company, although the queries
would get more involved. 2
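For intuition, the following brute-force sketch (ours; the paper's own encoding is the default theory above) enumerates the subset-minimal sets of companies that produce all products and are closed under control, and hence the strategic companies, for the instance shown:

from itertools import combinations

PRODUCERS = {"Pasta": {"Barilla", "Saiwa"},
             "Tomatoes": {"Frutto", "Barilla"},
             "Wine": {"Barilla"},
             "Bread": {"Saiwa", "Panino"}}
CONTROL = {"Frutto": {"Barilla", "Saiwa"}}     # controlled company -> its controlling companies
COMPANIES = sorted(set().union(*PRODUCERS.values()) | set(CONTROL))

def admissible(s):
    # s must cover every product, and be closed under control:
    # if all controllers of c are in s, then c must be in s as well.
    covers = all(PRODUCERS[p] & s for p in PRODUCERS)
    closed = all(c in s for c, ctrl in CONTROL.items() if ctrl <= s)
    return covers and closed

candidates = [set(c) for n in range(len(COMPANIES) + 1)
              for c in combinations(COMPANIES, n) if admissible(set(c))]
minimal = [s for s in candidates if not any(t < s for t in candidates)]
strategic = sorted(set().union(*minimal)) if minimal else []
print(minimal)      # the subset-minimal admissible sets, e.g. {Barilla, Saiwa, Frutto}
print(strategic)    # companies belonging to at least one such set

The brute force ranges over exponentially many candidate sets; the point of the complexity results is that no essentially better uniform method is expected to exist.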
Example 3 (Maximal trust, continued) We can
straightforwardly represent agreements in our database
of companies by using a relation AGREE(COMP#1,
COMP#2), where a tuple (Barilla, Saiwa) in the relation
stands for an agreement between the companies Barilla and
Saiwa.
The query "which companies belong to a maximal
trust?" (which computes an output relation MAX(COMP)
listing the searched companies) is Δ_2^p-recognizable, where
Δ_2^p is the class of problems decidable by a deterministic
Turing machine in polynomial time with an oracle for an
NP problem (cf. [23]). Thus, since Δ_2^p ⊆ Σ_2^p, the query is
Σ_2^p-recognizable, and can be expressed in DQL. In fact, a
little extra work shows that the query is even Δ_2^p[O(log n)]-recognizable,
where Δ_2^p[O(log n)] is the subclass of Δ_2^p in
which the number of oracle calls is bounded by O(log n),
where n is the size of the input (cf. [23]).
On the other hand, we have that the maximal trust
query can not be expressed in DATALOG¬_stable unless
NP = Δ_2^p[O(log n)], which is widely assumed to be false. Indeed,
computing the maximal trust query is Δ_2^p[O(log n)]-hard,
since deciding, given an instance of AGREE and a certain
company C, whether C belongs to a maximal trust is a
Δ_2^p[O(log n)]-hard problem; this follows trivially from the
Δ_2^p[O(log n)]-hardness of deciding whether a given node belongs
to some clique of maximum size in a graph [44]. 2
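For intuition only, the underlying combinatorial task is maximum-clique membership over the AGREE relation; a naive exponential sketch (ours, with a hypothetical AGREE instance, and assuming that a maximal trust is a trust of maximum size, as the hardness reference suggests) could look as follows:

from itertools import combinations

AGREE = {("Barilla", "Saiwa"), ("Saiwa", "Panino"),
         ("Barilla", "Panino"), ("Frutto", "Saiwa")}      # hypothetical agreements
COMPANIES = sorted({c for pair in AGREE for c in pair})
adj = {frozenset(p) for p in AGREE}

def is_trust(group):
    # a trust: a set of companies that pairwise have an agreement (a clique in AGREE)
    return all(frozenset(p) in adj for p in combinations(group, 2))

cliques = [set(g) for n in range(1, len(COMPANIES) + 1)
           for g in combinations(COMPANIES, n) if is_trust(g)]
best = max(len(c) for c in cliques)
MAX = sorted(set().union(*(c for c in cliques if len(c) == best)))
print(MAX)       # the companies that belong to some maximum-size trust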
VII. Conclusions
In this paper we have dened DQL, a query language for
relational databases based on default logic. The expressiveness
and complexity of DQL have been investigated
both for Boolean queries and for I/O queries. The results
we have shown are not only of theoretical importance:
we have presented queries which are useful in practice
that can be handled with DQL, but can not with other
query languages based on non-monotonic logics such as
DATALOG¬_stable.
Let us compare and contrast DQL's expressive power
in more detail with expressive power and computational
paradigms for logic programs with stable semantics; the
results are displayed in Figure 3.
The expressive power of DATALOG¬_stable under credulous
semantics is NP, i.e., precisely those queries can be
formulated by general logic programs using stable mod-
els, whose complexity is in NP (this follows by the results
in [4]). This expressiveness result has been preceded by
complexity results for credulous reasoning from propositional
logic programs, which is NP-complete for the stable
semantics [37], [45].

[Fig. 3. Expressiveness of stable models (noncollapsing complexity classes); labels include: polynomial-time queries, choice, L-stable]

These results also explain that it is
possible to reduce the computation of stable models efficiently
to linear programming, which has been shown as
a promising approach for implementing stable model semantics
[41]. On the other hand, the expressive power of
DQL is Σ_2^p. This means that strictly more queries can be
formulated, and that linear programming methods can not
be efficiently applied for implementing DQL queries (unless
complexity classes collapse that are strongly believed
to be different). Disjunctive logic programs are endowed
with the same expressive power as DQL: As shown in [15],
the expressive power of disjunctive DATALOG programs
using stable models (DATALOG∨,¬_stable) is Σ_2^p. The same
expressive power is also available with so-called least undefined
partial stable (L-stable) models [46] for normal logic
programs (DATALOG¬_stable), which are a relaxation of
the concept of stable model.
An alternative to allowing negation in DATALOG in order
to increase expressive power is the choice operator introduced
in [47]. Roughly, the choice operator allows for
atoms choice(x, y) in rule bodies, which choose precisely
one instantiation of the y variables for each instantiation of
the x variables. Since several possibilities exist for that in
general, the language computes non-deterministic queries
(i.e., several outputs are possible) rather than deterministic
ones (which we consider here). It is shown in [48] that the semantics
of the choice operator is subsumed by stable models,
and in [49] that DATALOG≠ augmented with dynamic
choice expresses exactly the non-deterministic queries computable
in polynomial time. The deterministic fragment of
this language (DET-DATALOG) expresses precisely
all queries computable in polynomial time (cf. [50]);
however, it can not be recognized. Notice that it is conjectured
that no query language for precisely the polynomial-time
queries exists (see [29]).
We conclude by outlining possible issues for future work
on DQL. In the definition of a query (Section IV-A), open defaults
are quantifier- and function-free. One direction is a
generalization of DQL allowing first-order formulas with
quantifiers and/or functions. Another direction is identifying
tractable fragments of DQL, in particular fragments
which resemble a subclass of the polynomial-time
computable queries, and determining their expressive powers.
A good starting point for this project is [38], where
several important tractable fragments of default logic have
been identified.
Appendix
Theorem 1 The Boolean DQL queries precisely capture
the class Σ_2^p.
PROOF The easy part is to show that every Boolean
DQL query can be expressed as a Σ_2^p-recognizable Boolean query.
To this end, it is sufficient to prove that the data complexity
of DQL (i.e., regarding the query as being fixed)
is in Σ_2^p. We notice that the semantics of DQL given in
Section IV-B transforms query answering into credulous
reasoning in a propositional default theory. The transformation
is clearly polynomial in the size of the database. It
has been proven in [12], [13] that the problem of credulous
inference in propositional default theories is in Σ_2^p; hence
this part of the proof is complete.
The more difficult part is to show that each Boolean
query expressible as a sentence of SO∃∀ can be expressed
in DQL. As we already noticed in the introduction, we can
not take advantage of the fact that propositional credulous
default reasoning is Σ_2^p-hard, because the expressiveness of
a query language is not necessarily the same as its complexity.
Without loss of generality, we assume that sentence (3)
is of the form
(∃S)(∀T)(∃x)(∀y) φ(x, y),   (4)
where S, T are lists of predicate symbols, x, y are lists of
individual variables, and φ is a first-order formula in which
no function symbol or quantifier occurs. The passage from
(3) to (4), which is well known in logic, is justified in the
report [51].
Now we have to show that for each query Q_SO∃∀ of the
form (4) there is a DQL query Q_DQL such that the two
queries give the same answer on all possible database instances
W over the unquantified relations in (4).
We outline the idea for Q_DQL. The formula (∃x)(∀y) φ(x, y) is
encoded as follows. We use a predicate
< that defines a linear order on the set of all y-tuples,
together with associated predicates F(y), S(y, y'), and L(y),
which state that y is the first tuple, y' is the successor of
y, and y is the last tuple in <, respectively. Furthermore,
we use a predicate Z(x, y) which intuitively states that for
each tuple y' from the initial segment of < up to y, the
formula φ(x, y') holds. A designated propositional letter
A indicates if for some x-tuple a, Z(a, b) is true, where
b is the last y-tuple. Then, (∃x)(∀y) φ(x, y) will be true
just in case A is derivable. We encode this by default rules,
such that in every extension that contains A, a valuation
for the S-predicates is defined and for every valuation of
the T-predicates some Z(a, b) is true, i.e., the sentence (4)
is true over the underlying database W.
Formally, Q_DQL consists of the ground formula A and
the pair (B, D) defined as follows. The background knowledge B
contains a set of axioms which state that < is a linear order
on the y-tuples (equality between tuples is used in these axioms).
The set D contains the following open defaults:
for each predicate P from S and for <, defaults that guess
whether P(y) or ¬P(y) holds and whether y < y' or
¬(y < y') holds; rules for the associated predicates
F, S, and L; and rules for the
derivation of A (i.e., checking whether (∃x)(∀y) φ(x, y) is true).
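As an illustration of the standard ingredients referred to here, a generic rendering of linear-order axioms and of guessing defaults is shown below; these are textbook forms and should not be read as a verbatim reproduction of the original construction:

\[
\begin{aligned}
& (y < y' \wedge y' < y'') \rightarrow y < y'', \qquad
  y \neq y' \rightarrow (y < y' \vee y' < y), \qquad
  \neg (y < y), \\[3pt]
& \frac{\;:\; P(y)}{P(y)}, \qquad
  \frac{\;:\; \neg P(y)}{\neg P(y)}, \qquad
  \frac{\;:\; y < y'}{y < y'}, \qquad
  \frac{\;:\; \neg(y < y')}{\neg(y < y')} .
\end{aligned}
\]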
Let W be any database instance, and consider the
default theory Q_DQL + W.
It is easy to see that this theory has only consistent extensions.
(Formally, this can be easily proved by Lemma 1.)
We claim that the atom A belongs to an extension of
Q_DQL + W if and only if W satisfies (4).
"⇐": Assume that W satisfies (4); that is,
(∃x)(∀y) φ(x, y) is true over W
for some valuation S0 of the S predicates. Define a set
F of formulas as follows. Let <0 be an arbitrary linear
order of all y-tuples, and let F0 be the associated
extension of F, and similarly for S and L. In what follows, let a, a', b, b'
denote tuples of appropriate arities over the domain. The
set F contains the literals that fix these valuations.
We claim that E = Th(F) is an extension of Q_DQL + W such that A ∈ E.
To show this, we first note that E is consistent.
Indeed, extend the valuations given by W and F by
assigning "true" to A, by letting Z0 be the set of
all possible tuples, and by letting the remaining predicates
in T have an arbitrary extension. This valuation satisfies F.
Moreover, E is an extension of Q_DQL + W. This can be easily
shown from the definition of extension: we obtain that
the defining fixpoint condition holds. Hence, E is an extension
of Q_DQL + W.
To prove the claim, it remains to show that A ∈ E. Since
E = Th(F), it suffices to show that A is true in every
valuation that satisfies F. Consider an arbitrary valuation
that satisfies F. Let T0 be the valuation of T and Z0 the
valuation of Z. Then, from (5) we have that (∃x)(∀y) φ(x, y) is true.
Let a be an arbitrary tuple such that (∀y) φ(a, y) is true.
By finite induction on <0, we show that (a, b) ∈ Z0, for all
b.
(Basis). Let b0 be the first tuple of <0. Since b0 ∈ F0, we
have that Z(a, b0) is
true. Thus, (a, b0) ∈ Z0.
(Induction). Assume the statement holds for tuple b. We
show that it holds also for b', where b' is the successor of b. We have
that, by the induction
hypothesis, (a, b) ∈ Z0, and that φ(a, b') is true. Hence,
by (6), it follows that Z(a, b') is true, i.e., (a, b') ∈ Z0. This shows
the induction case.
In particular, we have that (a, b1) ∈ Z0, where b1
is the last tuple in <0, i.e., b1 ∈ L0.
However, Z(a, b1) and L(b1) together imply
that A has value "true" in the valuation. Consequently, we
have shown that A ∈ E.
This concludes the "⇐" part of the proof.
"⇒": Let E be an extension of Q_DQL + W such that A ∈ E. Notice
that E is consistent.
E defines a valuation S0 for the S predicates, i.e., for
each P from S, we have P(a) ∈ E or ¬P(a) ∈ E for each
tuple a. This follows since the defaults : P(a) / P(a) and
: ¬P(a) / ¬P(a)
are in INST(D). Moreover, E defines a valuation <0 for
< such that <0 satisfies the axioms for a linear order of all
tuples of the arity of y.
Furthermore, it follows from the instantiated default
rules for the predicates associated to < that E defines a
total valuation of F, S, and L, and that ¬F(a) ∈ E iff a
is not the first tuple in <0, that ¬S(a, a') ∈ E iff a' is not
the successor of a in <0, and that ¬L(a) ∈ E iff a is not
the last tuple in <0.
We claim that
To prove this, assume this is false. Hence, there exists a
valuation T 0 of T such that
We extend this valuation to a model of E that has A false.
Let, for each tuple a for x, m(a) be the first tuple b for y
in <0 such that φ(a, b) is false with respect to W, S0, and T0. Notice that m(a)
exists for each a.
Define a valuation Z0 for Z by Z0 = {(a, b) : b <0 m(a)}, and assign A the value "false".
This valuation of the predicates defines a model of E.
To prove this, it is by Lemma 1 sufficient to show that the
valuation is a model of COMP(W) and of the conclusions
G of the defaults in GD(E, ·). Since COMP(W) and the
remaining formulas are clearly satisfied, we are left to show that each such G
is satisfied. It is easy to see that every G in which Z does
not occur is satisfied. Consider any remaining G (i.e., a
conclusion of a default for deriving A). Then, we have
three cases:
(a) The conclusion concerns the first tuple in <0; then
the formula is certainly satisfied.
(b) The conclusion concerns a tuple b' which is the
successor of some tuple b in <0. By considering the three possible cases,
it is readily checked from the definition of Z0 that the
formula is satisfied.
(c) The conclusion is A. Then, b is the last tuple in <0. By
definition of Z0, we have that Z(a, b) is false; hence,
since A is false, the formula is satisfied.
Thus, the valuation satisfies COMP(W), the conclusions of the generating defaults,
and hence also E. Since A is false in this model of E, we
have that A ∉ E. This is a contradiction, however. Hence,
claim (7) is proved. Now claim (7) means that W satisfies (4).
This concludes the "⇒" part of the proof.
Remark: All defaults in D can be made prerequisite-free
by deleting the prerequisite and rewriting the conclusion
as an implication whose antecedent is the former prerequisite. 2
Theorem 2 The data complexity of Boolean DQL is Σ_2^p-complete.
PROOF From Theorem 1, we have that every Boolean
DQL query defines a Σ_2^p-recognizable database property; this gives the
membership part. For the hardness part, we notice that
for every SO∃∀ sentence as in (3), an equivalent sentence
of the form (4) in the proof of Theorem 1 can be constructed
(cf. Appendix), for which the equivalent default
query Q_DQL can be easily constructed (even in polynomial
time). Consequently, the problem of deciding whether a
fixed sentence (3) is valid in a given database instance W,
which is Σ_2^p-hard, is transformable to a Boolean DQL query
in polynomial time. 2
Theorem 3 A database query is Σ_2^p-recognizable if and
only if it is definable as a DQL I/O query.
PROOF "⇐": Deciding whether a tuple t belongs to a
certain output relation S_i ∈ S transforms in polynomial
time into credulous reasoning in a propositional default
theory (cf. proof of Theorem 1). Hence, Σ_2^p-recognizability
follows.
"⇒": Let q be a Σ_2^p-recognizable query. We
define from q a database property P on the relation scheme
R ∪ S in the following way: for each instance W of R ∪ S,
P(W) is true if and only if S_i|W ⊆ S_i|q(W_R) for each i,
where W_R denotes the restriction of W to the
relations of R. That is, P(W) is true iff each tuple t that
belongs to S_i|W is computed by q on the instance W_R of
R.
Notice that P is in fact a database property. Moreover, it
is easy to see that P is Σ_2^p-recognizable. Consequently, by
Theorem 1, P can be expressed by a Boolean DQL query
Q^P_DQL. Without loss of generality, we may assume that
Q^P_DQL has the form of the query Q_DQL constructed in the
proof of Theorem 1; notice that the database relations of
Q^P_DQL are R ∪ S. From Q^P_DQL we construct a DQL I/O
query Q^q_DQL as follows. The background knowledge is the
one of Q^P_DQL, i.e., B, and the open defaults D' are defined
as described next.
The database relations are given by R and the output relations
by S.
The rules guessing S_i(x) or ¬S_i(x) enforce a total valuation of
the S_i predicates. Intuitively, these defaults simulate all
extensions of an instance W of R to an instance W' of
R ∪ S, which is then processed by the defaults of query
Q^P_DQL. The default with justification ¬A cuts extensions in
which the atom A is not contained; hence, only default
extensions E_W' for W' are left such that P(W') is true.
The query Q^q_DQL collects under the brave semantics all
tuples t for S_i such that S_i(t) belongs to some default
extension E_W'; from the definition of P, this means that
exactly those tuples t are collected that q computes for S_i
on the database W. In other words, Q^q_DQL computes on
database W the same as q, i.e., Q^q_DQL defines q in DQL.
More formally, we show the following. Let W be an
instance of R, and dene the set of formulas G by
where F is as in the proof of Theorem 1. Let W 0 be the
extension of W to R
is an extension of Q P
it is easy to see from the denition of extension that E is
also an extension of Q q
DQL +W . It follows that
To show that Q^q_DQL and q compute the same on W, it
thus remains to show that
Let E be an extension of Q^q_DQL + W. Notice that A ∈ E.
Then, it is easy to see that E is also an extension of
Q^P_DQL + W', where W' is the extension of W to R ∪ S such
that S_i|W' = S_i|E. Consequently, P(W') is true.
This means that S_i|W' ⊆ S_i|q(W).
It follows that S_i|Q^q_DQL(W) ⊆ S_i|q(W).
Thus we have shown that Q^q_DQL defines q, which concludes
the "⇒" part of the proof. This proves the theorem. 2
Theorem 4 The Boolean normal DQL queries precisely
capture the class Σ_2^p.
PROOF Reconsider the Boolean DQL query Q_DQL constructed
in the proof of Theorem 1. Let Q'_DQL be the
DQL query resulting from Q_DQL by replacing (B, D) with
(B', D'), which are obtained from B and D, respectively,
by removing every non-normal default for
the associated predicates and by adding the corresponding implications
to B.
Notice that each such default's conclusion is a Boolean formula built on atoms
from <; since the defaults guessing y < y' and ¬(y < y') belong
to both D and D', it is easy to see that the default theories
Q_DQL + W and Q'_DQL + W have the same extensions for
every instance W, and thus express the same query.
Now let D'' be the result of replacing in D' every non-normal
default, i.e., every rule for deriving A, by a corresponding normal default, and
let Q''_DQL be the query obtained from Q'_DQL by replacing
D' with D''. Notice that Q''_DQL is a normal DQL query.
One can easily show that, for every database instance W
of R, a set E is an extension of Q''_DQL + W if and only if
E is an extension of Q'_DQL + W.
Hence, it follows that Q''_DQL + W has an extension E
such that A ∈ E iff W satisfies (4). The
result follows. 2
Theorem 5 The normal I/O DQL queries precisely capture
the class of Σ_2^p-recognizable queries.
PROOF We show this by a modification of the construction
in the proof of Theorem 3.
Without loss of generality, we assume that the Boolean
DQL query Q^P_DQL for expressing P has the form of the
query Q''_DQL in Theorem 4 (instead of Q_DQL in Theorem
1). Then, by analogous lines as in the proof of Theorem
3, we obtain that the I/O query Q^q_DQL, constructed
from Q^P_DQL by adding the defaults that guess the S_i predicates and
the default with justification ¬A, expresses the query q.
Now let Q be the query obtained from Q^q_DQL by replacing
each occurrence of every S_i ∈ S by S'_i, a new
predicate of the same arity, and by replacing the default
with justification ¬A by suitable normal defaults.
Let W be an instance of R. Notice that A belongs to
every extension of Q^q_DQL + W, and that an atom S_i(t)
belongs to an extension E of Q + W iff S'_i(t) belongs to
E. Thus, it is easy to see that a set E is an extension of
Q^q_DQL + W only if a corresponding set
is an extension of Q + W. On the other hand, if E is an
extension of Q + W such that A ∈ E, then a corresponding set
is an extension of Q^q_DQL + W.
Thus, it follows that Q
computes on W the same as Q^q_DQL.
Consequently, Q and Q^q_DQL represent the same query.
Since Q is a normal DQL I/O query, the result follows. 2
Acknowledgements
The authors are grateful to Torsten Schaub for interesting
comments on semantics of DQL. They also appreciate
the very useful comments of the anonymous reviewers,
which helped to improve the readability of the paper. The
first author has been partially supported by ESPRIT Basic
Research Action 6810 COMPULOG II and by Progetto Finalizzato
Informatica of the CNR (Italian Research Coun-
cil). The second and third authors have been partially
supported by the Christian Doppler Laboratory for Expert
Systems.
--R
Readings in Model-Based Diagnosis
Abductive Inference Models for Diagnostic Problem Solving
Principles of Database and Knowledge Base Systems
Foundations of Databases
Computational Complexity
Ellis Horwood Limited
--TR
--CTR
Thomas Eiter , Axel Polleres, Towards automated integration of guess and check programs in answer set programming: a meta-interpreter and applications, Theory and Practice of Logic Programming, v.6 n.1-2, p.23-60, January 2006
Binding Propagation Techniques for the Optimization of Bound Disjunctive Queries, IEEE Transactions on Knowledge and Data Engineering, v.15 n.2, p.368-385, February
Simona Perri , Francesco Scarcello , Nicola Leone, Abductive logic programs with penalization: semantics, complexity and implementation, Theory and Practice of Logic Programming, v.5 n.1-2, p.123-159, January 2005
Thomas Eiter , Wolfgang Faber , Nicola Leone , Gerald Pfeifer, Declarative problem-solving using the DLV system, Logic-based artificial intelligence, Kluwer Academic Publishers, Norwell, MA, 2000
Cristinel Mateis, Quantitative Disjunctive Logic Programming: semantics and computation, AI Communications, v.13 n.4, p.225-248, December 2000
Victor W. Marek , Jeffrey B. Remmel, On the expressibility of stable logic programming, Theory and Practice of Logic Programming, v.3 n.4, p.551-567, July
Nicola Leone , Gerald Pfeifer , Wolfgang Faber , Thomas Eiter , Georg Gottlob , Simona Perri , Francesco Scarcello, The DLV system for knowledge representation and reasoning, ACM Transactions on Computational Logic (TOCL), v.7 n.3, p.499-562, July 2006
Gianluigi Greco , Sergio Greco , Ester Zumpano, A Logical Framework for Querying and Repairing Inconsistent Databases, IEEE Transactions on Knowledge and Data Engineering, v.15 n.6, p.1389-1408, November
Logic programming and knowledge representation-the A-prolog perspective, Artificial Intelligence, v.138 n.1-2, p.3-38, June 2002
Thomas Eiter , Georg Gottlob , Heikki Mannila, Disjunctive datalog, ACM Transactions on Database Systems (TODS), v.22 n.3, p.364-418, Sept. 1997
Christoph Koch , Nicola Leone , Gerald Pfeifer, Enhancing disjunctive logic programming systems by SAT checkers, Artificial Intelligence, v.151 n.1-2, p.177-212, December | expressive power;default logic;relational databases;query languages;nonmonotonic reasoning |
627861 | Establishing the Relevancy of the Bookkeeping Libraries to the Functional Testing of Computer Implementations. | AbstractIn this paper, we address issues related to the definition of "faults," "errors," and "failures" and their separability, and attribution to the different development processes of computing systems. In particular, we deal with historical databases, which presumably contain certain data (i.e., test failure data) and describe the methodology that can be used to analyze the database and obtain the pertinent information. The validation method may be of particular importance, especially when information from the database needs to be extrapolated for a purpose other than the one for which the database was developed. Our methodology was used to evaluate the historical data collected during the development of the IBM 4381 and 9370 family of computers, and to extrapolate the faults found during the function testing. | Introduction
Functional testing, defined here as the process of evaluating the functions of computer systems and software
products to assure that they meet pre-specified requirements, constitutes an integral and important process
in the development of computer systems and software products. Functional testing, as most of the processes
involved with the computing systems, has a number of diverse aspects. The major contribution to the diverse
aspects of functional testing is what is commonly referred to as a "bug", which may be discovered, corrected,
and attributed to different processes of the development of a system. For example, a "bug" observed during
the execution of a program can be attributed to software, hardware, logic design, technology, manufacturing,
etc., and depending on the process involved different aspects of the testing may be uncovered. Due to the
plurality of the processes involved with functional testing, in most cases, researchers consider issues related to
some processes and exclude others. For instance, a researcher may consider issues concerning the functional
testing of the processes involving logic design and microcode development, and exclude other processes
involving technology, power supply, packaging, cooling, software, manufacturing, etc. At no exception, our
studies in the past five years have been concentrated in the general topic of "error" prediction models, see
for example [1], and their relation to the "bugs" (also referred to commonly as "errors", "defects", "faults",
etc.) to be experienced by a development team during the design and implementation of a computer system.
In particular, we were concerned with two development teams, namely, the logic design and microcode
development teams.
One important issue, often neglected and not discussed in the literature, is the choice and validation of
the data used in the research of functional testing. In this paper, we discuss the procedure we used to choose
and validate the "error" data in our studies. As it will become obvious from the presentation to follow,
the choice, extrapolation, and validation of data (even though necessary) was more involved than originally
anticipated providing a partial justification why most researchers accept the data "as is". In any case, we
hope that our experience provides some general guidelines for procedures in the maintaining and using the
"error" data and precise definitions that allow at least a uniform treatment of "error" data in the future.
Before we proceeded in the investigation of the different aspects of functional testing in the development
of computing systems, we considered of extreme importance to provide the answer to the following questions:
What constitutes an "error"?
ffl Can an "error" be attributed to the different development processes?
ffl Can "errors" be separated to the different development processes?
ffl Which historical data sources contained information pertinent to the aspects of interest regarding
functional testing?
ffl How can the "errors" be attributed to the functional testing, after the source of historical data is
established?
ffl Is the "error" data extrapolated from a historical source accurate and representative of the aspects of
the process under consideration?
The answers to the previous questions for our application will be found in the sections to follow. The
organization of the discussion is as follows: First, we describe what we considered to be an error, fault and
failure, and discuss the separability of errors. Consequently, we discuss issues regarding the historical data
sources and the data accuracy. Finally, we describe a database, denoted as the library sub-system, that we
considered as the most appropriate for our research, the extraction of the "error" data using a fuzzy logic
question answering system [2], and finally we establish a confidence level for the accuracy of the error data.
2 Errors and Separability of Errors
In this paper, we adopt the predominantly accepted definition for failure, fault and error proposed by A.
Avizienis and J. Laprie [3] which indicates that:
"A system failure occurs when the delivered service deviates from the specified service, where the service specified
is an agreed description of the expected service. The failure occurred because the system was erroneous:
an error is that part of the system state which is liable to lead to failure. The cause-in its phenomenological
sense- of an error is a fault. An error is thus the manifestation of a fault in the system, and a failure is the
effect of an error on the service."
During the development of a computer system, when a deviation from the expected service (a failure) is
detected, the state of the system (the error) causing the failure is determined and an attempt is made to
correct the cause/causes (faults) leading to the unwanted system state. Given that there are a number of
processes associated with the development of a computer system, determining which process is responsible
for a fault constitutes a difficulty that need be overcome/ in order to resolve the failure (e.g. an architectural
deviation in a computer system implementation may be the result of a fault introduced in manufacturing,
logic design, circuit design, etc. and the resolution of failure requires establishing the "responsible" development
process). To be able to attribute faults to the different development processes, it is required to be able
to categorize the different kinds of faults, and to separate and attribute the faults to the different processes.
A. Avizienis and J. Laprie [3] categorized the logic circuitry faults, the subject of our general interest, into:
1. Physical faults due to external influences (e.g., power supply fluctuations, electromagnetic interference,
radiation) and common weaknesses in the manufacturing process.
2. Design faults introduced by human mistakes and faulty design tools, as well as by ambiguities and
errors in the initial specifications.
Clearly, the categorization of faults alone is not enough to serve the purpose of collecting data for the
development of a model. Separation and attribution of faults is important as clarified by the following
scenario: Assume that we are interested in the logic design process, thus we should be able to separate which
faults are associated with physical faults as they are not part of the design faults. Furthermore, we should
be able to distinguish logic design faults from faults associated with the tools for the same reasons. In other
words, we should be able to distinguish which faults have been introduced by the logic design team, and
which were not.
The separation of faults can be achieved by considering the following:
ffl The group of people participating in a development effort of a system is sub-divided into teams.
ffl Each team is responsible for a particular process.
ffl Teams communicate with pre-defined specifications.
ffl The underlined assumption among teams is that pre-defined specifications will be met.
ffl When a failure (resulting in the detection of an erroneous state, i.e. an error) has been discovered, the
team responsible for the process that presented the failure, will correct the fault/faults.
The essence of the previous statements is that an error/fault can be associated with a particular team
as it requires the intervention of the team for its resolution. The suggestion here is that the responsibilities
of a team is a key contributor in separating and attributing errors/faults, and that the intervention of
a particular team to resolve an error/fault can be used to separate and count the errors/faults. Clearly,
errors that require the intervention of multiple teams could (and in our opinion should) be considered the
manifestation of multiple faults.
2.1 The Dilemma of the Error Data
In order to develop an "error" prediction model for computer implementations, it is often required to conduct
research based on historical data (needed to derive and validate the model). Two sources of databases
containing historical data, denoted here as "error tracking" libraries and "bookkeeping" libraries, have been
used in the past by researchers for the development of "error" prediction models in software and microcode
processes, see for example [4, 5, 6]. The first source, denoted here as "error tracking" libraries, are usually
established during the integration of a system to report defects and track their resolution. The second source,
which we will refer to as the "bookkeeping" libraries are usually created in the beginning of the development
cycle to manage the revision of the code and to maintain information regarding the development processes.
The dilemma regarding which library to use is not so much which library is the most representative of the
development process, both are considered representative by a number of researchers, but rather which of the
two libraries contains the most accurate data. Such a dilemma is seldom discussed in the literature as either
researchers have available just one of the libraries, or it is assumed that there are no accuracy problems.
The "error" tracking libraries are usually established during the integration of a system to report defects
and track their resolution, and library entries are established as follows: When a deviation of a pre-specified
behavior has been established, an entry is created. Such an entry usually corresponds to the report of an
observed deviation (i.e. the entry represents the description of a failure), or the description (in a number of
cases the partial description) of the machine state that leads to a failure (i.e. the description of an error).
The bookkeeping libraries are usually created to maintain information regarding the development processes
and they are initiated in the beginning of the development cycle. Library entries are established, in addition
to reasons unrelated to functional testing, for the correction of faults in the form of changes (e.g. adding
newly developed code, updating comments after code is developed, updating files containing the code itself,
etc.). Consequently, faults can be counted by examining the number of changes.
Clearly, the two databases are different and they may face different types of accuracy and representation
problems. In discussing the accuracy of the data, we begin by considering a common accuracy problem:
An error is the part of the system state which is liable to lead to a failure, suggesting that experiencing
no failures does not imply the absence of errors, and thus the absence of faults. This introduces the first
inaccuracy in developing models based on historical data, as those data reflect only the discovered errors, an
approximation of the total number of errors and faults existing in a product. Second, the libraries available
today may not take into account the severity of the errors/faults, an important parameter in scheduling the
development, because they attribute the same weight to all library entries.
Regarding the "error tracking" libraries, multiple failures may correspond to a single fault but logged
multiple times in the libraries(the opposite also holds true). Also, the error keeping begins usually long
after the initial design and entry of components, implying inapplicability to the entire development process.
Additionally, in instances, it becomes a means for communication among groups rather than a mean of
future studies to understand the development process. As a formal process, there may be a reluctance of
different groups to report errors and rather rely on private communications. The implications here is that
the libraries partially reflect the development of a system and in instances they may contain misleading
data. Furthermore, in occasions, some individuals, primarily due to misunderstanding, may report non-existing
errors. An example of this is the case in which individuals misinterpret the output of test cases and
confuse tool errors for design errors. Additionally, reporting an error in a unit that appears to belong to the
responsibilities of a certain group may not be true and thus a concatenation of error entries may occur until
the faulty unit is established resulting in multiple entries of an error. This may not be always trackable,
depending on the set up of the library, resulting in multiple counting of the same error.
The previously discussed problems are not encountered in the bookkeeping libraries. The advantages and
problems with this type of libraries rely upon the following:
Bookkeeping libraries are developed and contain information that is necessary for the development
(e.g. provide the security and structure to the process of building the hardware design and microcode
by controlling the access to the data files, and maintaining the most current copies of data files).
ffl Entries always correspond to faults, as the correction of faults requires changes and changes are
reported as entries and counted as faults (this implies a one-change-one-fault correspondence, which is not necessarily
a true conclusion); also, as with most other libraries, it does not account for the severity of faults.
ffl The bookkeeping libraries begin early in the development process, thus they can be considered as
representative of the entire development cycle.
ffl It is more difficult, if not entirely impossible, to confuse design with other errors as the observed
changes are part of what is used as the design of the system.
ffl There is no multiple logging of failures or faults.
ffl Given that a failure may be of the consequence of multiple faults, it is more representative of the
"bugs" encountered during the development of a product.
From what has been discussed so far, we concluded that bookkeeping libraries offer a better approximation
of history regarding the testing of computer systems. There is however a major drawback: bookkeeping
libraries are not designed to keep track of corrections of faults in a system. Thus, it may be difficult to
extract the entries of libraries reporting changes in the design. The major challenge with this type of library,
and at no exception, the libraries at our disposition, is to be able to answer the following question: Can
the number of changes be extrapolated with an acceptable approximation? Before proceeding to answer
this question, we discuss the library sub-systems, as the existence of this question and its answer is entirely
dependent on the library set up.
3 Collection of the Microcode and Hardware Library Data
Three databases are considered in this study created during the microcode and hardware development of
the IBM 4381 and 9370 computer systems.
1. The microcode development bookkeeping library for the IBM 4381 computer systems
2. The hardware development bookkeeping library for the IBM 4381 computer systems
3. The microcode development bookkeeping library for the IBM 9370 computer systems
The microcode libraries of the IBM 4381 and 9370 computer systems, which are similar in structure, are
accessed by a set of commands that allow a user to add a data file to the library (PUT command), to retrieve
an existing file (GET command), and to perform other types of maintenance to the library, such as compiling
and releasing the code. A sequence of GET/PUT commands is usually used to retrieve, modify and store a
file back in the library. The libraries described here allow other commands as well and maintain additional
information not pertinent to our discussion. All transactions to the library are automatically recorded by
the system. A comment field of 40 characters accompanying a GET or PUT command was used to allow the
user to describe the reason for issuing the command. The comments are filled by the developers to document
the reasons for accessing a data file from the library and placing it back. This field is considered to be a key
factor in assessing the relevancy of a library access to functional changes, since there was no other means of
explicitly stating whether the entry pertains to functional changes. In addition to the above information, a
library record contains a STATE field indicating the state of readiness of a data file with respect to the four
test phases, namely, component test, unit test, sub-system test and system test.
During the analysis of the microcode bookkeeping libraries, each record was classified into one of three
categories, namely Irrelevant, Definite-Change and Possible-Change. The records in the Irrelevant category
include all "routine" entries, such as "GET", "Promotes", "History", etc. The records in the Definite-
Change category include the PUT entries which were promoted from the component test to the system test
phase in three days, or less. This assumption is based on the notion that the new code was submitted to the
library in an expedient manner as a means to fix or patch an outstanding problem that was found during the
system test phase. This category also includes the PUTs which are identified as patches (i.e. modifications
to object or load files). The records in the Possible-Change category include all PUT entries which are
not included in the Definite-Change and Irrelevant categories. This is based on the assumption that a PUT
possibly signifies a change in the design because it is applied to re-submit an existing data file back into
the library after it was retrieved by a GET. Table 1 shows the breakdown of the IBM 4381 microcode and
hardware, and the IBM 9370 microcode PUTs into possible and definite changes.

Library          Total Records   Total PUT records   Possible and Definite Changes   Definite Changes   Possible Changes
4381 Hardware    17,811          7,874               7,874                           228                7,646
4381 Microcode   136,016         29,955              27,473                          3,632              23,841

Total Records: The total number of records in a library
Total PUTs: All PUT records in a library (for the IBM 4381 hardware library,
this number includes only the Development PUTs)
Possible and Definite Changes: The sum of the Possible and the Definite changes
Definite Changes: The PUT records which were promoted from the component test to the system
test phase in three days or less, and the PUTs which are identified as patches
Possible Changes: All PUT records not included in the Irrelevant and Definite-Change categories

Table 1: Breakdown of the IBM 4381 and the IBM 9370 Libraries
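A sketch of this classification (ours; the record fields, their names, and the date handling are simplified assumptions rather than the libraries' actual schema) might look as follows:

def classify(record):
    # Classify a bookkeeping-library record as 'irrelevant', 'definite-change',
    # or 'possible-change', following the rules described above.  `record` is a
    # dict with hypothetical keys: 'command' (e.g. 'PUT', 'GET', 'PROMOTE'),
    # 'is_patch' (PUT that modifies object/load files), and
    # 'days_component_to_system' (days from component test to system test, or None).
    if record["command"] != "PUT":
        return "irrelevant"                      # GETs, promotes, history entries, etc.
    days = record["days_component_to_system"]
    if record["is_patch"] or (days is not None and days <= 3):
        return "definite-change"                 # expedited fix or patch
    return "possible-change"                     # handled by the comment analysis of Section 4

print(classify({"command": "PUT", "is_patch": False, "days_component_to_system": 2}))
# -> definite-change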
4 Analysis of the Bookkeeping Libraries by the Question Answering
System
Table
1 indicates that the majority of the database records belong to the Possible-Change category. Given
that the database is written by humans, in order to determine if the possible changes are indeed functional
changes or routine accesses, we developed a database question answering system based on fuzzy logic described
in [2]. The question answering system analyzes the comments written in a spoken language and
determines whether the comments are related to a particular subject of interest. The system is depicted in
fig. 1.
In fig. 1, the "Unique Word Generator" generates an alphabetically sorted list of all unique words
contained in the comments of the PUTs in the Possible change category of the IBM 4381 microcode and
hardware libraries. The "Word Processor" requires the manual processing of the words and the automatic
examination of the database.

[Figure 1: The Question Answering System. Components: database, unique word list generator, word processor, modified database, relevant word table generator, confidence value assignment, FEV processor, fuzzy evaluator, confidence vector, confidence vector analyzer]

During this manual process, the list of the unique words generated by the
"Unique Word Generator" is analyzed in order to extract and place potential "relevant" words to functional
testing in a table which is referred to as the replace table . We considered relevant words the words that
are likely to be used in describing a functional change, such as "error", "fix", "bug", "change", etc. The
replace table contains two columns, the first contains the words as they appear in the database, and the
second contains the synonyms corresponding to the words that appeared in the first column. All words in a
database that match the first column of the replace table are replaced by the words on the second column
and all other words are deleted. As a result of this analysis, new "comment records" are generated and a
second database is established. This database, which is referred to as the modified database, contains the
words that are only present in the replace table. Using the Unique Word Generator and operating on the
modified database, the list of all unique words in the modified database is generated, and it is referred to
as the relevant word table. Subsequently, each word of the relevant word table is assigned a confidence
value; a value between zero and one, which represents the degree of likelihood that a relevant word is used
to describe an action associated with functional testing. Assigning a confidence value to a word may be
achieved in many ways including surveys of participants in the creation of the database which was the case
in our investigation.
Several hardware and microcode developers were asked to attribute a confidence value to each word in
the word table which reflected his/her perception of the usage of that word, when used in a comment, in
describing a functional error, a bug, or a change. Thus, for any given word, a multiplicity of values were
collected corresponding to the opinion of people. To construct the membership grades for each word in the
relevant word table from the responses of the survey participants, an algorithm was needed to determine
the most expected value from the scaled values. This can be achieved by establishing the fuzzy expected
value for each word in the relevant word table (FEV Processor) [7]. We considered a number of algorithms
including the FEV [8] and the WFEV [9] and developed a new algorithm, denoted as the Clustering Fuzzy
Expected Value (CFEV), shown to have a superior performance for this type of application [10].
Using the clustering algorithm, the elements of a fuzzy set are grouped into separate clusters, and the
population sizes and their mean are determined for each individual cluster. Then, the mean of the entire
fuzzy set is evaluated and adjusted based on the population sizes and the mean of the formed clusters to
form the CFEV value of each word in the relevant word list. A detailed description of the CFEV can be
found in [10]. The CFEV algorithm computes the fuzzy expected value by:
In equation 1, W_A is the mean of all responses to a particular word, N is the number of responses, m is the
number of clusters produced from the data for each word, N_i is the number of responses in cluster i, and
W_Ai is the mean of cluster i.
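The exact CFEV expression is equation 1 itself (see [10]). Purely as an illustration of a population-weighted adjustment of the kind described above, and not as the published formula, a sketch could be:

def cfev_like(cluster_means, cluster_sizes):
    # Adjust the estimate toward the means of the more populated clusters by
    # weighting each cluster quadratically in its relative size.  This is an
    # illustrative stand-in, not the exact CFEV of [10].
    N = float(sum(cluster_sizes))
    weights = [(n / N) ** 2 for n in cluster_sizes]
    return sum(w * m for w, m in zip(weights, cluster_means)) / sum(weights)

# e.g. two clusters of survey responses for one word: a large cluster around 0.8
# and a small cluster around 0.3 give a value close to, but below, 0.8
print(cfev_like([0.8, 0.3], [7, 3]))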
Based on the CFEV value of each word in the relevant word lists, the confidence of the comments in the
libraries are computed based on [2] as follows: The "Fuzzy Evaluator" operates by analyzing each comment
within the "modified database" in conjunction with the list of words in the "relevant word table", and the
degree of confidence associated with each word.
1. For a comment with no relevant words, a confidence of zero is assigned to the entire record.
2. For a comment with one relevant word, the CFEV value of this word is assigned to the entire record.
3. For a comment with two relevant words, the confidence value of the entire record is based on a membership function of the form μ_A(w_i, w_j) = c · e^{−k(·)} (equation 2). In equation 2, μ_A(w_i, w_j) is the membership function attributing a degree of confidence with regard to A (functional testing) for a comment; w_i and w_j are the confidence values associated with the two individual words i and j appearing in the same comment; k is a constant greater than 0; l is a constant between 0.0 and 1.0 indicating that words r having confidence value w_r > l are considered to favorably describe the subject of interest, while words having w_r < l are considered to adversely describe the subject of interest, and words with w_r = l are considered to be "neutral". The parameter c is defined as c = (w_i + w_j)/2 (i.e., the average confidence of the two words i and j).
4. For a comment with more than two relevant words, the confidence value of the entire record may be computed by applying the operator Φ pairwise over the n relevant words, i.e., over the pairs (w_i, w_{i+1}) for i = 1, ..., n−1. The operator Φ, for any given i, applies equation 2 with possible inclusion. The inputs to equation 2 are the confidence values attributed to words i and i+1, and the output of Φ for all the i's between 1 and n−1 inclusive is a set of confidence values. Consequently, a confidence value is attributed to a comment by applying the following algorithm (a sketch of the whole procedure is given after the algorithm below).
• Step 1: If there exists at least one element in the set produced by Φ that exceeds a given threshold value ρ_0 and the average of all pairs is < l, then the confidence value of the comment is assumed to be the MAX confidence value present in the set.
• Step 2: If step 1 does not hold true, and if there exists at least one element in the set produced by Φ that is less than a given threshold value ρ_1 and the average of all pairs is < l, then the confidence value of the comment is assumed to be the MIN confidence value present in the set.
• Step 3: If neither step 1 nor step 2 holds true, then the confidence value of a comment is assumed to be equal to the average of the confidences.
The detailed explanation of the above algorithm can be found in [2].
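The following Python sketch summarizes the assignment of a confidence value to a comment according to the rules and steps above. The pairwise combining function pair_confidence stands in for equation 2, whose exact exponential form is not reproduced here, and the parameter values chosen for k, l, ρ0 and ρ1 are placeholders; all of these are assumptions for illustration only.

import math

K, L, RHO0, RHO1 = 0.5, 0.5, 0.66, 0.33   # assumed parameter values

def pair_confidence(wi, wj, k=K, l=L):
    """Stand-in for equation 2: combines two word confidences around their average c."""
    c = (wi + wj) / 2.0
    return c * math.exp(-k * abs(wi - l) * abs(wj - l))   # assumed functional form

def comment_confidence(word_confidences):
    n = len(word_confidences)
    if n == 0:
        return 0.0                                   # rule 1: no relevant words
    if n == 1:
        return word_confidences[0]                   # rule 2: CFEV value of the single word
    if n == 2:
        return pair_confidence(*word_confidences)    # rule 3: equation 2
    # rule 4: apply the pairwise operator to (w_i, w_{i+1}) for i = 1..n-1
    pairs = [pair_confidence(word_confidences[i], word_confidences[i + 1])
             for i in range(n - 1)]
    avg = sum(pairs) / len(pairs)
    if max(pairs) > RHO0 and avg < L:                # step 1
        return max(pairs)
    if min(pairs) < RHO1 and avg < L:                # step 2
        return min(pairs)
    return avg                                       # step 3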
5 Establishing the Accuracy of the Data
The Question Answering System [2] was invoked with the threshold values ρ_0 and ρ_1 set to 0.33, the k and l constants set to 0.5, and the d and s clustering parameters set to 0.20. The
comments relevant to functional testing were extracted from each bookkeeping library. The outcome of the
analysis of the bookkeeping libraries is shown in table 2.
Library                % Tool Changes    % Tool + Definite Changes
IBM 4381 Hardware      38%               40%
IBM 4381 Microcode     48%               55%
IBM 9370 Microcode     25%               31%

Tool Changes: the percentage of records assessed as related to functional changes based on the number of Possible changes.
Tool + Definite Changes: the percentage of Tool and Definite changes based on the number of Possible and Definite changes.

Table 2: Analysis of the IBM 4381 and the IBM 9370 Libraries
Number of Relevant    IBM 4381 Hardware Possible    IBM 4381 Microcode Possible    IBM 9370 Microcode Possible
Words in a Comment    PUTs (7,646 Records)          PUTs (23,841 Records)          PUTs (60,498 Records)
                      % of Relevant Words           % of Relevant Words            % of Relevant Words
3                     4.34%                         9.40%                          4.17%
6                     0%                            0.04%                          0.03%

Table 3: Frequencies of Relevant Words
There are three validations to be made regarding the accuracy of the number of changes, considered in
this presentation as faults, namely:
1. The accuracy of the tool
2. The accuracy of the tools in the library entries considered to report possible changes
3. The overall accuracy of the data
To answer these questions, it is of interest to operate on representative data and to measure accuracy by comparing the tool's responses with expert human assessments. While human assessments can be subjective, in the absence of better library systems and given the potential inability to create "objective" entries in future libraries (library entries will always reflect the opinion of the person determining the "meaning" of what constitutes an entry before the entry is established), they represent the best available approximation.
In attempting to answer the above questions without resorting to manual evaluation of the entire databases, an upper bound on the number of comments to be evaluated manually by a human expert in the field had to be established. With the manpower and time at our disposal, we decided to evaluate no more than 10,000 comments. In determining the experimental databases, we proceeded as follows: First, we extracted the profiles, in terms of number of words, of all the databases in the library, before and after the application of the tool. It was immediately observed that after the application of the tool the percentage of comments with zero relevant words was as low as 19.45% and as high as 39.99% (see table 3). This observation complicated the composition of the experimental databases. Given that the percentages indicate that as much as 40% of the entries have been considered routine accesses (i.e., they have zero relevant words) by the tool, it was imperative to verify that the tool was highly successful in excluding these comments, which imposed the consideration of more comments containing zero relevant words than initially anticipated.
The previous considerations were incorporated into the composition of the three experimental databases (DB1, DB2 and DB3) described in table 4. The three databases, DB1, DB2 and DB3, were extrapolated from the IBM 4381 hardware and microcode and the IBM 9370 microcode Possible Changes, respectively. We selected the
Database    Zero Relevant Words    One Relevant Word    Two Relevant Words    > Two Relevant Words    Total Possible Changes

Table 4: Number of Relevant Words in the Experimental Databases

Database    % of Zero Relevant Words    % of One Relevant Word    % of Two Relevant Words    % of > Two Relevant Words    % of Possible Changes
DB3         38.85%                      43.08%                    13.84%                     4.23%                        2.15%

Table 5: Percentages of Relevant Words in the Experimental Databases
entries randomly and made sure the entries appeared only once. The characteristics of the databases are
described in table 5. A number of things should be noted from the characteristics of the databases, namely:
• We considered almost half of the comments (45.12%) of the IBM 4381 hardware.
• We considered more than one fifth of the comments (22.02%) of the IBM 4381 microcode.
• We considered only 2.15% of the IBM 9370 microcode.
• Clearly, the experimental database extrapolated from the IBM 9370 microcode, as indicated earlier, can be argued to be non-representative.
• Even though the composition of the databases did not reflect the actual characteristics, we evaluated almost half of the comments in the IBM 4381 hardware and more than one fifth of those in the IBM 4381 microcode, suggesting that if the tool is close to the human evaluation, then the confidence associated with the success of the tool should be high.
• The composition of DB1 and DB2 (especially DB1), reported in table 6, suggests that we evaluated a more than satisfactory share of the comments with zero relevant words (we considered 71.72% for DB1 and 48.09% for DB2 of the comments with no relevant words in the entire IBM 4381 hardware and microcode databases). Furthermore, for at least DB1, we operated on substantial overall percentages of all the comments (table 6). For DB1, we considered for examination 33.82% of the overall comments left in the entire IBM 4381 hardware database after the examination containing one relevant word, 39.10% containing two relevant words, and 44.12% containing three or more relevant words.
Database    % of Zero Relevant Words    % of One Relevant Word    % of Two Relevant Words    % of > Two Relevant Words    % of Possible Changes
DB1         71.72%                      33.82%                    39.10%                     44.12%                       45.12%
DB3         2.09%                       2.32%                     1.97%                      1.81%                        2.15%

Table 6: Frequencies of Relevant Words Based on all the Possible Changes
                  DB1       DB2       DB3       DB1.0     DB2.0     DB3.0
Tool Accuracy     96.09%    95.70%    95.86%    97.79%    99.37%    99.41%
Tool Changes      31.51%    35.45%    23.30%    0%        0%        0%
Manual Changes    30.55%    35.51%    21.82%    2.21%     0.63%     0.59%

Table 7: Tool Versus Manual Evaluation
The previous discussion suggests that a high degree of tool accuracy on DB1 implies a high degree of confidence in the final outcome, at least for the IBM 4381 hardware database, and potentially, by extension, for the other libraries that show similar tool accuracy. In conducting the discussion of the experimental databases, it should be noted that we were interested in the overall accuracy, i.e., the closeness of the tool's evaluation to the human evaluation. In other words, we wanted to evaluate the percentage of records that correspond to functional changes versus the percentage of records that do not, and to evaluate the closeness of these percentages between the tool and the manual evaluation. To evaluate the tool accuracy, it is necessary to first evaluate the tool error, use this error to represent the disagreement between the manual evaluation and the tool, and from it derive the agreement between the two evaluations, which constitutes the tool accuracy. The tool accuracy can be found in the first row of table 7,
which was compiled to include the following as tool errors:
• The tool is in error when it includes comments that are not considered as functional changes by the manual evaluation.
• The tool is in error when it excludes comments that are considered by the manual evaluation as functional changes.
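Assuming each record carries one Boolean assessment from the tool and one from the manual evaluation, the accuracy reported below is simply the complement of the disagreement rate defined by the two error cases above; a minimal sketch:

def tool_accuracy(tool_labels, manual_labels):
    """tool_labels / manual_labels: lists of booleans, True = record is a functional change.
    Accuracy = 1 - (false inclusions + false exclusions) / total records."""
    disagreements = sum(t != m for t, m in zip(tool_labels, manual_labels))
    return 1.0 - disagreements / len(tool_labels)

print(tool_accuracy([True, False, True, True], [True, False, False, True]))  # 0.75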
The findings indicate that the agreement between the tool and the manual evaluation is as low as 95.70%,
and as high as 96.09% (for the three experimental databases DB1, DB2 and DB3). This indicates that the
tool is very accurate in its decisions to separate the database entries in relevant and irrelevant to functional
testing regions. Finally, it is of interest to identify if a database is pertinent to the functional testing by
establishing the percentage of pertinent comments to the functional testing. The second and third row of
table 7 report such percentages for the three experimental databases (DB1, DB2 and DB3) for functional
changes computed for the tool (second row) and the manual evaluation (third row). The fourth row reports
the absolute value of the deviation between the tool and the manual evaluation. The findings indicate that
the tool and the manual evaluation are very close to each other, indicating that the tool can be used to establish the relevance of a database to functional testing. Given that the excluded comments were a concern, in table 7 (columns DB1.0, DB2.0, and DB3.0) we also report the result of the evaluation for the comments containing zero relevant words. Given that the lowest percentage is 97.79%, it can be suggested that when the tool considers entries as routine accesses to the database, there is a high degree of certainty that they are indeed routine accesses. In conclusion, the experimentation strongly suggests that the tool operates more than satisfactorily. We note here that we performed additional experiments regarding the actual database composition and overall validation. The results are in accordance with what is reported here. The interested reader is referred to [11].
6 Conclusion
In this paper, we first identified a number of issues related to functional testing. In particular, we addressed the issues related to the definition of "faults", "errors" and "failures" and their separability across the various development processes, the dilemma of the research data, and the choice of the database that provides the most confidence in reflecting the entire development cycle. Consequently, we discussed
the assessment of the IBM 4381 microcode and hardware and the IBM 9370 history libraries, two databases
containing more than half a million records, and established their relevancy to the study of functional changes
by applying a fuzzy reasoning database question answering system [2]. As a result of this assessment, it was
concluded that the libraries are pertinent to functional testing based on the percentages of the relevant
records in the IBM 4381 microcode and hardware and the 9370 microcode bookkeeping libraries and the
error data used, for example, in [1] were extracted. While the confidence associated with the tool and the overall data can be considered high, we caution the reader to treat the final results of our analysis as an approximation of the error data. As a final note, we indicate that in the absence of better databases and a precise common methodology, approximations are what can be done today in the arena of error data. We hope that the investigation reported in this paper will help in the discussion of future improvements related to the maintenance of databases, and provide grounds for discussion toward a common methodology for maintaining commonly acceptable error data, so that extensive and complicated extrapolations and validations of approximately good error data are avoided.
--R
On the prediction of computer implementation faults
A fuzzy reasoning database question answering system.
Dependable computing: From concepts to design diversity.
Software errors and complexity: An empirical investigation.
An analysis of several software defect models.
Experiments with computer software complexity and reliability.
On the derivation of memberships for fuzzy sets in expert systems.
Fuzzy mathematical techniques with applications.
The use of weighted fuzzy expected value (wfev) in fuzzy expert systems.
A method for computing the most typical fuzzy expected value.
Establishing the relevancy of the bookkeeping libraries to the functional testing of computer implementations.
--TR | software reliability;error prediction models;faults;error data;data accuracy;functional testing;errors |
627878 | Boolean Similarity Measures for Resource Discovery. | AbstractAs the number of Internet servers increases rapidly, it becomes difficult to determine the relevant servers when searching for information. We develop a new method to rank Internet servers for Boolean queries. Our method reduces time and space complexity from exponential to polynomial in the number of Boolean terms. We contrast it with other known methods and describe its implementation. | Introduction
Searching for information in the Internet is a considerable
task. Thousands of servers provide different information
over the networks. Determining appropriate servers
for searching is a common problem. For novice users, they
have no idea where to send requests, and experienced users
may miss new servers having relevant information.
A user query can be described using natural language,
keywords, or a database query language. We assume each
query is transformed to a standard format, such as a
Boolean expression, by an associated query engine. Because
each user requests different information, it is inappropriate
to broadcast requests to all servers. That overwhelms
the underlying networks and overloads irrelevant
servers.
To solve this problem, we propose the client-directory-
server model [1]. Our goal is to give users a list of relevant
servers ranked according to their relevance to the query.
In this model, a "directory of services" records a description
of each information server, called a server description.
A user sends his query to the directory of services which
determines and ranks the servers relevant to the user's re-
quest. The user employs the rankings when selecting the
servers to query directly. Fig. 1 shows the details.
A server description can be automatically generated by
clustering algorithms [1], by information extraction tools
[2], or can be manually assigned by administrators [3], [4].
In either case, it can represent a summary of the underlying
database contents or function as a filter to collect
information satisfying certain conditions. In this research,
we focus on Boolean environments, where both user queries
and server descriptions are written in Boolean expressions.
We believe that Boolean expressions can precisely describe
a server's contents as well as a user's information need. Using
the above methods [1], [2], server descriptions can eas-
This work was supported in part by the Advanced Research
Projects Agency under contract number DABT63-93-C-0052, HBP
NIH grant 1-P20-MH/DA52194-01A1, National Science Foundation
Institutional Infrastructure grant number CDA-9216321, and NSF
NYI grant number NCR-9457518.
The authors are with the Computer Science Department, University
of Southern California, Los Angeles, California, 90089.
Fig. 1. Resource discovery process: (1) A user sends a query to
the directory of services. (2) The directory of services returns a
ranked list of relevant servers. (3) The user sends his query to
one or more of the relevant servers which (4) return matching
documents.
ily be formulated as Boolean expressions. For user queries,
existing tools or algorithms can help users generate or re-construct
complicated Boolean queries to express their information
needs [5], [6], [7].
Example 1: Consider the following Boolean expression, (keyword = network ∨ keyword = UNIX) ∧ (author = Smith), where keyword and author are predefined attribute names, and network, UNIX, and Smith are their corresponding values. In the discussion, we would represent this expression as (t 1 ∨ t 2 ) ∧ t 3 , where t 1 , t 2 , and t 3 are called descriptors, ∨ is the logical or operator, and ∧ is the logical and operator. 2
In this paper, we develop an efficient algorithm to rank
servers based on their similarities with respect to a query.
We describe two existing similarity measures for Boolean
expressions, introduce our new measure, and experimentally
contrast it with the well known Jaccard's coefficient.
We review related work on Internet resource discovery
in Section II. Section III describes existing and our new
Boolean similarity measures. We show experimental results
of both measures in Section IV and analyze their time and
space complexity in Section V. Section VI discusses the implementation
of our method and Section VII presents our
conclusions.
II. Related Work
Internet resource discovery services [8], such as Archie
[9], WAIS [3], CRS [10], GlOSS [11], and Indie [4], all provide
services similar to the client-directory-server model.
They determine relevant servers for users to submit queries.
In Archie [9], a centralized server collects file and directory
names from anonymous Internet FTP servers. Users
send queries containing the requested file name to the centralized
server, get back a list of matching hosts, and retrieve
the file manually. This system only searches documents
by their file names and does not support complicated
Boolean queries.
WAIS [3] has a special server called the directory of
servers, which contains the description of each WAIS server
and compares them with user queries to determine relevant
servers. The WAIS directory of servers is similar to the directory
of services in our model. It ranks servers based on
a word-weighting algorithm, but it is maintained manually.
In Content Routing System (CRS) [10], each server is
characterized by a content label, which is a Boolean combination
of attribute-value pairs and is manually constructed
by administrators or automatically derived from frequently
occurring terms in the database. A server is relevant to a
query if its content label satisfies the query. Users can refine
queries when browsing the content labels of selected
servers. In addition, this system automatically forwards
queries to relevant servers and merges their results. Cur-
rently, nesting of Boolean operations is not supported.
The GlOSS [11] system uses a probabilistic scheme to
find relevant servers for user queries. In GlOSS, each server
extracts a "histogram" of term occurrences in its database.
The histograms are used to estimate the query result size
(defined as the number of documents in the database times
the probability that a document contains all the query
terms) and to determine relevant servers. This method
is built upon the assumption that terms appear in different
documents of a database following independent and
uniform probability distributions. GlOSS only considers
Boolean and queries and does not rank servers.
Indie [4] is designed and implemented based on the client-
directory-server model. Each Indie resource is managed by
a server called an Indie broker, which maintains a generator
that describes the objects stored in its database. The
generator, a nested Boolean expression, is used as a filter to
collect data from information providers. The logically centralized
but replicated server, called directory of services, is
a specialized broker that contains only the generators of every
Indie broker in the system. Users send a Boolean query
to the directory of services, which compares the query with
each generator in its database, finds the similarity between
them, then sends a ranked list of relevant Indie brokers to
the user.
Depending on the type and format of the user query and
server description, each system employs a different similarity
measure to determine relevant servers. Most systems
only support a simple query type, such as keywords or
simple Boolean combinations. They do not solve nested
Boolean queries and rank servers accordingly. The method
presented in this paper is to measure the similarity between
Boolean expressions, which can be directly applied to In-
die. It can also be used for full text data or keyword queries
by combining them with all and or or Boolean operators.
III. Similarity Measure
Well-known similarity measures, such as Dice's coeffi-
cient, Jaccard's coefficient, Cosine coefficient, and Overlap
coefficient, have been used to compute the similarities
of one document to another document, and documents to
queries for automatic classification, clustering, and indexing
[12]. For these measures, documents and queries are
represented as sets of keywords or vectors.
In the "cluster-based retrieval" system, documents with
high similarities are grouped into a cluster. User queries
are first compared with cluster representatives, then compared
with documents in the clusters that have high similarities
with the queries [12]. In the client-directory-server
model, the function of directory of services is similar to
cluster-based retrieval, where servers are clusters described
by cluster representatives (i.e. server descriptions). For
user queries and cluster representatives both described as
Boolean expressions, the above similarity measures can not
be applied directly. The degree of similarity between user
queries and server descriptions is determined by how much
these Boolean expressions overlap. Consider the example
below.
Example 2: Suppose RA and RB are the server descriptions
of two retrieval systems stored in the directory
of services, and Q 1 and Q 2 are two user queries:
Both RA and RB overlap with Q 1 , but RA contains two
overlapped terms (t 1 and t 3 ) while RB contains only one
Thus, RA is more relevant to query Q 1 than RB ,
assuming all terms are weighted equally. However, for an
and-or-combined query Q 2 , it becomes more complicated
to determine which server description is more relevant. 2
We need a systematic method to measure the overlap between
user queries and server descriptions. Furthermore,
this method must perform efficiently even when the number
of server descriptions increase. Radecki employed several
measures to rank similarity between Boolean expressions
[13], [14]. In the following sections, we review Radecki's
measures and present our modified measure. We demonstrate
our improvements in space and time complexity and
compare the two measures on a synthetic benchmark.
A. Background
Radecki proposed two similarity measures, S and S ,
based on Jaccard's coefficient. He defined the similarity
value S between queries Q 1 and Q 2 as the ratio of the
number of common documents to the total number of documents
returned in response to both queries. This ratio,
commonly known as Jaccard's coefficient, can be described
as
LI AND DANZIG: BOOLEAN SIMILARITY MEASURES FOR RESOURCE DISCOVERY 3
" denotes set intersection, [ denotes set union, and
are the response sets to Q 1 and Q 2 , re-
spectively. To apply S in our environment, we denote -(R)
and /R (Q) as the sets of documents in the cluster represented
by R and in R's response to query Q. The similarity
value S between Q and R is then defined as the ratio of
the number of common documents to the total number of
documents in /R (Q) and -(R),
Because all the documents satisfying query Q belong to
cluster R (i.e. /R (Q) ' -(R)), (1) can be simplified as
Example 3: Using the definitions from Example 2, we assume system A (represented by RA ) contains the document set μ(RA ) and system B (represented by RB ) contains the document set μ(RB ). Assume that for query Q 1 the system responses are ψ_RA(Q 1 ) and ψ_RB(Q 1 ), the sets of documents returned for Q 1 by systems A and B respectively. The similarity measures of Q 1 against RA and RB then follow from (2); for example, a system whose cluster contains three documents, two of which satisfy Q 1 , obtains the value 0.667. 2
In the case of a directory of services, however, the similarity
measure is used to estimate the importance of entire
information systems and decide the order in which users
should search them. If the similarity is calculated based
on the query results from every information system, the
searching order is no longer needed because you have already
searched them all.
Radecki proposed a similarity measure S that is independent
of the responses to the queries [14]. In S , Boolean
expression Q is transformed into its reduced disjunctive
normal form (RDNF), denoted as ~
Q, which is the disjunction
of a list of reduced atomic descriptors. If set T is
the union of all the descriptors that appear in the to-be-
compared Boolean expression pair, then a reduced atomic
descriptor is defined as a conjunction of all the elements in
T in either their original or negated forms. Let Q and R
be two Boolean expressions and TQ and TR be the sets of
descriptors that appear in Q and R respectively. Suppose k = |T_Q ∪ T_R| is the set size of T_Q ∪ T_R. Then the RDNFs of Q and R are

(Q̃)_{T_Q ∪ T_R} = ∨_{i=1}^{m} (q̃_{i,1} ∧ q̃_{i,2} ∧ ... ∧ q̃_{i,k}),
(R̃)_{T_Q ∪ T_R} = ∨_{i=1}^{n} (r̃_{i,1} ∧ r̃_{i,2} ∧ ... ∧ r̃_{i,k}),

where m and n are the number of reduced atomic descriptors in (Q̃)_{T_Q ∪ T_R} and (R̃)_{T_Q ∪ T_R}. Each reduced atomic descriptor q̃_i = q̃_{i,1} ∧ ... ∧ q̃_{i,k} and r̃_i = r̃_{i,1} ∧ ... ∧ r̃_{i,k} in the two RDNFs consists of the same number of descriptors (k), which is the set size of T_Q ∪ T_R. Each q̃_{i,j} and r̃_{i,j} in the RDNFs represents the corresponding descriptor t_j or its negation ¬t_j (¬ is the logical not operator). For example, q̃_{2,1} denotes the first descriptor in the second reduced atomic descriptor of Q̃; q̃_{2,1} is either t_1 or ¬t_1 depending on how Q is transformed. The following example illustrates the transformation from Boolean expressions to RDNFs.
Example 4: From Example 2, let T_X denote the set of all the descriptors in Boolean expression X (X = Q 2 , RA , or RB ). To transform Q 2 to its RDNF, we can apply the distributive law and expand the two conjunctions into
their associated reduced atomic descriptors. The expansion
process is based on the identity t_a = (t_a ∧ t_b) ∨ (t_a ∧ ¬t_b), where t_a and t_b are descriptors. Consider Q 2 and RA first.
Because T_{Q_2} ∪ T_{R_A} contains every descriptor of both expressions, each reduced atomic descriptor in (Q̃_2)_{T_{Q_2} ∪ T_{R_A}} and (R̃_A)_{T_{Q_2} ∪ T_{R_A}} must contain all of these descriptors in their original or negated forms. Thus, the conjunctions in Q 2 are expanded accordingly, and the RDNFs of Q 2 and RA are obtained. Similarly, because T_{Q_2} ∪ T_{R_B} also contains the descriptor t 4 , the RDNFs of Q 2 and RB are
obtained in the same way, with each reduced atomic descriptor now also containing t 4 or ¬t 4 .
Radecki defines the similarity value S^* between two Boolean expressions (Q and R) as the ratio of the number of common reduced atomic descriptors in Q̃ and R̃ to the total number of reduced atomic descriptors in them,

S^*(Q, R) = |Q̃ ∩ R̃| / |Q̃ ∪ R̃|,

where Q̃ and R̃ are regarded as sets of reduced atomic descriptors.
Example 5: Continuing with Example 4, we compute S^*(Q 2 , RA ) and S^*(Q 2 , RB ) from the RDNFs above; the value for RA is the larger of the two. Therefore, RA is more relevant to query Q 2 than RB . 2
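A small Python sketch of Radecki's computation, under a representation assumed for this sketch only: a disjunctive normal form is a set of conjunctions, each a frozenset of descriptor names, with negation marked by a leading "~".

from itertools import product

def expand(dnf, all_terms):
    """Expand each conjunction so that every reduced atomic descriptor mentions
    every descriptor in all_terms, either positively (t) or negated ('~' + t)."""
    rdnf = set()
    for conj in dnf:
        present = {t.lstrip('~') for t in conj}
        missing = [t for t in all_terms if t not in present]
        for signs in product((False, True), repeat=len(missing)):
            literals = set(conj) | {('~' + t) if neg else t
                                    for t, neg in zip(missing, signs)}
            rdnf.add(frozenset(literals))
    return rdnf

def s_star(q_dnf, r_dnf):
    """Radecki's S*: common reduced atomic descriptors over the total number of them."""
    terms = {t.lstrip('~') for conj in (q_dnf | r_dnf) for t in conj}
    q_rdnf, r_rdnf = expand(q_dnf, terms), expand(r_dnf, terms)
    return len(q_rdnf & r_rdnf) / len(q_rdnf | r_rdnf)

# e.g. Q = (t1 and t2) or t3 and R = t1 and t3, written as sets of conjunctions:
Q = {frozenset({'t1', 't2'}), frozenset({'t3'})}
R = {frozenset({'t1', 't3'})}
print(s_star(Q, R))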
From Example 4, we can see that Q 2 is transformed
to different RDNFs, ( ~
and ( ~
, when
computing with RA and RB . This means whenever a new
user query is compared against N server descriptions, it
needs 2N RDNF transformations to calculate the similarity
between them. This method suffers when the number
of server descriptions is large and users query frequently.
The system will spend significant amounts of time recomputing
RDNFs, and consequently will perform badly. To
solve this problem, we modify Radecki's method so that
it need not recompute RDNFs of server descriptions while
still providing results of equivalent or better quality.
B. New Similarity Measure
We propose a new measure based on Radecki's similarity
measure S , that is independent of the underlying information
systems and requires less computation. We transform
a Boolean expression to its compact disjunctive normal
form (CDNF) using the distributive law described in
the previous section. The CDNF is a disjunction of compact
atomic descriptors, each being a conjunction of subsets
of descriptors in the original Boolean expression. The descriptors
in each compact atomic descriptor are determined
while performing the distributive law.
Let Q and R be two Boolean expressions, and TQ and
TR be their sets of descriptors. We denote Q̄ and R̄ as the CDNFs of Q and R, and express them as

Q̄ = ∨_{i=1}^{m} (q̄_{i,1} ∧ q̄_{i,2} ∧ ... ∧ q̄_{i,x_i}),
R̄ = ∨_{j=1}^{n} (r̄_{j,1} ∧ r̄_{j,2} ∧ ... ∧ r̄_{j,y_j}),

where each conjunction (q̄_{i,1} ∧ ... ∧ q̄_{i,x_i} and r̄_{j,1} ∧ ... ∧ r̄_{j,y_j}) is a compact atomic descriptor, and m and n are their number in Q̄ and R̄. The x_i is the number of descriptors in the i-th (1 ≤ i ≤ m) compact atomic descriptor of Q̄, and y_j is the number of descriptors in the j-th (1 ≤ j ≤ n) compact atomic descriptor of R̄. Each q̄_{i,u} and r̄_{j,v} in the CDNFs represents a descriptor in T_Q and T_R respectively.
Example 6: The CDNFs of Q 2 , RA , and RB in Example 2 are obtained by applying the distributive law to each expression. Each compact atomic descriptor in Q̄_2 consists of only
the descriptors in TQ2 without introducing new descriptors
from TRA and TRB . In other words, the descriptors in
are independent of those in other Boolean expressions,
such as RA and RB . 2
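The compact disjunctive normalization itself is a recursive application of the distributive law. A sketch, assuming the parsed Boolean expression is given as nested tuples of the form ('and', ...) / ('or', ...) with descriptor names at the leaves (a representation chosen for this sketch, not the paper's):

def cdnf(node):
    """Return the CDNF as a set of compact atomic descriptors,
    each a frozenset of descriptors of the original expression."""
    if isinstance(node, str):                       # a descriptor (leaf)
        return {frozenset([node])}
    op, *children = node
    parts = [cdnf(c) for c in children]
    if op == 'or':                                  # union of the children's disjuncts
        return set().union(*parts)
    if op == 'and':                                 # distribute: cross-product of disjuncts
        result = {frozenset()}
        for part in parts:
            result = {a | b for a in result for b in part}
        return result
    raise ValueError(op)

# (t1 or t2) and t3  ->  {(t1 and t3), (t2 and t3)}
print(cdnf(('and', ('or', 't1', 't2'), 't3')))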
We denote our similarity measure S \Phi and define the similarity
of two Boolean expressions as the summation of the
individual similarity measures (s \Phi ) between each compact
atomic descriptor of the two CDNFs. The individual similarity measure s^Φ(Q̄_i, R̄_j) (equation (3)) is defined in terms of the descriptors that the two compact atomic descriptors do not share: Q̄_i indicates the i-th compact atomic descriptor of CDNF Q̄, and R̄_j indicates the j-th compact atomic descriptor of CDNF R̄; T^i_Q and T^j_R are the sets of descriptors in Q̄_i and R̄_j; |T^j_R − T^i_Q| is the number of descriptors that appear in T^j_R but not in T^i_Q, and |T^i_Q − T^j_R| is the number of descriptors that appear in T^i_Q but not in T^j_R. The similarity measure S^Φ is the sum of the individual s^Φ, given by

S^Φ(Q̄, R̄) = Σ_{i=1}^{|Q̄|} Σ_{j=1}^{|R̄|} s^Φ(Q̄_i, R̄_j),    (4)

where |Q̄| and |R̄| are the number of compact atomic descriptors in Q̄ and R̄ respectively.
Example 7: Using the above definitions, we compute S^Φ for Q 2 against RA and RB of Example 6. The sets T^i_{Q_2}, T^j_{R_A}, and T^j_{R_B} are the sets of descriptors in the corresponding compact atomic descriptors of Q̄_2, R̄_A, and R̄_B respectively. Summing the individual similarity measures s^Φ over all pairs of compact atomic descriptors yields S^Φ(Q̄_2, R̄_A); similarly, for Q 2 and RB we obtain S^Φ(Q̄_2, R̄_B), which is smaller. Therefore, RA is more relevant to query Q 2 than RB . 2
Notice that the similarity values calculated using S^Φ (in Example 7) are different from those calculated using S^* (in Example 5). It is meaningless to compare these values directly because both are measured on a relative scale.
directly because both are measured on a relative scale.
However, they can be used to rank a list of Boolean expressions
measured by the same method.
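To make the contrast between the two measures concrete, the sketch below computes an S^Φ-style score directly on CDNFs represented as sets of frozensets. Only the overall structure (summing the pairwise values s^Φ over all pairs of compact atomic descriptors) follows the definition above; the particular pairwise function used here, 2 raised to minus the number of uncommon descriptors, is an assumption standing in for equation (3).

def s_phi_pair(ti_q, tj_r):
    """Assumed stand-in for s^Phi: penalize descriptors not shared by the two
    compact atomic descriptors, i.e. |T_R^j - T_Q^i| + |T_Q^i - T_R^j|."""
    uncommon = len(tj_r - ti_q) + len(ti_q - tj_r)
    return 2.0 ** (-uncommon)

def s_phi(q_cdnf, r_cdnf):
    """Sum of the pairwise similarities over all pairs of compact atomic descriptors."""
    return sum(s_phi_pair(qi, rj) for qi in q_cdnf for rj in r_cdnf)

# Illustrative CDNFs (sets of frozensets of descriptors):
Q2 = {frozenset({'t1', 't3'}), frozenset({'t2', 't3'})}
RA = {frozenset({'t1', 't3'}), frozenset({'t4'})}
print(s_phi(Q2, RA))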
IV. Experiments
To compare the rankings estimated by similarity measures
S and S \Phi , we conduct experiments on two
databases. One is the standard CISI dataset. The other is
the Homer database at the University of Southern Califor-
nia. We use the result of S as the criterion, and compare
it to that of S and S \Phi . Each experiment consists of the
following steps.
1. Create individual server databases by using queries as
filters.
2. Calculate S based on the number of hit documents on
each server.
3. Calculate S and S \Phi for each filter-query pair.
4. Rank servers based on S, S^*, and S^Φ.
5. Compare their rankings using the Spearman rank-order correlation coefficient (r_s) [15].
6. Determine whether S^Φ is superior to S^* using the confidence interval for the proportion [16].
We describe the details as follows.
During the experiment, all queries play two roles. First,
each query is used as the filter of a server to collect specific
documents from the testing database. Thus, we can create
N servers by running N queries on the database, where
each server description is represented by the associated fil-
ter. Second, each query is submitted to all the N servers.
The number of hit documents is used to calculate S using
(2). Based on the S values from the N servers, we can rank
them for each query and use that as the standard ranking
to evaluate S and S \Phi .
To calculate the rankings estimated by S and S \Phi , we
apply (3) and (4) to each filter-query pair (i.e. query pair)
and sort them in descending order. To compare which
method generates a ranking closer to the standard, we compute
the degree of association between (S , S) and between
(S \Phi , S) by applying the Spearman rank-order correlation
coefficient (r_s) [15]. The r_s ranges between −1 and 1. If two rankings are identical, r_s = 1. If one ranking is the reverse of the other, r_s = −1. The larger the r_s, the closer the rankings.
Let Q 1 , Q 2 , . . . , Q N denote the N queries as well as the filters. For each query Q i , we rank Q 1 , . . . , Q N according to the similarity values S, S^*, and S^Φ separately. For tied values, each Q j is assigned the average of the ranks that would have been assigned had no ties happened. Let a = (a_1, . . . , a_n) and b = (b_1, . . . , b_n) be two rankings for Q i generated by various similarity measures, where n is the number of elements in the ranking (N in our case). The tied ranks in each ranking form a group. Assume there are g_u different groups in ranking a, each group having u_k (1 ≤ k ≤ g_u) tied elements. Similarly, ranking b has g_v groups, each having v_k (1 ≤ k ≤ g_v) tied elements. The r_s coefficient, corrected for ties, can be obtained by [15]:

r_s = (X + Y − Σ_{i=1}^{n} d_i^2) / (2 √(X · Y)),

where d_i = a_i − b_i, X = (n^3 − n)/12 − Σ_{k=1}^{g_u} (u_k^3 − u_k)/12, and Y = (n^3 − n)/12 − Σ_{k=1}^{g_v} (v_k^3 − v_k)/12.
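In practice, the tie-corrected coefficient can be obtained from a standard statistics library; SciPy's spearmanr, for example, assigns average ranks to tied values, matching the treatment described above. A sketch of step 5 for a single query (the similarity values shown are illustrative only):

from scipy.stats import spearmanr

# Similarity values of the N servers for one query, under two measures.
s_values     = [0.8, 0.5, 0.5, 0.1, 0.0]   # S, computed from the hit counts
s_phi_values = [0.9, 0.4, 0.4, 0.2, 0.0]   # estimated by S^Phi

r_s, _ = spearmanr(s_values, s_phi_values)  # tied values receive average ranks
print(r_s)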
For each query, we can determine which method performs
better by their r s values with respect to S. Among
the N observations, we measure the confidence that S^Φ is superior to S^* by calculating the confidence interval for the proportion, defined as follows [16]:

Confidence interval = p̂ ∓ z_{1−δ/2} √( p̂ (1 − p̂) / n ),

where p̂ = n_1 / n, z_{1−δ/2} is the (1 − δ/2)-quantile of a unit normal variate (δ = 0.05 for a 95% confidence level), n is the total number of samples (N in our case), and n_1 is the number of times S^Φ is superior to S^* (i.e., r_s(S^Φ, S) > r_s(S^*, S)).
By definition [16], if the normal approximation applies and the confidence interval does not include 0.5, we can say with 95% confidence that S^Φ is superior to S^*.
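Step 6 then reduces to the normal-approximation interval above; a minimal sketch (the win count and sample size shown are illustrative):

import math

def proportion_ci(n_wins, n, z=1.96):
    """95% confidence interval for the proportion n_wins / n (normal approximation)."""
    p = n_wins / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

low, high = proportion_ci(24, 35)   # e.g., S^Phi wins on 24 of 35 queries
print(low, high, 'superior' if low > 0.5 else 'inconclusive')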
A. CISI Experiment
The CISI dataset consists of 1460 information science
documents and 35 Boolean queries. All documents are indexed
with terms occurring in the title and abstract but
not on a stop list of 429 common words. All indexed terms
are stored in their original forms without stemming. A
Boolean query is a nested structure of terms with logical
and, or, and not operators in between. Documents are hit
by a query if they satisfy all the conditions in the query.
Following the six steps described previously, we calculate
r s (S \Phi ; S) and r s (S ; S) for the 35 queries. Fig. 2 shows the
value of r_s(S^Φ, S) minus r_s(S^*, S) for each query. Among them, r_s(S^Φ, S) is greater than r_s(S^*, S) for 24 queries (the ×'s above zero) and less than r_s(S^*, S) for 11 queries (the ×'s below zero). This indicates S^Φ generates a ranking closer to that of S for 24 out of 35 queries, whereas S^* only has the closer order for 11 out of 35. The mean Spearman coefficient of r_s(S^Φ, S) is 0.331, which is higher than that of r_s(S^*, S). This shows S^Φ has a better average estimation than S^* on the CISI database.
Fig. 2. The difference between r_s(S^Φ, S) and r_s(S^*, S) for the 35 Boolean queries on the CISI database. The ×'s above zero indicate S^Φ generates a ranking closer to that of S than S^* does for the associated query.
From the results of r_s(S^Φ, S) and r_s(S^*, S), the sample proportion of "r_s(S^Φ, S) > r_s(S^*, S)" is 24/35 ≈ 0.69. Because n = 35, we can calculate the "confidence interval for the proportion" for this sample proportion; the resulting 95% confidence interval does not include 0.5. Therefore, we can say with 95% confidence that S^Φ is superior to S^* in the CISI experiment.
B. USC Homer Experiment
In this experiment, we manually create 32 query samples, each averaging 3.6 descriptors picked from 24 terms in diverse fields. We submit these queries to the USC
Homer database and compute the results. Fig. 3 shows the
values of r s (S \Phi ; S) minus r s (S ; S) for the queries.
Fig. 3. The difference between r_s(S^Φ, S) and r_s(S^*, S) for the 32 Boolean queries on the USC Homer database. The ×'s above zero indicate S^Φ generates a ranking closer to that of S than S^* does for the associated query.
In Fig. 3, r_s(S^Φ, S) is greater than r_s(S^*, S) for 22 queries (the ×'s above zero) and less than r_s(S^*, S) for 10 queries (the ×'s below zero). This indicates S^Φ generates a ranking closer to that of S for 22 out of 32 queries, whereas S^* only has the closer order for 10 out of 32. The mean Spearman coefficients of r_s(S^Φ, S) and r_s(S^*, S) are 0.595 and 0.494 respectively. This shows S^Φ has a better average estimation than S^* on the USC Homer database.
From the results of r_s(S^Φ, S) and r_s(S^*, S), the sample proportion of "r_s(S^Φ, S) > r_s(S^*, S)" is 22/32 ≈ 0.69. Because n = 32, we can calculate the "confidence interval for the proportion" for this sample proportion; the resulting 95% confidence interval does not include 0.5. Therefore, we can say with 95% confidence that S^Φ is superior to S^* in this experiment.
C. Discussion
The queries associated with each dataset are designed to
hit a number of documents in the collection. Therefore, the
LI AND DANZIG: BOOLEAN SIMILARITY MEASURES FOR RESOURCE DISCOVERY 7
servers generated by using queries as the filters contain different
portions of the collection. The CISI database is a collection
of documents in library science and related areas. It
is an experimental database commonly used by researchers
working on information retrieval. The USC Homer is an
on-line library catalog system that covers a broad range
of collections, such as business, law, literature, medicine,
science, and engineering. So, for example, each server in
the first experiment is a subset of documents focusing on
a specific topic in information science, while the servers
in the second experiment contain documents in widely different
fields. Table I gives the additional characteristics
of the two experiments. We obtained similar results from
the two databases even though they have different size and
cover different fields of documents. In the two experiments,
both the average Spearman coefficient and the confidence
interval for proportion show that S \Phi is superior to S .
TABLE I
Characteristics of the CISI and USC Homer experiments.
                                       CISI     USC Homer
Number of documents                    1460     ≈ 800,000
Number of queries                      35       32
Number of servers                      35       32
Mean number of terms per query         7.14     3.6
Mean number of documents per server    91.7     5492
V. Analysis and Comparison
Space and time are two of the important factors in designing
a real-time system. In an on-line information retrieval
system, the system response time is highly dependent
on the underlying data structures and associated indexing
and searching techniques. In this section we analyze
the space and time complexities of the two searching techniques
- similarity measures S and S \Phi .
As mentioned earlier, to calculate S^Φ, we need to apply the distributive law, such as t_1 ∧ (t_2 ∨ t_3) = (t_1 ∧ t_2) ∨ (t_1 ∧ t_3), to obtain CDNFs, where t_1, t_2, and t_3 are descriptors. To
calculate Radecki's S , we need to transform Boolean expressions
to RDNFs. Two steps are required in the transformation: 1) "distribution", where the distributive law is used to produce the corresponding disjunctive normal form; and 2) "expansion", where we use the identity t_a = (t_a ∧ t_b) ∨ (t_a ∧ ¬t_b) so that each reduced atomic descriptor contains all the
Boolean expressions. The order of these two steps affects
the complexity, but not the result, of transforming
a Boolean expression to a RDNF. If the distribution is performed
before the expansion, it is equivalent to transforming
the Boolean expression to its CDNF and then expanding
the CDNF to a RDNF. If the expansion is performed
before the distribution, it needs more space and computation
because extra negated descriptors, such as :t 2 , will
be generated in the expansion step. The following example
will clarify this idea.
Case 1: We transform Boolean expression Q to CDNF
Q, then expand it to RDNF ~
Q.
Case 2: We expand Q first, then distribute it.
In Case 1, the expansion is performed after the distri-
bution. Therefore each compact atomic descriptor is expanded
(from (5) to (6)) instead of each descriptor (from
(7) to (8)), as in Case 2. A compact atomic descriptor usually
contains more than one descriptor after applying the
distributive law to its original Boolean expression. In the
above example, each of the two compact atomic descriptors
in -
contains two descriptors.
Eight additional descriptors are added from (5) to (6) after
the expansion. On the other hand, each individual descriptor
in the original Boolean expression is expanded in
Case 2. Thirty-three additional descriptors are added from
(7) to (8). The second approach needs more space than the
first one for storing those intermediate descriptors, which
consequently cause it to spend more time checking the duplicates
before obtaining the final ~
Q.
In our example, the original Boolean expression contains
only 3 descriptors. It is the simplest transformation case.
For more complicated Boolean expressions, the difference
between Case 1 and Case 2 is bigger. Therefore we use the
first approach (i.e. Boolean expression
in our complexity analysis. Based on this, the time complexities
of S \Phi and S are equal to the transformation time
from Boolean expression to CDNF or RDNF plus the time
to compute the similarity measures. For a single Boolean
expression,
Time S \Phi (computation);
where
Time S \Phi (transformation)
Similarly, the space complexities of S \Phi and S are determined
by the storage requirements for the CDNF and
RDNF respectively. For a single Boolean expression,
Space
Space S
In the following sections, we discuss the complexity of each
individual step.
A. From Boolean Expression To CDNF
To simplify the analysis, we use binary trees [17] to represent
the Boolean expressions. Each external node or "leaf"
represents a descriptor. All the internal nodes, including
the root, are logical operators. The negation not can be
stored with the associated descriptor, therefore we do not
denote it separately. The height of a tree is the longest
path from any leaf to the root.
The binary trees are transformed to their equivalent
CDNF binary trees using the distributive law. The technique
is to transform an and-rooted subtree to an equivalent
or-rooted subtree one at a time in a top-down ap-
proach. An example is shown in Fig. 4, where A, B, and C
are the subtrees of associated nodes.
Fig. 4. Compact disjunctive normalization. We use the distributive
law C), on the subtrees A, B, and
C.
We first change the current root node from and to or, and
change its or-rooted child node to be and-rooted. Then we
demote the other child (C) by one level, and add one and
node at its original position to be its new parent. Finally,
we replicate the demoted child (C) and exchange it with
one of the children (B) on the other subtree. The same
procedure is repeated until reaching the leaves.
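A sketch of this rewriting step on binary parse trees, assuming the tree is represented as nested tuples (operator, left, right) with descriptor names at the leaves (a representation chosen for this sketch); an or-rooted child is lifted and the sibling subtree is replicated, as in Fig. 4:

def distribute(node):
    """Recursively apply the distributive law to a binary ('and'/'or', left, right) tree."""
    if isinstance(node, str):
        return node                                  # a descriptor (leaf)
    op, left, right = node
    left, right = distribute(left), distribute(right)
    if op == 'and':
        # A and (B or C)  ->  (A and B) or (A and C); subtree A is replicated.
        if isinstance(right, tuple) and right[0] == 'or':
            return ('or', distribute(('and', left, right[1])),
                          distribute(('and', left, right[2])))
        if isinstance(left, tuple) and left[0] == 'or':
            return ('or', distribute(('and', left[1], right)),
                          distribute(('and', left[2], right)))
    return (op, left, right)

# (t1 or t2) and t3  ->  (t1 and t3) or (t2 and t3)
print(distribute(('and', ('or', 't1', 't2'), 't3')))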
The space complexity of transforming the Boolean expression
to a CDNF varies from O(n) to O(n 2 ) depending
on how the Boolean expression is constructed, where n is
the sum of the total number of descriptors and logical operators
in the Boolean expression. For example, a linear
binary tree (Fig. 5(a)) generates an O(n) CDNF, while a
complete binary tree (Fig. 5(b)) generates an O(n 2 ) CDNF.
Notice that n is equal to the total number of nodes if the
Boolean expression is represented as a binary tree. Intu-
itively, the O(n^2) CDNF size can be derived by starting with a tree of height h. It will have n = 2^{h+1} − 1 nodes for a complete binary tree. The distributive law no more than doubles the height. Thus, the new tree is bounded by size 2^{2h+1} − 1 = O(n^2).
The time complexity is primarily determined by the
number of times the distributive law is invoked and the
size of the subtree to be duplicated. Basically it is the
same order as the space complexity - O(n) for a linear
binary tree and O(n 2 ) for a complete binary tree.
Fig. 5. Various binary trees: (a) a linear binary tree, and (b) a
complete binary tree, where and and or are logical operators,
are descriptors.
The linear and complete binary trees are used as the
lower and upper bounds for complexity analysis. Below,
we discuss only the worst case - a complete binary tree
with and node at the root and or nodes elsewhere. In this
case, the distributive law is applied to all the subtrees at
each level. The case of linear binary tree is described in
[18].
Fig. 6 shows an n-node complete binary tree where each
internal node contains two children (i.e. a complete binary
tree). Every time the topmost and node is distributed, an
additional and node and a copy of one of its subtrees will
be created. Fig. 6(a) is transformed to 6(b) by creating
an and node and a duplicate C. Similarly, A and B are
duplicated from Fig. 6(b) to 6(c). The time complexity
T (n) consists of the time for adding additional and nodes
and for duplicating subtrees. Since there is no need for distribution for n ≤ 3, T(n) is constant in these cases. Otherwise, T(n) is given by the cost of the two recursive distributions plus the cost of duplicating the demoted subtree. Let h be the height of the original binary tree; then n = 2^{h+1} − 1. We can derive T(n) = O(n^2).
Fig. 6. The CDNF transformation of an n-node complete binary tree. Originally, only the root is the and operator, the other internal nodes
are all or operators. A; B; C 1
, and C 2
are subtrees. The number in the brackets means the number of nodes in this subtree. (a) The
original binary tree. (b) The binary tree after distribution on the first level node (i.e. the root), where the subtree C is duplicated. (c)
The binary tree after distributions on the second level nodes, where the subtrees A and B are duplicated.
Similarly, the space complexity M(n) consists of the
space to store the root and its two to-be-distributed sub-
trees. For n ≤ 3, the space is not changed because there is no need for distribution, so M(n) = n in these cases. Otherwise, M(n) grows with each duplicated subtree, and it can be derived that M(n) = O(n^2). Let t_1, t_2, . . . , t_{(n+1)/2} be the (n+1)/2 descriptors (i.e., leaves) in the original n-node full binary tree. We divide the t_i
into four groups (A, B, C 1 , C 2 ) of equal size, each group
having (n+1)/8 descriptors. Let A, B, C_1, and C_2 also denote the disjunctions of the descriptors in the corresponding groups. Fig. 6 can then be written symbolically, with expressions (9), (10), and (11) corresponding to Figs. 6(a), 6(b), and 6(c) respectively. Equation
(12) represents the resulting CDNF Q̄, which consists of ((n+1)/4)^2 compact atomic descriptors with 2 descriptors in each of them. We can therefore show that the characteristics of the CDNF of an n-node complete binary tree are:
• Q̄ contains (n+1)^2/8 descriptors,
• each compact atomic descriptor contains 2 descriptors,
• Time_complete(Boolean expression → CDNF) = O(n^2),
• Space_complete(CDNF) = O(n^2).
B. From CDNF To RDNF
Assume are two Boolean expressions, which
have n 1 and n 2 total nodes and p 1 and p 2 distinct descriptors
respectively. Let p be the size of the union of these two
distinct descriptor sets, c 1 and c 2 the number of compact
atomic descriptors of -
the number
of reduced atomic descriptors of ~
We observe
that the resulting RDNFs of Boolean expressions Q 1 and
contain the following characteristics:
ffl each reduced atomic descriptor contains p descriptors,
Q 1 has (r 1 \Theta p) descriptors,
descriptors.
In ~
each compact atomic descriptor containing
2 descriptors is expanded to 2 p\Gamma2 reduced atomic descriptors
containing p descriptors. However, some of these
reduced atomic descriptors are duplicates. Therefore the
total number should not exceed 2 p , which is the number
of all possible combinations of reduced atomic descriptors
containing p descriptors. The space complexity of ~
be derived as
ae
where (r 1 \Theta p) is the number of descriptors in ~
is the number of logical operators in ~
The time for transforming CDNF to RDNF consists of
expanding each compact descriptor, and 2) checking and
removing duplicate reduced atomic descriptors. Because 2)
can be done as 1) is being executed, it is omitted in our
analysis. Thus, the time complexity for an n-node binary
tree is
C. Computation
To calculate the similarity measure S \Phi between two CD-
NFs, we need to compare their compact atomic descriptors.
Using the notation given above, we further define that the
th
descriptors
and the j th (1 - j - c 2 ) atomic descriptor of
To speed up the computation time, all the descriptors
within the compact atomic descriptors or reduced
LI AND DANZIG: BOOLEAN SIMILARITY MEASURES FOR RESOURCE DISCOVERY 11
atomic descriptors are sorted before calculating their sim-
ilarities. Therefore it takes
To compare these two CD-
NFs term-by-term, it takes
Hence,
Time S \Phi (computation)
Similarly, we transform the same two Boolean expressions
to RDNFs ~
each reduced atomic descriptors
contains exact p descriptors. Using the same optimal
sorting method, it takes
time to
time to compare them.
Thus,
Because RDNF is obtained by expanding its CDNF,
we are certain that c 1 - r 1 and c 2 - r 2 . There-
fore, Time S \Phi (computation) is always less than or equal to
are both n-node binary
trees as described above, then k
Time S \Phi (computation)
D. Remarks
Below, we summarize the previous time and space analysis
Time S \Phi (computation)
Time S \Phi (computation)
Space
Space S
The above comparisons are analyzed based on a pair of
Boolean expressions only. For (N +1) Boolean expressions,
consisting of one incoming query and N server descriptions,
their time and space complexities are in proportion to N
[18]. As discussed previously, an n-node binary tree consists of (n+1)/2 leaves (or descriptors in the Boolean expression). The number of distinct descriptors p must be no larger than (n+1)/2 = O(n). Thus, the complexities
of the two measures S \Phi and S can be simplified as
shown in Table II.
TABLE II
Time and space complexities of S^Φ and S^* for one user query against N server descriptions. Both the user query and the server descriptions are n-node binary trees.
                     S^Φ          S^*
time complexity      O(N n^4)     O(N 2^{2n} n)
space complexity     O(N n^2)     O(N 2^n n)
Apparently, S^Φ outperforms S^* in both time and space
complexities. The above analysis shows that our similarity
measure based on CDNFs consumes up to exponentially
less time and space than Radecki's method. The following
example further illustrates the performance difference
between the two measures.
Example 8: Consider a directory of services containing
100 server descriptions, each consisting of 5 descriptors.
The time and space needed to calculate the similarities S^* and S^Φ for a 5-descriptor user query follow from the expressions for Time_{S^*}, Time_{S^Φ}, Space_{S^*}, and Space_{S^Φ} above with N = 100.
When using S \Phi , the directory of service is eight times faster
in searching the relevant servers, and takes only one-sixth
the space of S . 2
VI. Implementation
In the client-directory-server model, the directory of services
ranks the servers by comparing their descriptions with
the query. Both the query and the server descriptions need
to be normalized before the comparison. In our method,
the normalization of a Boolean expression is independent
of other Boolean expressions in the comparison. There-
fore, we can pre-normalize the server descriptions and store
them in the directory of services. Below, we describe the
implementation of our Boolean similarity measure.
We use the UNIX tools flex and bison to parse the nested
Boolean expressions and build the associated binary parse
trees. Each attribute-value pair in the user query and the
server description is presented as a three-element subtree in
the binary parse tree. The three-element subtree consists
of one parent node and two child nodes. The left and right
child nodes, i.e. the leaves, are the attribute name and its
value respectively. The leaves are joined by the parent
node, which is a relational operator ("=" or "≠"). These
subtrees are merged by the logical operators (and and or)
to form the binary parse tree.
The binary parse trees are transformed to their equivalent
CDNF binary trees based on the distributive law. Notice
that while replicating the subtree (such as C in Fig. 4),
we only copy the logical operator nodes in order to save
space. For relational operator nodes, only their associated
pointers are copied. All the nodes in the binary tree whose
parents are or are linked together after the distributive
normalization. Consider the following example.
Example 9: Let Q 1 be an incoming user query and RA, RB, and RC be three server descriptions stored as CDNFs (R̄_A, R̄_B, and R̄_C) in the directory of services. The Q 1 is normalized to Q̄_1.
The similarity values between the user query and the three server descriptions are then computed with S^Φ. 2
Figs. 7 and 8 show the binary parse tree of the user query Q 1 of Example 9 before and after normalization. Fig. 9
shows the server description RC after normalization. The
link generated in each normalized binary tree is pointed at
by head.
Fig. 7. User query Q 1 before normalization.
Fig. 8. Normalized user query Q̄_1. The head links all the nodes whose parents are or. The dashed subtree is a replicated subtree. The Q̄_1^i is one of the compact atomic descriptors in Q̄_1.
Fig. 9. Normalized server description R̄_C. The head links all the nodes whose parents are or. The R̄_C^j is one of the compact atomic descriptors in R̄_C.
After the normalization process, we compare each component
in the links of the two binary trees. Each element in
the server description link represents a compact atomic descriptor R̄_C^j in the server description R̄_C. Each element in the user query link represents a compact atomic descriptor Q̄_1^i in the user query Q̄_1. To calculate s^Φ(Q̄_1^i, R̄_C^j), we compare all the nodes under Q̄_1^i with all the nodes under R̄_C^j and find the number of uncommon nodes between them. Then we sum up all the s^Φ(Q̄_1^i, R̄_C^j) to obtain S^Φ(Q̄_1, R̄_C). Similarly, we calculate S^Φ for the other server descriptions, sort the servers in descending order of their similarity values with Q 1, and return the result to the user.
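Putting the pieces together, the directory of services can rank servers for a query as in the following sketch. Server descriptions are pre-normalized CDNFs (sets of frozensets of attribute-value descriptors); the pairwise score again uses the assumed stand-in for s^Φ rather than the exact function, and the example query and descriptions are only loosely modeled on Example 9.

def s_phi(q_cdnf, r_cdnf):
    """Assumed stand-in for S^Phi over CDNFs given as sets of frozensets of descriptors."""
    return sum(2.0 ** -(len(qi - rj) + len(rj - qi))
               for qi in q_cdnf for rj in r_cdnf)

def rank_servers(query_cdnf, server_cdnfs):
    """server_cdnfs: dict mapping server name -> pre-normalized CDNF.
    Returns server names in descending order of similarity to the query."""
    scores = {name: s_phi(query_cdnf, cdnf) for name, cdnf in server_cdnfs.items()}
    return sorted(scores, key=scores.get, reverse=True)

query = {frozenset({'keyword=network', 'author=Smith'}),
         frozenset({'keyword=UNIX', 'author=Smith'})}
servers = {'A': {frozenset({'keyword=network'})},
           'B': {frozenset({'author=McLeod', 'keyword=database'})}}
print(rank_servers(query, servers))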
VII. Conclusions
We have developed a new method using compact disjunctive
normal form (CDNF) to rank the similarity between
Boolean expressions. We compared our method with
LI AND DANZIG: BOOLEAN SIMILARITY MEASURES FOR RESOURCE DISCOVERY 13
Radecki's measure on two databases and used the Spearman
rank coefficients and the confidence intervals to show
that our method can get a closer ranking order to that
generated by Jaccard's coefficient. The theoretical analysis
proves that this new measure outperforms the one proposed
by Radecki significantly in terms of time and space
complexity. These results demonstrate that our similarity
measure can greatly improve the searching process in
today's world of overwhelming information.
In addition to ranking results, similarity estimates can
be used to help identify similar but autonomously managed
retrieval systems. For example, the similarity measure can
be used to cluster servers with similar descriptions in a single
directory entry. When the similarity measures of two
servers exceed a certain value, they can be merged to remove
redundancy. Moreover, the administrator can create
new servers by using the most frequently asked queries as
the filter and select other relevant servers as its information
sources. Thus, most user queries can be satisfied by
a small number of servers which reduces search time. For
people using Boolean expressions to represent their inter-
ests, such as collaborative filtering [19] or user profile [20],
[21], similarity measure can help find other individuals having
common interests, so that they may share their collec-
tions. Our method can also benefit systems that support
automatic query formulations by relevance-feedback [22],
[23], where the reformed queries could be in complex
Boolean forms.
--R
"Vocabulary problem in Internet resource discovery"
"Essence: A resource discovery system based on semantic file indexing"
"An information system for corporate users: Wide Area Information Servers"
"Distributed indexing of autonomous Internet services"
"A direct manipulation interface for Boolean information retrieval via natural language query"
"A graphical filter/flow representation of Boolean queries: A prototype implementation and evaluation"
"Algorithms for automatic construction of query formulations in Boolean form"
"Internet resource discovery services"
"Archie: An electronic directory service for the Internet"
"A content routing system for distributed information servers"
"The efficacy of GlOSS for the text database discovery problem"
Information Retrieval
"A model of a document-clustering-based information retrieval system with a Boolean search request formu- lation"
"Similarity measures for Boolean search request formulations"
Rank Correlation Methods
The Art of Computer Systems Performance Analysis
"Boolean similarity measures for resource discovery"
"Using collaborative filtering to weave an information tapestry"
"Modeling of user preferences and needs in Boolean retrieval systems"
"Index structures for selective dissemination of information under the Boolean model"
"The use of automatic relevance feedback in Boolean retrieval systems"
"Ad- vanced feedback methods in information retrieval"
--TR
--CTR
Rashid Ali , M. M. Sufyan Beg, A comprehensive model for web search evaluation, Proceedings of the 5th WSEAS International Conference on Circuits, Systems, Electronics, Control & Signal Processing, p.159-164, November 01-03, 2006, Dallas, Texas
Katia Sycara , Seth Widoff , Matthias Klusch , Jianguo Lu, Larks: Dynamic Matchmaking Among Heterogeneous Software Agents in Cyberspace, Autonomous Agents and Multi-Agent Systems, v.5 n.2, p.173-203, June 2002
Charles L. A. Clarke , Gordon V. Cormack, Shortest-substring retrieval and ranking, ACM Transactions on Information Systems (TOIS), v.18 n.1, p.44-78, Jan. 2000
Weiyi Meng , Clement Yu , King-Lup Liu, Building efficient and effective metasearch engines, ACM Computing Surveys (CSUR), v.34 n.1, p.48-89, March 2002 | resource discovery;ranking;boolean query;information retrieval;similarity measure |
627899 | Coherence Approach to Logic Program Revision. | AbstractIn this paper, we present a new approach to the problem of revising extended programs; we base this approach on the coherence theory initially advocated by Gardenfors for belief revision. Our approach resolves contradiction by removing only conflicting information, not the believed source of it, and therefore, keeps information loss minimal. Furthermore, since there is no need to search for problematic assumptions, as is done in the traditional assumption-removal approach, our approach provides a skeptical revision semantics that is tractable. We define the skeptical and credulous coherence semantics and show that both semantics can be characterized in terms of the fixpoint semantics of a revised program using a simple program-revision technique. These semantics provide a suitable framework for knowledge and belief revision in the context of logic programs. Semantical properties and advantages of the proposed revision semantics are also analyzed. | Introduction
T HE extension of logic programs with classical negation
significantly increases the expressive power of logic
programs, but also presents new challenges [9]. Among
other things, the contradiction problem brought up by classical
negation has to be addressed.
Unlike normal programs, an extended program with
classical negation may not be consistent. For example,
considered contradictory since
both a and :a can be derived from it.
Many attempts have been made to resolve the contradiction
problem. However, almost all proposals are based
on one approach: Resolve the contradiction problem by
removing some problematic assumptions, though different
mechanisms may be used for the removal. (The assumptions
here refer to the values of the default negations. The
term is used simply because such values are usually determined
by first assuming a value and then justifying the
value.) Three notable examples are:
1. the contradiction removal semantics [14],
2. the argumentation semantics [6], and
3. the assumption denial semantics [23].
The assumption removal approach, though avoiding contradictory
conclusions caused by negation as failure, is not
suitable for many applications, such as knowledge and belief
revision.
An analysis of semantical properties shows that the
assumption-removal approach violates two important semantical
principles in logic programming and knowledge
revision: the principles of conservatism and relevance.
The authors are with the Department of Computing Science, University
of Alberta, Edmonton, Canada T6G 2H1; email: fyuan,
[email protected].
Manuscript received 24 Apr. 1995; revised 29 Feb. 1996.
IEEE Log Number 104442
One of the basic postulates in knowledge revision is that
of conservatism which requires that any conclusions derived
from a revised system be derivable from the system prior to
the revision [7]. The assumption-removal approach, though
avoiding contradiction, does not address the consequences
of assumption removal. As such, conservatism may not
hold.
Example 1.1: Consider \Pi below
:a / notb; a /; c / not:a;
and the credulous argumentation semantics [6].
With each negative literal :L viewed as a named new proposition
L 0 , the credulous argumentation semantics of \Pi does
not imply c, which demonstrates that c is not true with
respect to the credulous argumentation semantics without
involving any revision mechanism. Since \Pi has a unique
preferred extension fnot:ag, the credulous argumentation
semantics of \Pi does imply a and c which violates
conservatism (see Section 5 for details). 2
The principle of relevance requires that the values of the
literals in a set be determined by only those clauses that define
them [3], [11]. Relevance has been considered essential
in any goal-directed query evaluation [2] while the goal-directed
theoretical characterization forms suitable bases
for logic programming [12]. The following example demonstrates
that the assumption removal approach may violate
relevance.
Example 1.2: Assume \Pi is defined by
a / notb; :a / notb; d / notb:
To resolve the conflict between a and :a, the assumption-
removal semantics has to remove notb. Consequently, d
cannot be derived. Note that the relevant program of d ,
i.e., the set of all clauses that define d, is fd / notbg
which deduces d. 2
Furthermore, the assumption-removal approach suffers
computationally from finding a minimal set of problematic
assumptions. This is the cause of intractability in the
contradiction removal semantics, as shown in [19], and in
the grounded version of the ideal skeptical semantics [1],
as shown in [22]. Although Dung's grounded argumentation
semantics [6] does not attempt to compute a minimal
removal set, it is also NP-hard since it is NP-complete to
determine if an atom is derivable from a sound argument
[22], an essential property in attack and counterattack relationships.
In this paper, we present an alternative approach, based
on the coherence theory [7], to removing contradiction from
extended programs. This approach provides a suitable semantics
for knowledge and belief revision in the framework
of logic programs. The idea is simple but very effective:
Given an extended logic program \Pi and a set N
of assumptions, the coherence revision semantics
is determined by the disjunction of maximal consistent
sets of literals derivable from \Pi N 1 .
Note that the assumption-removal approach is based on
the assumption that inconsistency is caused by problematic
assumptions while the coherence approach removes contradiction
regardless of the source of it.
We investigate the coherence approach mainly because
1. the coherence approach resolves contradiction by removing
only conflicting information, not the believed
source of it, and therefore, keeps information loss min-
imal; and
2. in this approach there is no need of searching for minimal
sets of problematic assumptions, and thus it provides
a skeptical revision semantics that is tractable.
In the paper, we first define the notion of consistent-and-
justified (CJ) partial models, based on a negation-as-failure
rule and the coherence theory, and then present the skeptical
and credulous coherence semantics in terms of CJ partial
models.
A remarkable fact about CJ partial models of a program
\Pi is that they are fixpoints of a suitably defined operator
for program \Pi R , which is obtained from \Pi by a simple
program revision technique: semi-normalizing the doubling
program [17]. This program revision technique is simple
but not trivial, and it gives a natural revision semantics
based on the coherence theory.
We also analyze the semantical properties of the proposed
revision semantics and show that these semantics
satisfy conservatism and relevance, among a number of
properties along the line similar to that of Dix [4].
II. Preliminaries
We briefly review the basic concepts and important results
that are useful for the following discussions. An ex-
1 \Pi N is the program obtained by GL-transformation defined in Section
2.
tended clause is a formula of the form
L / L 1 ; : : : ; Lm ; notLm+1 ; : : : ; notLm+n ;
where L i are literals and notL j are assumed negations (or
assumptions). Note that :A denotes a fact that A is false
while notA denotes an assumption that A cannot be true.
Sometimes, we also use rhs to denote the body of a clause.
An extended logic program, or a program for short, is a set
of extended clauses. Assume L is a literal, by :L we mean
the literal that is complementary to L. For simplicity we
consider propositional logic only and the only literals of
interest are the set of all literals, denoted as L \Pi , whose
atoms appear in a given program \Pi.
Given \Pi, an assumption set N is defined as a set of assumed
negations whose literals are in L \Pi . The intended
meaning of \Pi under N is determined by \Pi N , the GL-
transformation, obtained from \Pi by first deleting all notL
if notL 2 N , and then deleting all clauses that contain assumptions
in their bodies [10]. We say L is derivable from
\Pi, denoted as \Pi ' L, if either
1. L / is an extended clause in \Pi, or inductively,
2. there exists a clause L / L 1 ; : : : ; Lm without assumed
negations in \Pi such that \Pi ' L i for every 1 <= i <= m.
Apparently, \Pi ' L if and only if L is derivable from the
positive clauses, i.e., clauses without default negations, in
\Pi.
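The derivability test \Pi ' L can be sketched as a simple forward-chaining closure over the clauses without assumed negations; the clause representation and function names below are illustrative assumptions only.

# Sketch: forward-chaining derivability over clauses without assumed
# negations. A clause is a pair (head, body) with body a list of literals.

def derivable(positive_clauses, goal):
    derived = set()
    changed = True
    while changed:
        changed = False
        for head, body in positive_clauses:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return goal in derived

# Example: from { a <- ; b <- a } the literal b is derivable.
print(derivable([("a", []), ("b", ["a"])], "b"))   # True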
2.1 Three-valued logic and semantical character-
ization
A three-valued interpretation is a pair I
contains all the literals true in I, F all the literals false in
I such that T " and the rest literals are considered
undefined. We use t, f , and u respectively to denote the
three truth values, and the order of the truth values, based
on the so called truth ordering, is defined as: f < u < t.
The connective not is defined as: not t = f , not f = t, and
not u = u. The truth value of a conjunction is defined as
the minimum value among all the truth values of the literals
and assumptions in the conjunction. A program clause
is satisfied by an interpretation I if the truth value of the
head of the clause is greater than or equal to that of the
body. A model of a program is an interpretation in which
all the program clauses are satisfied. A three-valued interpretation
reduces to a two-valued interpretation if T [ F
contains all concerned literals. A binary relation among all
interpretations, based on the so called knowledge ordering,
is defined as: I = hT; F i - I 0 = hT 0 ; F 0 i if and only if
T is a subset of T 0 and F is a subset of F 0 .
In this case, I is said to be a sub-interpretation of I 0 . An
interpretation M is said to be a partial model of \Pi if there
exists a model M 0 of \Pi such that M - M 0 . For convenience
we abuse the notation in the following way: a set
I of literals and assumed negations and an interpretation
are used interchangeably. Since all proposed
semantics for programs can be characterized by a set of
three-valued interpretation, we have:
Definition 2.1: [4] Given program \Pi, we use SEM (\Pi) to
represent the set of all extended literals contained in every
characteristic interpretation under the semantics SEM .
That is,
where SEM \Pi assigns to every program \Pi a set of three
valued interpretations whose literals appear in \Pi. 2
Note that by SEM (\Pi) we mean a set of literals and assumed
negations as well as a three-valued interpretation,
whichever is convenient.
2.2 Justified models
We redefine the justified model [25] that can be used to
represent many existing semantics.
Definition 2.2: Let \Pi be a program and I an
interpretation.
1. A literal L in T is said to be positively justified wrt
I if there, recursively, exists a clause
in \Pi such that L i are positively
justified literals in T and L j 2 F for
2. A literal L 2 F is said to be negatively justified wrt
I if L is not positively justified wrt -
The following lemma presents an alternative definition for
the justified literals. Its proof is straightforward.
Lemma 2.3: Let I = hT; F i be an interpretation, notF =
fnotL j L 2 F g, and not - F = fnotL j L 62 Tg. Then
1. positively justified if and only if \Pi notF ' L
(through literals in T ),
2. L is negatively justified if and only if \Pi not -
F 6' L.
Note by \Pi ' L through literals in T , we mean there, recur-
sively, exists a clause L / L 1 ; : : : ; Lm ; notLm+1 ; : : : ; notLm+n
in \Pi such that L i 2 T for 1 <= i <= m.
Definition 2.4: M = hT; F i is a justified model of \Pi if
1. L is in T if and only if L is positively justified wrt M ;
and
2. L is in F if and only if L is negatively justified wrt
The justified model was proposed by Yuan and You and
it coincides with the partial stable model of Przymusinski
[15], [25]. The idea behind the justification model is very
simple and clear:
1. L is true if and only if it can be derived from the
program and all the assumed negations, and
2. L is assumed false if and only if it cannot be derived
from the program even when all non-true literals are
assumed false.
We define a least justified model as the justified model that
is a sub-interpretation of any justified model and a maximal
justified model as a justified model that is not a sub-
interpretation of any other justified model.
Proposition 2.5: Any program \Pi has a least justified
model.
The expressive power of the justified model is demonstrated
below by the fact that most semantics proposed so far can
be characterized in terms of the justified models.
Theorem 2.6: Let \Pi be a normal program. Then
1. the well-founded model of \Pi coincides with the least
justified model of \Pi;
2. M is a stable model of \Pi iff it is a two-valued justified
model of \Pi;
3. M is a regular model of \Pi iff it is a maximal justified
model of \Pi.
Note that the regular model semantics of You and Yuan
[20] has been recently shown to coincide with the partial
stable model semantics of Sacca and Zaniolo [16] and with
Dung's preferred extension semantics [5], as well as with
Przymusinski's maximal three-valued stable models [21].
For convenience, we will use the least justified model and
well-founded model interchangeably.
2.3 The alternating fixpoint theory revisited
The alternating fixpoint theory, first proposed by Van
Gelder [8] and then further developed by You and Yuan
[21], [24], is a powerful tool to characterize the various semantics
of programs. The theory is based on an important
transformation defined below.
Definition 2.7: Let \Pi be a program and N a set of assumed
negations. Then T \Pi is defined as a mapping on the
set of all assumption sets as follows: T \Pi (N ) = fnotL j
L 2 L \Pi and \Pi N 6' Lg.
T \Pi is anti-monotonic, i.e., N ⊆ N 0 implies T \Pi (N 0 ) ⊆ T \Pi (N ),
but not monotonic.
Definition 2.8: Let \Pi be a program and N an assumption
set. Then N is
1. a fixpoint of \Pi if T \Pi (N ) = N ;
2. an alternating fixpoint of \Pi if T \Pi (T \Pi (N )) = N ; and
3. a normal alternating fixpoint of \Pi if it is an alternating
fixpoint and N ⊆ T \Pi (N ).
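Under the formulation of T \Pi and Definition 2.8 given above, the operator and the three fixpoint tests can be sketched as follows; the clause representation and all names are illustrative assumptions, and the code is only a sketch of the definitions, not a procedure from the paper.

# Sketch: the GL-transformation, the operator T_Pi, and the fixpoint tests
# of Definition 2.8. A clause is (head, pos_body, neg_body); an assumption
# set N is a set of literals L standing for the assumptions "not L".

def gl_transform(program, assumptions):
    # keep a clause only if every "not L" in its body is assumed, and then
    # drop the remaining (now satisfied) assumptions
    return [(head, pos, []) for head, pos, neg in program
            if all(l in assumptions for l in neg)]

def derives(positive_program):
    # least set of literals derivable from a negation-free program
    derived = set()
    changed = True
    while changed:
        changed = False
        for head, pos, _ in positive_program:
            if head not in derived and all(b in derived for b in pos):
                derived.add(head)
                changed = True
    return derived

def t_pi(program, assumptions, all_literals):
    derived = derives(gl_transform(program, assumptions))
    return {l for l in all_literals if l not in derived}

def is_fixpoint(program, n, lits):
    return t_pi(program, n, lits) == n

def is_alternating_fixpoint(program, n, lits):
    return t_pi(program, t_pi(program, n, lits), lits) == n

def is_normal_alternating_fixpoint(program, n, lits):
    return (is_alternating_fixpoint(program, n, lits)
            and n <= t_pi(program, n, lits))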
The following theorem establishes a one-to-one correspondence
between the justified model and the normal alternating
fixpoint.
Theorem 2.9:
1. If N is a normal alternating fixpoint of \Pi then hT ; F i
is a justified model of \Pi, where T = fL j \Pi N ' Lg and
F = fL j notL 2 Ng;
2. If hT ; F i is a justified model of \Pi then fnotL j L 2 Fg
is a normal alternating fixpoint of \Pi.
The proof of the theorem is quite straightforward. Assume
N is a normal alternating fixpoint of \Pi. Then we have
which implies that I = hT; F i is a three-valued
interpretation. Lemma 2.3 implies that L 2 T if and only
if L is positively justified wrt hT ; F i, and L 2 F if and
only if L is negatively justified wrt I = hT ; F i, which, by
Definition 2.4, implies that hT ; F i is a justified model.
(2) follows similarly. 2
2.4 The contradiction problem
Obviously, a program \Pi is inconsistent if there exists a
pair of complementary literals, a and :a such that \Pi ' a
and \Pi ' :a. Unfortunately, this simple criterion does not
identify many problematic programs.
Example 2.1: Let \Pi be
Though no pair of complementary literals can be derived
from \Pi, the program is not
contradiction-free. In fact, the unique answer set of \Pi is
inconsistent in that it contains a pair of complementary
literals b and :b. 2
A program is considered problematic if it leads to contradictory
conclusions under a given semantics. Since most
proposed semantics of extended programs can be characterized
by justified models, we are to classify problematic
programs by their justified models.
A justified model hT; F i is said to be consistent if T does
not contain a pair of complementary literals. Further, an
extended program is said to be contradiction-free if all of its
justified models are consistent. Otherwise, it is considered
contradictory.
III. Coherence Semantics
If a justified model is inconsistent, then naturally
we may have to revise it, using either the assumption-
removal approach or the coherence approach. The former
is to consider only those subsets of F that do not cause
inconsistency, and the latter is to find a proper consistent
subset of T , as illustrated below.
Example 3.1: Consider \Pi below:
a / notb; :a / notc; d / notb; notc:
Assume M = hT; F i is a justified model of \Pi. Since
b does not appear in the head of any clause in \Pi, b is
not positively justified wrt any interpretation, which implies
b is negatively justified wrt M , and therefore, b 2
F . Similarly, we have fc; :b; :c; :dg ' F . By Definition
2.2, this implies that a; :a; d are all positively justified
wrt M , that is, fa; :a; dg ' T . It follows that
hfa; :a; dg; fb; c; :b; :c; :dgi is the only justified model of
\Pi and, obviously, it is inconsistent.
Consider four consistent sub-interpretations of M :
For any literal is assumed false if and
only if it is negatively justified wrt M i and a literal is true
only if it is positively justified wrt M i . However, :a and a
are undefined in M 1 and M 2 respectively, despite the fact
that :a and a are positively justified wrt M 1 and M 2 respectively
For 4, on the other hand, a literal is
true in M j if and only if it is positively justified wrt M j and
a literal is assumed false only if it is negatively justified wrt
Similarly, notc and notb are not assumed in M 3 and
respectively even though c and b are negatively justified
wrt M 3 and M 4 respectively.
The assumption removal approach favors M 3 and M 4
while the coherence approach chooses M 1 and M 2 . 2
As argued at the outset, we choose the coherence approach
to revise any inconsistent justified models. One of the challenges
in the coherence revision is how to find an appropriate
consistent subset of T . Two different philosophical
beliefs lead to two different approaches: the credulous and
the skeptical. The credulous approach favors the maximum
consistent subsets of all positively justified literals,
and naturally, is not computationally tractable. The skeptical
approach, on the other hand, derives a conclusion only
if it is well-founded. However, the question arises as to
what degree a conclusion is considered well-founded. Before
answering this question, let us see another illustrating
example.
Example 3.2: Assume \Pi is defined as
a /; :a /; d / nota; :d /;
and
Then M 1 is a justified but inconsistent model of \Pi, and
is a consistent partial model of \Pi,
though M 2 is a sub-model of M 3 and M 3 is a sub-model of
both M 4 and M 5 .
For the credulous approach, both M 4 and M 5 are an obvious
choice since they contain a maximal consistent subset
of all positively justified literals. For the skeptical approach,
to be extremely cautious, we could always take an empty set,
like though it is obviously not reasonable.
The difference between M 3 and M 4 is that M 4 derives
:a without a well-founded justification (note that a may
also be selected, as in M 5 derives :d because d
is negatively justified wrt the well-founded semantics of \Pi
in that notd is true under the well-founded semantics of
\Pi, and therefore, d will never conflict with :d. 2
The above example leads to the following definition of
"shadow-justification" to characterize the special status of
d and the like.
Definition 3.1: Let \Pi be a program and M = hT; F i a
partial model of \Pi. Further, let \Pi M be the program obtained
from \Pi by deleting notL if L 2 F . A literal L is
said to be negatively shadow-justified wrt M if notL is assumed
under the well-founded semantics of \Pi M . 2
The following lemma demonstrates that the negative
shadow-justification is a weaker notion than negative-
justification.
Lemma 3.2: Assume M = hT; F i is a partial model of
\Pi such that each literal in T is positively justified wrt M .
Then L is negatively shadow-justified wrt M if L
is negatively justified wrt M . 2
A negatively shadow-justified literal, on the other hand,
is not necessarily a negatively justified literal. For exam-
ple, consider M 3 in Example 3.2. d is negatively shadow-
justified wrt M 3 , but not negatively justified wrt M 3 .
We are to define a suitable partial model such that L
is derived under the skeptical approach if it is positively
justified and :L is negatively shadow-justified. This leads
to the following key definition.
Definition 3.3: An interpretation M = hT; F i is said to
be a consistent-and-justified partial model (or, CJ partial
model) of \Pi if
1. L 2 F if and only if L is negatively justified wrt M ,
2. L is in T only if L is positively justified wrt M ,
3. L is in T if (1) L is positively justified wrt M and (2)
:L is negatively shadow-justified wrt M , and
4. T contains no pair of complementary literals. 2
The first condition retains the negative justification rule
for assumed negations. The second indicates that all true
literals must be positively justified. The third condition
specifies the lower bound of the well-foundedness, that is, L
must be in T if it is positively justified and :L is negatively
shadow-justified.
Example 3.3: Consider \Pi in Example 3.2 again.
are the set of all CJ partial models of \Pi while
is not a CJ partial model of \Pi. 2
It is straightforward to see that for any normal program
\Pi, M is a justified model of \Pi if and only if it is a CJ
partial model of \Pi. The following shows that any extended
program has a least CJ partial model and its proof follows
from Theorem 4.2.
Theorem 3.4: Every extended program has a least CJ
partial model that is a sub-interpretation of every CJ partial
model of \Pi. 2
Definition 3.5: Let \Pi be an extended program. Then M
is said to be
1. a skeptical partial model of \Pi if it is the least CJ
partial model of \Pi, and
2. a maximal consistent-and-justified partial model of
\Pi if it is a CJ partial model of \Pi and not a sub-
interpretation of any other CJ partial model of \Pi.
Obviously, any extended program has the unique skeptical
partial model. Now we are in the position to define the
coherence semantics for extended programs.
Definition 3.6: Let \Pi be an extended program. Then
1. the skeptical coherence semantics (SCS) of \Pi is characterized
by the skeptical partial model; and
2. the credulous coherence semantics (CCS) of \Pi is characterized
by the set of all maximal consistent-and-
justified partial models of \Pi. 2
Note that we say a semantics is characterized by a set S of
partial models if an extended literal is true in the semantics
if and only if it is contained in every partial model in
S. Theorem 3.4 shows that both the skeptical and credulous
coherence semantics are well-defined over all extended
programs.
IV. Revised Programs
In this section we first present the revised program, based
on two simple program revision techniques, namely, the
doubling transformation, first proposed by Wallace [17],
and the semi-normalization. Then we will show that the
revised program can be used to characterize the coherence
semantics.
A clause with L as the head is semi-normal if its body
contains not:L. In the following, we define the revised
program \Pi R as the semi-normalized program of the doubling
transformation of \Pi.
Definition 4.1: Let \Pi be an extended program. Then
the revised program of \Pi, denoted as \Pi R , is the program
obtained from \Pi such that for each clause
L / L 1 ; : : : ; Lm ; notLm+1 ; : : : ; notLm+n
in \Pi, \Pi R contains precisely two clauses:
(4.1): L / L 1 ; : : : ; Lm ; notLm+1 ; : : : ; notLm+n ; not -Lm+1 ; : : : ; not -Lm+n ; not:L;
(4.2): -L / -L 1 ; : : : ; -Lm ; notLm+1 ; : : : ; notLm+n ;
where -L; -L 1 ; : : : ; -Lm ; -Lm+1 ; : : : ; -Lm+n are newly introduced atoms. 2
Note that the doubling transformation \Pi D of \Pi is defined
the same as \Pi R above but without semi-normalization.
Further, semi-normalization is applied only to (4:1), not
(4:2). This is simply because the head of (4:2) is a newly
introduced atom -L, whose complementary negative literal does not appear
in the revised program.
Example 4.1: Let notbg. Then
\Pi R is defined by:
:a / nota; a / not:a; b / notb; not - b; not:b
:a /; -a /; - b / notb.
The revised program \Pi R of \Pi consists of two programs \Pi 1
and \Pi 2 such that the heads of all the clauses in \Pi 1 and \Pi 2
are original and newly-introduced literals, respectively.
Furthermore, since every clause in \Pi 1 is semi-normalized
and there are no negative occurrences of newly-introduced
literals -
L, \Pi R is always consistent.
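Under the reading of Definition 4.1 given above, the revision can be sketched as the following transformation; the encoding of literals as strings and the naming scheme for the newly introduced atoms are assumptions made only for illustration.

# Sketch: the revised program Pi^R of Definition 4.1 as reconstructed above.
# Literals are strings, neg(L) is the complementary literal, and bar(L) is a
# hypothetical naming scheme for the newly introduced atoms.

def neg(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def bar(lit):
    return "bar_" + lit

def revise(program):
    revised = []
    for head, pos, negs in program:
        # (4.1): original clause, plus "not bar(L_j)" for every assumption,
        #        semi-normalized with "not neg(head)"
        revised.append((head, list(pos),
                        list(negs) + [bar(l) for l in negs] + [neg(head)]))
        # (4.2): barred head and barred positive body, original assumptions
        revised.append((bar(head), [bar(l) for l in pos], list(negs)))
    return revised

# Example: the clause  c <- not -a  of Example 1.1, written as ("c", [], ["-a"])
print(revise([("c", [], ["-a"])]))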
The role of newly-introduced literals -
L is quite subtle
and interesting. Given any set N of assumptions, \Pi N and
are identical, subject to a homogeneous mapping between
L and -
L. Therefore, -
L characterizes the behavior
of L with respect to the original program \Pi under any assumption
set. Such characterizations effectively guarantee
the conservatism.
Consider not:ag
in Example 1.1. Then \Pi
not:a; not -
:a; not:cg and \Pi
not:ag.
Assume N = fnotb; not:ag. Then both -a and -
:a are
derivable from \Pi R
simply because both a and :a are
derivable from \Pi N . The derivation of -
:a then effectively
prevents c from being derived under \Pi R N since not -
:a cannot
be assumed. Without the derivation of -
:a, the revision
may derive c which violates the conservatism.
Interestingly, both the skeptical and credulous coherence
semantics can be characterized through the revised pro-
gram, as shown below.
Theorem 4.2: Let \Pi R be the revised program of \Pi,
M R = hT R ; F R i the well-founded model of \Pi R , and M = hT; F i
where T = T R ∩ L \Pi and F = F R ∩ L \Pi .
Then M is the unique skeptical partial model of \Pi. 2
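A minimal sketch of the computation suggested by this theorem is given below: build \Pi R, compute its well-founded model, and project away the newly introduced atoms. It reuses revise, bar, t_pi, gl_transform and derives from the earlier sketches, and it obtains the well-founded assumptions as the least fixpoint of T \Pi composed with itself starting from the empty assumption set; this iteration is the standard alternating-fixpoint construction and is our assumption rather than a step spelled out in the text.

# Sketch: the computation suggested by Theorem 4.2, reusing revise(), bar(),
# t_pi(), gl_transform() and derives() from the earlier sketches. lits must
# contain every literal together with its complement.

def well_founded(program, lits):
    n = set()
    while True:
        nxt = t_pi(program, t_pi(program, n, lits), lits)
        if nxt == n:
            break
        n = nxt
    true = derives(gl_transform(program, n))
    return true, n                      # (true literals, assumed-false literals)

def skeptical_partial_model(program, lits):
    rp = revise(program)
    rlits = set(lits) | {bar(l) for l in lits}
    t_r, f_r = well_founded(rp, rlits)
    # project away the newly introduced atoms
    return {l for l in t_r if l in lits}, {l for l in f_r if l in lits}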
This theorem not only shows the existence of the skeptical
partial model but also gives a polynomial algorithm
for computing the skeptical partial model of any given extended
program.
Definition 4.3: N is an L-maximal fixpoint of the revised
program \Pi R of \Pi if
1. N is a fixpoint of \Pi R , and
2. there exists no fixpoint N 0 of \Pi R such that
fnot -
Ljnot -
Ljnot -
The following theorem establishes a one-to-one correspondence
between the L-maximal fixpoint of \Pi R and the maximal
CJ partial model of \Pi.
Theorem 4.4:
1. If N is an L-maximal fixpoint of \Pi R then
is a maximal CJ partial model of \Pi, where
2. If hT ; F i is a maximal CJ partial model of \Pi then
is an L-maximal fixpoint
of \Pi R .
Note that L used in the above two theorems refers to a
literal appearing in \Pi, not a newly introduced atom -L 0 for
some literal L 0 .
V. Semantical Properties
In this section, we first define several interesting semantical
properties, and then demonstrate that the coherence
semantics satisfies some of these properties.
Given program \Pi and set S of literals, the value of S
should be determined by only those clauses whose heads
define literals in S. This leads to the relevant program,
denoted as REL(\Pi; S), of S under \Pi.
1. L / body is in REL(\Pi; S) if either L or :L is in S.
2. L / body is in REL(\Pi; S) if L is contained in
LREL(\Pi;S) .
3. Nothing else is in REL(\Pi; S).
Note that we always consider L and :L to be relevant.
With the introduction of classical negation we have to consider
the coherence between L and :L. In order to reflect
the degree of coherence of various semantics, we define an
isomorphy transformation I by replacing each negative literal
:a with a newly introduced propositional symbol. I \Gamma1
is then defined as the reverse transformation of I. Note that
for any newly introduced proposition pnew , I returns
the nil value since no literals in L \Pi are mapped to
:pnew . It is easy to see that for most of the existing non-
revision semantics, we have I \Gamma1 (SEM (I(\Pi))) = SEM (\Pi).
However, this does not hold for the revision semantics if \Pi
is not contradiction-free.
Now we are in the position to define some very interesting
properties for extended logic program semantics.
Definition 5.1: A semantics SEM is said to satisfy
1. N-cumulativity if SEM (\Pi) = SEM (\Pi ∪ F ) for any
set of negations F ⊆ SEM (\Pi);
2. negative-justification if notL 2 SEM (\Pi) for
every L that is negatively justified with respect to SEM (\Pi);
3. simplicity if L 2 SEM (\Pi) for every L / in \Pi;
4. relevance if SEM (\Pi) ∩ L S = SEM (REL(\Pi; S)) ∩ L S
for any set S ⊆ L \Pi ;
5. conservatism if SEM (\Pi) ⊆ I \Gamma1 (SEM (I(\Pi)));
6. preservation if SEM (\Pi) = I \Gamma1 (SEM (I(\Pi))) for
any contradiction-free program \Pi; and
7. coherence if L 2 SEM (\Pi) implies not:L 2 SEM (\Pi). 2
N-cumulativity expresses the desire that the semantical inference
can be carried out incrementally, based on assumed
negations. Negative-justification is used to characterize the
negation as failure rule, and simplicity is plain and straight-
forward. The value of any set of literals should be determined
by only those clauses that define them, and this
is relevance [3], [11]. Conservatism requires that no new
conclusions be derived through the revision. Preservation
indicates that the revision semantics should not change
the intended meaning of the program if the program is
contradiction-free. Coherence, first realized by Pereira and
Alferes [13], is to measure the degree of coherence of the
concerned semantics.
Remark In the context of normal programs, it is easy to
check that
1. almost all well-behaved semantics, as defined in [4],
satisfy N-cumulativity, negative-justification, simplicity
and relevance, and any proposed semantics that
satisfies N-cumulativity, negative-justification, simplicity
and relevance is well-behaved; 2 and
2. any semantics for normal program satisfies the rest of
above properties trivially.
2 The subtle difference between the well-behaved semantical properties
and these four properties will be discussed elsewhere.
In the context of extended programs, however, it is not difficult
to see that any consistent semantics CANNOT satisfy
both negative-justification and simplicity. The reason
is simple: to remove inconsistency, we have to give up either
negative-justification or simplicity. The assumption
removal approach gives up negative-justification and retains
simplicity while the coherence approach gives up the
simplicity and retains negative-justification.
Theorem 5.2:
1. Both SCS and CCS satisfy N-cumulativity, negative-
justification, relevance, and conservatism, but not simplicity
2. The CCS satisfies preservation but the SCS does not.
3. Neither SCS nor CCS satisfies coherence 3 . 2
The following example shows that SCS does not satisfy
preservation and CCS does not satisfy coherence.
Example 5.1: Let
is the only justified model and is consistent.
Further, the skeptical partial model of \Pi is h;; ;i. 2
It is very important for a semantics to satisfy preserva-
tion, meaning that the intended meaning of a program
should not change during the revision if the program is
problem-free. The skeptical coherence semantics does not
satisfy preservation, mainly due to the weak definition of
contradiction-free. As a matter of fact, SCS does satisfy
both preservation and coherence for any program whose
well-founded model satisfies coherence.
We conclude this section with the following demonstrating
example.
Example 5.2: Let us consider the following knowledge
base:
Most presidential candidates are honest;
Most presidential candidates are professional politicians;
Politicians are not honest;
Republican politicians are conservative;
Non-conservative presidential candidates are liberals;
Dole is a Republican presidential candidate.
This can be represented by the following program:
republican(Dole) /
pres candidate(Dole) /
honest(X) / pres candidate(X), not abnormal(X)
politician(X) / pres candidate(X), not abnormal(X)
:honest(X) / politician(X)
conservative(X) / republican(X), politician(X)
liberal(X) / pres candidate(X), not conservative(X)
3 Our approach was named after the coherence theory advocated
by Gardenfors for belief revision while the coherence property was
formulated by Pereira and Alferes [13]. The term coherence has two
entirely different meanings, which explains why our coherence approach
does not satisfy the coherence property.
The program is inconsistent because both honest(Dole) and
: honest(Dole) can be derived. The program, however, has
a unique CJ-partial mode M and, therefore, both revision
semantics coincide and imply the following:
republican(Dole),
pres-candidate(Dole),
politician(Dole),
not liberal(Dole).
Please note that the revision semantics concludes neither
honest(Dole) nor :honest(Dole).
For comparison, the assumption removal approach has
to remove not abnormal(Dole), and therefore, may not be
able to derive politician(Dole). For example, the credulous
argumentation semantics of this program assumes not con-
servative(Dole) and consequently, concludes that Dole is
a liberal. The derivation of liberal(Dole) in the credulous
argumentation semantics is due to its violation of conservatism.
VI. Prioritized Revision
The proposed coherence approach does not impose any
priority relations among conflicting parties, though in
many applications, certain priority relations are desired.
Example 6.1: Consider the following program \Pi:
light on / switch on; notbroken;
switch on /; :light on /;
which describes the following situation:
normally lights are on if the switch is not broken
and as a matter of fact, we found out that lights
are not on.
For such applications, we do expect that lights are not
on. In fact both the skeptical revision of Witteveen and
Brewka and the argumentation semantics of Dung conclude
switch on and :light on by contrapositive reasoning and
the like [18], [6]. However, the skeptical coherence semantics
of \Pi implies that switch is on but not :light on due
to the conflict between the pair of complementary literals.This example demonstrates that the proposed coherence
semantics does not favor literals that are directly derivable
from a given program over those that are derivable through
assumed negations. For such applications, a priority relation
is necessary.
Assume \Pi is an extended program and L \Pi the set of
all literals involved in \Pi. Then a priority relation - is
defined as a binary relation on L \Pi such that L i - L j if the
priority of L i to be derived is at least as high as that of
. Since this paper concentrates on the conflict resolving,
only priorities between L and :L are considered.
Definition 6.1: Let \Pi be an extended program and -
the priority relation among literals in \Pi. Then the prioritized
program of \Pi, denoted as \Pi - is the program obtained
from \Pi by changing every clause L i / body into
L i / body; not:L i whenever :L i - L i . 2
Example 6.2: Consider \Pi in Example 6.1 and a priority
relation
-= f:light on - light ong:
Then \Pi - is
light on / switch on; notbroken; not:light on;
switch on /;
:light on /.
\Pi - is contradiction-free and its well-founded semantics implies
that the switch is on and :light on. 2
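The transformation of Definition 6.1, as read off from Example 6.2, can be sketched as follows; the pair representation of the priority relation and the reuse of neg from the earlier sketch are illustrative assumptions, not the paper's notation.

# Sketch: the prioritized program of Definition 6.1, reusing neg() from the
# earlier sketch. priorities is a set of pairs (hi, lo) meaning that hi has
# at least as high a priority to be derived as lo.

def prioritize(program, priorities):
    result = []
    for head, pos, negs in program:
        extra = [hi for hi, lo in priorities
                 if lo == head and hi == neg(head)]
        result.append((head, list(pos), list(negs) + extra))
    return result

# Example 6.2: the priority of -light_on is at least that of light_on.
pi = [("light_on", ["switch_on"], ["broken"]),
      ("switch_on", [], []),
      ("-light_on", [], [])]
print(prioritize(pi, {("-light_on", "light_on")}))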
As expected, the priority relation should be determined according
to applications and it is difficult to form a general
guideline. However, many revision semantics implicitly favor
the following priority:
L - :L if for every set N of assumptions such
that \Pi N ' :L there exists a subset N 0 of N such
that \Pi N 0 ' L.
Unfortunately, computing a semantics under such a priority
relation is very difficult. In many applications, literals
that are derivable directly from a given program are considered
to have higher priority to be held than their negations
that are derivable from assumed negations. Light on is a
notable example for such priority relations. The following
algorithm revises a given program \Pi according to such a
priority, based on the well-founded approach.
Algorithm 6.1:
Input \Pi: an extended program
Output revised program with a priority based on
the well-founded approach
Methods
(1) \Pi - := \Pi;
(2) compute the well-founded model hT; F i of \Pi - ;
(3) for each L 2 T but :L 62 F do
(4) change :L / body in \Pi - into :L / body; notL;
(5) exit if no changes, otherwise goto (2).This algorithm assigns a higher priority to those literals
that are derived without assuming newly derived negations
than those that are derived only after assumed nega-
tions. The algorithm can also be used to compute the well-founded
semantics of \Pi - and it is polynomial to the size
of the input program. For \Pi in Example 6.1, the skeptical
coherence semantics of \Pi - implies switch is on and
:light on.
The coherence approach is proposed as a general approach
to logic program revision based on a simple idea:
Resolve contradiction by removing conflicting parties, regardless of
their sources. This approach works better than the
assumption removal approach for many applications, as our
examples demonstrated, simply because it is too difficult, if
not impossible, to correctly identify the source of conflict.
However, if the source of conflict is known to the agent,
then, of course, it is better to remove the troublemaker -
the source of conflicts - directly. This is the motivation
of prioritized revision. The priority relation, if applicable,
clearly indicates the agent's intension and therefore, must
be honored in the revision.
VII. Comparisons and Further Discussion
The following are some examples that illustrate the similarity
and the difference between several proposals.
Example 7.1: Consider the following program:
Under the skeptical belief revision [18], none of c, d, and
:d can be derived, nor nota can be assumed. Similarly, c
cannot be derived under the argumentation semantics [6].
Obviously, under any circumstances, a cannot be derived,
and therefore, nota should be assumed. The contradiction
between d and :d may suggest that the last two rules should
be revised. However, the inconsistency between the last two
clauses should not affect that c can be derived under the
condition that a cannot be proved. Note that both coherence
semantics of the program assume nota and consequently
conclude c.This example demonstrates that the skeptical belief revision
semantics and the argument semantics do not satisfy
the negative-justification and relevance.
One of the important aspects in knowledge revision is
the minimality, that is, the change of information during
revision should be minimized. Although unnecessary loss
of information is usually associated with the violation of
relevance, as demonstrated in the example below, a precise
characterization of the minimality is not an easy task and
is currently under investigation.
Example 7.2: Consider \Pi = fa / notb; :a /
notd; c / notdg. The relevant program of fcg is fc /
notdg and therefore, by relevance, c should be deduced.
However, to resolve the conflict between a and :a with the
assumption removal approach, either notb or notd has to
be removed which fails to derive c. 2
VIII. Conclusions
The coherence approach to logic program revision is
based on the idea that contradiction may be resolved by removing
only conflicting information, without searching for
believed source of conflict. We have demonstrated that this
approach satisfies many desirable properties, including the
conservatism and relevance. Furthermore, simply because
the revision can be achieved regardless of the source of con-
flict, the coherence approach provides a tractable skeptical
revision semantics.
The various coherence semantics provide an alternative approach
to logic program revision, and its applications are
currently under investigation.
IX.
Appendix
This section contains utility lemmas and proofs of the
main theorems. First, some convenient notations. Let M = hT; F i
be an interpretation. Then - F denotes fL j L 62
Tg and F denotes fnotL j L 2 Fg. Note that we abuse
F as a set of literals as well as a set of assumed negations,
whenever convenient.
Proposition 2.5 Any program \Pi has a least justified
model.
Proof: Since T \Pi is anti-monotonic, any program has at least
one normal alternating fixpoint [24]. By Theorem 2.9, this
implies that any program has at least one justified model.
Theorem 2.6 Let \Pi be a normal program. Then
1. the well-founded model of \Pi coincides with its least
justified model;
2. M is a stable model of \Pi iff it is a two-valued justified
model of \Pi;
3. M is a regular model of \Pi iff it is a maximal justified
model of \Pi.
Proof: It follows Theorem 2.9 and the fact that, as shown
in [21], the well-founded model, a stable model , and a
regular model of \Pi coincide with the least, a fixpoint, and
a maximal normal alternating fixpoint of \Pi respectively.
Let \Pi R be a revised program and N a fixpoint of \Pi R .
Then the characteristic interpretation of N is defined as
where
and
Further, we use NP and N -
P to denote the set of all assumptions
in N whose corresponding literals are from P
and -
respectively.
Lemma 9.1: Let CI(N be the characteristic
interpretation of N . Then
Proof: The construction of \Pi R and the comparison of (4.1)
and (4.2) clearly indicate that for any assumption set
and
Since N is a fixpoint of \Pi R , (9.2) implies that
By (9.1), this implies that
Lemma 9.2: Let \Pi R be the revised program of \Pi, FR be
a set of assumed literals such that -
The proof is straightforward.
Lemma 3.2 Assume be a partial model of \Pi
such that each literal in T is positively justified wrt M .
Then L is negatively shadow-justified wrt
is negatively justified wrt M .
Proof: Assume L is negatively justified wrt M then, by
Lemma 2.3,
be the well-founded model of \Pi M
which is obtained from \Pi by deleting notL if L 2 F . For
each literal L 2 T , since L is positively justified wrt M ,
consequently, \Pi M ' L, which implies L 2 Tw ,
and T ' Tw , and
This implies, together with (9.5), that
Tw 6' L:
By Theorem 2.6 (1) L 2 Fw and therefore, L is negatively
shadow-justified wrt M .
Now we define a justified partial model of \Pi as a CJ
model of \Pi such that there exists no CJ model
of \Pi and T ae T 0 .
Assume three-valued interpretation of
\Pi. Then we define the characteristic assumption set of M
as
The following lemma establishes a one-to-one correspondence
between the fixpoint of \Pi R and the justified partial
model of \Pi.
Lemma 9.3: Let \Pi R be the revised program of \Pi. Then
1. N is a fixpoint of \Pi R only if CI(N ) is a justified
partial model of \Pi; and
2. CA(M ) is a fixpoint of \Pi R if is a justified
partial model of \Pi.
Proof: (1). Assume N is a fixpoint of \Pi R , CI(N
g. Then by (9.1),
By the construction of \Pi R
\Pi NP ' L if and only if \Pi R N ' -
By Lemma 9.1 and (9.8), F (N
fLj\Pi NP 6' Lg: That is,
iff L is negatively justified wrt CI(N )
( by Lemma 2.3)
Thus, L 2 F (N ) if and only if L is negatively justified
with respect to CI(N ) and \Pi.
Assume L, and by
Ng. Therefore, L is positively justified with respect to
Since every clause with L 2 L \Pi as the head is semi-
normalized, T (N ) contains no pair of complementary literals
Assume (1) L is positively justified and (2) :L is negatively
shadow-justified, both with respect to CI(N ). We
now show that L 2 T (N ), that is, to show that \Pi R N ' L.
By the structure of \Pi R , it suffices to show that not:L 2 N .
Then
N 6' :L for N is a fixpoint
of \Pi R
if \Pi R
:L by (9.2)
if \Pi F (N) 6' :L by
Since :L is negatively justified wrt \Pi F , \Pi F 6' :L.
It follows that CI(N ) is a consistent-and-negatively-
justified partial model.
Assume CI(N ) is not a justified partial model. Then
there exists a consistent-and-negatively-justified partial
model of \Pi such that
We will show that there exists at least one literal
that :L is contained in T which contradicts
T 0 is consistent. Assume L is in T
fore, L is positively justified with respect to M 0 . Without
loosing generality, we assume there exists a clause
in \Pi such that L
n. Thus, there exists a clause
notLm+n ; not -
in \Pi R . Since L
, and not -
n. The fact that L 62 T and
implies that \Pi R N 6' L. Since C is a clause in \Pi R , this
implies that not:L is not contained in N . Since N is a
fixpoint of \Pi R , \Pi R
therefore, by (9.1), :L 2 T .
This contradicts that
justified partial model of \Pi
and
We will show that N is a fixpoint of \Pi R .
First, we show that
Lg:
this amount to show that
only if \Pi R
L:
However,
only if \Pi F 6' L by Lemma 2.3
if and only if \Pi R N 6' -
L by
Now we show that
Assume L is in T . Then L is positively justified with respect
to M , and therefore,
Since T is consistent, :L 62 T and hence not:L 2 NP . By
Lemma 9.2, \Pi R
which implies,
Now we need only to show that NP ' fnotLj\Pi R
Lg.
Assume not, then there exists an L such that L 62 T and
Ng. Therefore, L is positively justified wrt M . Since L is
not in T , there are only two cases.
Case 1: :L is in T .
Since :L is in T , not:L is not in N . But every clause
with L as the head in \Pi R has not:L in its body which
contradicts that \Pi R
Case 2: L is positively justified through some literals not
contained in T .
By induction on the steps through which L is derived
from \Pi R N , there must be at least one literal L j such that
positively justified wrt M . This leads to Case 1.
imply that N is a fixpoint of \Pi R . 2
Lemma 9.4: Let \Pi R be the revised program of \Pi; and
be the well-founded
models of \Pi and \Pi R respectively. Then for any literal L 2
1.
2. L 2 F \Pi if and only if L 2 FR .
Proof: First we present an iterative approach to computing
the well-founded model of \Pi [26]. Assume T
as the program obtained from \Pi n by
1. deleting all clauses with L in the body if L 2 Fn ,
2. deleting all clauses with notL in the body if L 2
and
3. deleting all clauses with L as the head if L 2 Fn .
Further, let
where N is the set of all involved negations. Then
We also assume that TRn ; FRn , and \Pi Rn are defined as
above for \Pi R . Since for any given N of assumed negations,
only if \Pi R N ' -
L, it is sufficient to show,
and we are to show by induction, that
Basis It is trivial.
Hypothesis We assume that for any k - n,
Induction We need to show that
Since
it is sufficient to show that
was deleted from \Pi n if and only if
was deleted from
\Pi Rn .
Assume L / was deleted from \Pi n . Then,
there are three cases:
Case 1:
Case 2:
Case 3: L 2 Fn .
By the induction hypothesis, the three corresponding cases
for \Pi Rn are
Case 1:
Case 2: -
Case 3: L 2 FRn .
In each case, L /
will be deleted
from \Pi Rn .
Assume L /
deleted
from \Pi Rn . Then there are five cases
Case 1:
Case 2: -
Case 3:
Case 4:
Case 5: :L 2 TRn .
Similar to the only-if part, the hypothesis implies that the
first three corresponding cases for \Pi n are
Case 1:
Case 2:
Case 3: L 2 Fn .
In each case, L / will be deleted from \Pi Rn .
If as in Case 3, then, by (9.3), -
which is the same as in Case 2. Further, in case 5, i.e.,
since every clause with :L
as the head in \Pi R is semi-normalized. This means any
clause with L as the head had be deleted from \Pi Rn\Gamma1 which
contradicts that L /
notL is in \Pi Rn .
This completes the proof. 2
Theorem 4.2 Let \Pi R the revised program of \Pi,
the well-founded model of \Pi R , and
where
Then M is the unique skeptical model of \Pi.
be the well-founded semantics
of \Pi. By (9.4) and Lemma 9.4,
is the well-founded model of \Pi and
is also the well-founded model of \Pi F . Obvi-
ously, we also have
TR "
First, we show the following three facts.
Fact 1 L 2 F if and only if L is negatively justified wrt M
and \Pi.
iff \Pi R
TR 6' -
L for MR is a justified model
of \Pi R
iff \Pi R
L by (9.14) and the
construction of \Pi R
iff L is negatively justified wrt M and \Pi.
Fact 2 L 2 T if and only if L is positively justified wrt M
and :L is negatively shadow-justified wrt M .
Assume L is positively justified wrt M and :L is negatively
shadow-justified wrt M . Then
since \Pi R
This implies, by Lemma 9.2
It follows that L 2 T .
Assume hence L is true in the well-founded
semantics of \Pi R . The deriving process of L under the well-founded
semantics clearly indicates that L is positively justified
wrt M and \Pi.
Since M \Pi is the well-founded model of \Pi F , any L 2 F \Pi
is negatively shadow-justified wrt M and \Pi. Because L 2
TR and every clause with L as the head in \Pi R is semi-
normalized, :L 2 F \Pi . It follows that :L is negatively
shadow-justified wrt M and \Pi.
Fact 3 T does not contains a pair of complementary literals
It follows from the fact that every clause with L as the
head in \Pi R is semi-normalized, and therefore, L 2 TR only
Facts 1, 2, and 3 imply that M is a CJ partial model.
be a CJ partial model of \Pi. Then we
need only to show M - N , i.e.,
First, we show F ' FN .
be a justified partial model of \Pi such
that T 0 ' TN . Then, by Lemma 9.3, CA(N 0 ) is a fixpoint
of \Pi R , and therefore,
since MR is a well-founded model of \Pi R . It follows that
only if L is positively justified wrt M
and :L is negatively shadow-justified wrt M , it is straight-forward
to show that T ' TN . 2
Theorem 4.4
1. If N is an L-maximal fixpoint of \Pi R then
is a maximal CJ partial model of \Pi,
Ngi.
2. If hT ; F i is a maximal CJ partial model of \Pi then
is an L-maximal fixpoint
of \Pi R .
Proof: It follows Lemma 9.3 and the definition of the L-
maximal fixpoint. 2
Theorem 5.2
1. Both SCS and CCS satisfy N-cumulativity, negative-
justification, relevance, and conservatism, but not simplicity
2. The CCS satisfies preservation but the SCS does not.
3. Neither SCS nor CCS satisfies coherence.
Proof: N-cumulativity is satisfied by both coherence semantics
simply because is a CJ partial model
of \Pi if and only if M is a CJ partial model of \Pi F 0
for any
Assume L is negatively justified wrt SEM (\Pi), where
SEM is either SCS or CCS. Then for any CJ partial
model M such that SEM (\Pi) - M , L is also negatively
justified wrt M and therefore, notL is contained in M .
This implies that notL 2 SEM (\Pi).
Conservatism follows from the fact that M is a justified
model of \Pi if and only if there exits a CJ partial model M 0
such that M 0 - M .
The proof for both SRS and CRS satisfy relevance is not
difficult but rather tedious and thus omitted.
The CCS satisfies preservation because for any
contradiction-free program \Pi, M is a maximal CJ partial
model of \Pi if and only if M is a maximal justified model
of \Pi.
All negative statements are supported by various examples
in the paper. 2
Acknowledgments
The authors would like to thank J-uergen Dix and anonymous
reviewers for providing many constructive comments.
This work is partially supported by grants from the National
Science and Engineering Research Council of Canada
and by the ISIS, Fujitsu Labs, Numazu, Japan. This paper
is based on the technical report Coherence Approach to
Logic Program Revision, ISIS-RR-94-19E, Fujitsu Labora-
tories, 1994. A preliminary version of this paper appears
in the Proc. of the 12th International Conference on Logic
Programming, Page 167-181, June 1995. The work of the
first author was performed while visiting the ISIS, Fujitsu
Labs, Numazu, Japan.
--R
Scenario Semantics of Extended Logic Programs.
A framework for representing and characterizing semantics of logic programs.
A classification theory of semantics of normal logic programs: II.
Negations as hypotheses: An abductive foundation for logic programming.
An argumentation semantics for logic programming with explicit negation.
Revision.
The alternating fixpoints of logic programs with negation.
Logical programs with classical negation.
The stable model semantics for logic programming.
Extended well founded semantics for logic programs with negations.
Uniform proofs as a foundation for logic programming.
Contradiction removal within well founded semantics.
Stable models and non-determinism in logic programs with negation
Unrestricted logic programs or if stratification is the cure
Skeptical reason maintenance and belief revision.
Revision by Expansion in Logic Programs.
A three-valued semantics of deductive databases and logic programs
On the equivalence of semantics for normal logic programs.
Iterative belief revision in extended logic programming.
Logic programming with assumption denials.
Autoepistemic logic of first order and its expressive power.
Justification rules and justified model semantics.
Autoepistemic circumscription and logic programming.
--TR
--CTR
Chiaki Sakama , Katsumi Inoue, An abductive framework for computing knowledge base updates, Theory and Practice of Logic Programming, v.3 n.6, p.671-715, November | logic programming;knowledge representation;nonmonotonic reasoning;belief revision |
627906 | Efficient Data Mining for Path Traversal Patterns. | AbstractIn this paper, we explore a new data mining capability that involves mining path traversal patterns in a distributed information-providing environment where documents or objects are linked together to facilitate interactive access. Our solution procedure consists of two steps. First, we derive an algorithm to convert the original sequence of log data into a set of maximal forward references. By doing so, we can filter out the effect of some backward references, which are mainly made for ease of traveling and concentrate on mining meaningful user access sequences. Second, we derive algorithms to determine the frequent traversal patternsi.e., large reference sequencesfrom the maximal forward references obtained. Two algorithms are devised for determining large reference sequences; one is based on some hashing and pruning techniques, and the other is further improved with the option of determining large reference sequences in batch so as to reduce the number of database scans required. Performance of these two methods is comparatively analyzed. It is shown that the option of selective scan is very advantageous and can lead to prominent performance improvement. Sensitivity analysis on various parameters is conducted. | Introduction
Due to the increasing use of computing for various applications, the importance of database mining
is growing at a rapid pace recently. Progress in bar-code technology has made it possible for
retail organizations to collect and store massive amounts of sales data. Catalog companies can also
collect sales data from the orders they received. It is noted that analysis of past transaction data
can provide very valuable information on customer buying behavior, and thus improve the quality
of business decisions (such as what to put on sale, which merchandises to be placed together on
shelves, how to customize marketing programs, to name a few). It is essential to collect a sufficient
amount of sales data before any meaningful conclusion can be drawn therefrom. As a result, the
amount of these processed data tends to be huge. It is hence important to devise efficient algorithms
to conduct mining on these data.
Note that various data mining capabilities have been explored in the literature. One of the
most important data mining problems is mining association rules [3, 4, 13, 15]. For example, given
a database of sales transactions, it is desirable to discover all associations among items such that
the presence of some items in a transaction will imply the presence of other items in the same
transaction. Also, mining classification is an approach of trying to develop rules to group data
tuples together based on certain common features. This has been explored both in the AI domain
[16, 17] and in the context of databases [2, 6, 12]. Mining in spatial databases was conducted in
[14]. Another source of data mining is on ordered data, such as stock market and point of sales
data. Interesting aspects to explore from these ordered data include searching for similar sequences
[1, 19], e.g., stocks with similar movement in stock prices, and sequential patterns [5], e.g., grocery
items bought over a set of visits in sequence. It is noted that data mining is a very application-dependent
issue and different applications explored will require different mining techniques to cope
with. Proper problem identification and formulation is therefore a very important part of the whole
knowledge discovery process.
In this paper, we shall explore a new data mining capability which involves mining access patterns
in a distributed information providing environment where documents or objects are linked
together to facilitate interactive access. Examples for such information providing environments
include World Wide Web (WWW) [11] and on-line services, such as Prodigy, CompuServe and
America Online, where users, when seeking for information of interest, travel from one object to
another via the corresponding facilities (i.e., hyperlinks) provided. Clearly, understanding user
access patterns in such environments will not only help improve the system design (e.g., provide
efficient access between highly correlated objects, better authoring design for pages, etc.) but also
be able to lead to better marketing decisions (e.g., putting advertisements in proper places, better
customer/user classification and behavior analysis, etc.). Capturing user access patterns in such
environments is referred to as mining traversal patterns in this paper. Note that although some
efforts have elaborated upon analyzing the user behavior [8, 9, 10], there is little result reported
on dealing with the algorithmic aspects to improve the execution of traversal pattern mining. This
can be in part explained by the reason that these information providing services, though with great
potential, are mostly in their infancy and their customer analysis may still remain in a coarser
level such as user occupation/age study. In addition, it is important to note that, as pointed out
in [8], since users are traveling along the information providing services to search for the desired
information, some objects are visited because of their locations rather than their content, showing
the very difference between the traversal pattern problem and others which are mainly based on
customer transactions. This unique feature of the traversal pattern problem unavoidably increases
the difficulty of extracting meaningful information from a sequence of traversal data. However, as
these information providing services are becoming increasingly popular nowadays, there is a growing
demand for capturing user behavior and improving the quality of such services. As a result,
the problem of mining traversal patterns has become too important not to address immediately.
Consequently, we shall explore in this paper the problem of mining traversal patterns. Our solution
procedure consists of two steps. First, we derive an algorithm, called algorithm MF (standing
for maximal forward references), to convert the original sequence of log data into a set of traversal
subsequences. As defined in Section 2, each traversal subsequence represents a maximal forward
reference from the starting point of a user access. As will be explained later, this step of converting
the original log sequence into a set of maximal forward references will filter out the effect of
backward references which are mainly made for ease of traveling, and enable us to concentrate on
mining meaningful user access sequences. Second, we derive algorithms to determine the frequent
traversal patterns, termed large reference sequences, from the maximal forward references obtained
above, where a large reference sequence is a reference sequence that appears a sufficient number
of times in the database. Note that the problem of finding large reference sequences is similar to
that of finding large itemsets for association rules [3] where a large itemset is a set of items appearing
in a sufficient number of transactions. However, they are different from each other in that
a reference sequence in mining traversal patterns has to be consecutive references in a maximal
forward reference whereas a large itemset in mining association rules is just a combination of items
in a transaction. As a consequence, although several schemes for mining association rules have
been reported in the literature [3, 4, 15], the very difference between these two problems calls for
the design of new algorithms for determining large reference sequences.
Explicitly, we devise two algorithms for determining large reference sequences. The first one,
referred to as full-scan (FS) algorithm, essentially utilizes some techniques on hashing and pruning
while solving the discrepancy between traversal patterns and association rules mentioned above.
Although it trims the transaction database as it proceeds to later passes, algorithm FS is required
to scan the transaction database in each pass. In contrast, by properly utilizing the candidate reference
sequences, the second algorithm devised, referred to as selective-scan (SS) algorithm, is able
to avoid database scans in some passes so as to reduce the disk I/O cost involved. Specifically,
algorithm SS has the option of using a candidate reference set to generate subsequent candidate reference
sets, and delaying the determination of large reference sets to a later pass when the database
is scanned. Since SS does not scan the database to obtain a large reference set in each pass, some
database scans are saved. Experimental studies are conducted by using a synthetic workload that is
generated based on referencing some logged traces, and performance of these two methods, FS and
SS, is comparatively analyzed. It is shown that the option of selective scan is very advantageous
and algorithm SS thereby outperforms algorithm FS in general. Sensitivity analysis on various
parameters is also conducted.
This paper is organized as follows. Problem formulation is given in Section 2. Algorithm MF to
identify maximal forward references is described in Section 3.1, and two algorithms, FS and SS, for
determining large reference sequences are given in Section 3.2. Performance results are presented
in Section 4. Section 5 contains the summary.
Problem Formulation
As pointed out earlier, in an information providing environment where objects are linked together,
users are apt to traverse objects back and forth in accordance with the links and icons provided. As
a result, some node might be revisited because of its location, rather than its content. For example,
in a WWW environment, to reach a sibling node a user is usually inclined to use "backward" icon
and then a forward selection, instead of opening a new URL. Consequently, to extract meaningful
user access patterns from the original log database, we naturally want to take into consideration
the effect of such backward traversals and discover the real access patterns of interest. In view of
this, we assume in this paper that a backward reference is mainly made for ease of traveling but
not for browsing, and concentrate on the discovery of forward reference patterns. Specifically, a
backward reference means revisiting a previously visited object by the same user access. When
backward references occur, a forward reference path terminates. This resulting forward reference
path is termed a maximal forward reference. After a maximal forward reference is obtained, we
back track to the starting point of the forward referencing and resume another forward reference
path. In addition, the occurrence of a null source node also indicates the termination of an ongoing
forward reference path and the beginning of a new one.
While deferring the formal description of the algorithm to determine maximal forward references
(i.e., algorithm MF) to Section 3.1, we give an illustrative example for maximal forward
references below. Suppose the traversal log contains the following traversal path for a user:
{A, B, C, D, C, B, E, G, H, G, W, A, O, U, O, V}, as shown in Figure 1. Then, it can be verified
by algorithm MF that the set of maximal forward references for this user is {ABCD, ABEGH,
ABEGW, AOU, AOV}. After maximal forward references for all users are obtained, we then map
the problem of finding frequent traversal patterns into the one of finding frequently occurring consecutive
subsequences among all maximal forward references. A large reference sequence is a reference
sequence that appears a sufficient number of times. In a set of maximal forward references,
the number of times a reference sequence has to appear in order to be qualified as a large reference
sequence is called the minimal support. A large k-reference is a large reference sequence with k
elements. We denote the set of large k-references as L k and its candidate set as C k , where C k ,
as obtained from L k-1 [4], contains those k-references that may appear in L k . Explicitly, C k is a
superset of L k .
It is worth mentioning that after large reference sequences are determined, maximal reference
sequences can then be obtained in a straightforward manner. A maximal reference sequence is
a large reference sequence that is not contained in any other maximal reference sequence. For
example, suppose that {AB, BE, AD, CG, GH, BG} is the set of large 2-references (i.e., L 2 ) and
{ABE, CGH} is the set of large 3-references (i.e., L 3 ). Then, the resulting maximal reference
sequences are AD, BG, ABE, and CGH. A maximal reference sequence corresponds to a "hot"
access pattern in an information providing service. In all, the entire procedure for mining traversal
patterns can be summarized as follows.
Procedure for mining traversal patterns:
Step 1: Determine maximal forward references from the original log data.
Figure 1: An illustrative example for traversal patterns.
Step 2: Determine large reference sequences (i.e., L k , k - 1) from the set of maximal forward
references.
Step 3: Determine maximal reference sequences from large reference sequences.
Since the extraction of maximal reference sequences from large reference sequences (i.e., Step
is straightforward, we shall henceforth focus on Steps 1 and 2, and devise algorithms for the
efficient determination of large reference sequences.
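Since this step amounts to discarding every large reference sequence that occurs as a consecutive subsequence of a longer one, it can be illustrated with the following minimal Python sketch (the function names and data layout are our own choices, not taken from the paper); it reproduces the L 2 / L 3 example given above.

def is_consecutive_subseq(short, long):
    # True if `short` occurs as consecutive elements inside `long`.
    n = len(short)
    return any(long[i:i + n] == short for i in range(len(long) - n + 1))

def maximal_reference_sequences(large_sets):
    # large_sets: the L_k's, each a list of reference sequences given as tuples.
    all_seqs = [s for L in large_sets for s in L]
    return [s for s in all_seqs
            if not any(len(t) > len(s) and is_consecutive_subseq(s, t)
                       for t in all_seqs)]

L2 = [tuple(x) for x in ("AB", "BE", "AD", "CG", "GH", "BG")]
L3 = [tuple(x) for x in ("ABE", "CGH")]
print(maximal_reference_sequences([L2, L3]))
# -> [('A','D'), ('B','G'), ('A','B','E'), ('C','G','H')], i.e., AD, BG, ABE, and CGH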
3 Algorithm for Traversal Pattern
We shall describe in Section 3.1 algorithm MF which converts the original traversal sequence into
a set of maximal forward references. Then, by mapping the problem of finding frequent traversal
patterns into the one of finding frequent consecutive subsequences, we develop two algorithms,
called full-scan (FS) and selective-scan (SS), for mining traversal patterns.
3.1 Maximal Forward References
In general, a traversal log database contains, for each link traversed, a pair of (source, desti-
nation). This part of log database is called referer log [7]. For the beginning of a new path,
which is not linked to the previous traversal, the source node is null. Given the traversal sequence
{(s 1 , d 1 ), (s 2 , d 2 ), ..., (s n , d n )} of a user, we shall map it into multiple subsequences, each of which
represents a maximal forward reference. The algorithm for finding all maximal forward references is
given as follows. First, the traversal log database is sorted by user id's, resulting in a traversal path,
for each user, where the pairs (s i , d i ) are ordered by time. Algorithm
MF is then applied to each user path to determine all of its maximal forward references. Let D F
denote the database to store all the resulting maximal forward references obtained.
Algorithm MF: An algorithm to find maximal forward references.
Step 1: Set i := 1 and string Y to null for initialization, where string Y is used to store the current
forward reference path. Also, set the flag F := 1 to indicate a forward traversal.
Step 2: Let A = s i and B = d i .
If A is equal to null then
/* this is the beginning of a new traversal */
begin
Write out the current string Y (if not null) to the database D F ;
Set string Y := B and F := 1;
Go to Step 5.
end
Step 3: If B is equal to some reference (say the j-th reference) in string Y then
/* this is a cross-referencing back to a previous reference */
begin
If F is equal to 1 then write out string Y to database D F ;
Discard all the references after the j-th one in string Y ;
Set F := 0;
Go to Step 5.
end
Step 4: Otherwise, append B to the end of string Y .
/* we are continuing a forward traversal */
If F is equal to 0, set F := 1.
Step 5: Set i := i + 1. If the sequence is not completely scanned, then go to Step 2.
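For concreteness, the following short Python sketch implements the steps above; the pair-list input format, the None encoding of a null source, and the final flush of string Y at the end of the sequence are our own conventions.

def maximal_forward_references(pairs):
    # pairs: list of (source, destination) moves of one user; source is None
    # at the beginning of a new traversal. Returns the maximal forward references.
    D_F = []            # output database of maximal forward references
    Y = []              # current forward reference path (string Y)
    F = 1               # 1: forward traversal, 0: backward traversal
    for A, B in pairs:
        if A is None:                      # Step 2: a new traversal begins
            if Y:
                D_F.append(list(Y))
            Y, F = [B], 1
            continue
        if not Y:                          # log starts without a null source
            Y = [A]
        if B in Y:                         # Step 3: cross-reference back
            if F == 1:
                D_F.append(list(Y))
            Y = Y[:Y.index(B) + 1]         # discard references after B
            F = 0
        else:                              # Step 4: keep moving forward
            Y.append(B)
            F = 1
    if Y and F == 1:                       # flush the last forward path
        D_F.append(list(Y))
    return D_F

# The example of Figure 1: the path A,B,C,D,C,B,E,G,H,G,W,A,O,U,O,V.
path = list("ABCDCBEGHGWAOUOV")
pairs = [(None, path[0])] + list(zip(path, path[1:]))
print(maximal_forward_references(pairs))
# -> [['A','B','C','D'], ['A','B','E','G','H'], ['A','B','E','G','W'],
#     ['A','O','U'], ['A','O','V']]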
Consider the traversal scenario in Figure 1 for example. It can be verified that the first backward
reference is encountered in the 4-th move (i.e., from D to C). At that point, the maximal
forward reference ABCD is written to D F (by Step 3). In the next move (i.e., from C to B),
although the first conditional statement in Step 3 is again true, nothing is written to D F since the
flag meaning that it is in a reverse traversal. The subsequent forward references will put
ABEGH into the string Y , which is then written to D F when a reverse reference (from H to G)
Table 1: An example execution by algorithm MF.
move  string Y  output to D F
1     AB        -
2     ABC       -
3     ABCD      -
4     ABC       ABCD
5     AB        -
6     ABE       -
7     ABEG      -
8     ABEGH     -
9     ABEG      ABEGH
10    ABEGW     -
11    A         ABEGW
12    AO        -
13    AOU       -
14    AO        AOU
15    AOV       -
is encountered. The execution scenario by algorithm MF for the input in Figure 1 is given in Table 1.
It is noted that in some cases, the traversal log record obtained only contains the destination references
instead of a pair of references. For example, for WWW browsing, the request message may
only contain the destination URL. The traversal sequence will then have the form {d 1 , d 2 , ..., d n }
for each user. Even with such an input, we can still convert it into a set of maximal forward
references. The only difference is that in this case we cannot identify the breakpoint where the user
picks a new URL to begin a new traversal path, meaning that two consecutive maximal forward
references, for example ABEH and WXYZ, may be treated as one path, i.e., ABEHWXYZ.
Certainly, this constraint, i.e., without the id's of source nodes, could increase the computational
complexity because the paths considered become longer. However, this constraint should have little
effect on identifying frequent reference subsequences. Since there is no logical link between H and
W , a subsequence containing HW is unlikely to occur frequently. Hence, a reference containing
the pattern HW will unlikely emerge as a large reference later. Therefore, algorithm MF can in
fact be employed for the cases when the id's of source nodes are not available.
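One simple way to feed such a destination-only log into algorithm MF (our own convention, not prescribed by the paper) is to pair consecutive destinations and let only the very first request carry a null source:

def destinations_to_pairs(destinations):
    # destinations: [d1, d2, ..., dn] for one user, with no source ids.
    pairs, prev = [], None
    for d in destinations:
        pairs.append((prev, d))
        prev = d                # the previous destination acts as the next source
    return pairs

print(destinations_to_pairs(list("ABEH") + list("WXYZ")))
# -> [(None,'A'), ('A','B'), ('B','E'), ('E','H'), ('H','W'), ('W','X'), ('X','Y'), ('Y','Z')]
# As noted above, ABEH and WXYZ are then treated as one path ABEHWXYZ, which has
# little effect on the frequent subsequences that are eventually found.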
3.2 Determining Large Reference Sequences
Once the database containing all maximal forward references for all users, D F , is constructed, we
can derive the frequent traversal patterns by identifying the frequently occurring reference sequences
in D F . A sequence s 1 , ..., s n is said to contain r 1 , ..., r k as a consecutive subsequence if there exists
an i such that s i = r 1 , s i+1 = r 2 , ..., s i+k-1 = r k . For example, BAHPM is said to contain AHP. A sequence
of k references, r 1 , ..., r k , is called a large k-reference sequence if there are a sufficient number of
users with maximal forward references in D F containing r 1 , ..., r k as a consecutive subsequence.
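The containment test and the support counting can be sketched as follows (a simplified Python fragment with names of our own choosing; it counts one occurrence per maximal forward reference and ignores the hashing and trimming optimizations introduced below):

def contains_consecutive(seq, ref):
    n = len(ref)
    return any(tuple(seq[i:i + n]) == tuple(ref) for i in range(len(seq) - n + 1))

def count_support(D_F, candidates):
    # D_F: maximal forward references; candidates: k-reference sequences.
    counts = {tuple(c): 0 for c in candidates}
    for mfr in D_F:
        for c in counts:
            if contains_consecutive(mfr, c):
                counts[c] += 1
    return counts

D_F = [list("ABCD"), list("ABEGH"), list("ABEGW"), list("AOU"), list("AOV")]
print(count_support(D_F, [list("AB"), list("BE"), list("HW")]))
# -> {('A','B'): 3, ('B','E'): 2, ('H','W'): 0}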
As pointed out before, the problem of finding large reference sequences is different from that
of finding large itemsets for association rules and thus calls for the design of new algorithms.
Consequently, we shall derive in this paper two algorithms for mining traversal patterns. The first
one, called full-scan (FS) algorithm, essentially utilizes the concept of DHP [15] (i.e., hashing and
pruning) while solving the discrepancy between traversal patterns and association rules. DHP has
two major features in determining association rules: one is efficient generation for large itemsets and
the other is effective reduction of the transaction database size after each scan. Although it trims the
transaction database as it proceeds to later passes, FS is required to scan the transaction database
in each pass. In contrast, by properly utilizing the candidate reference sequences, the second
algorithm, referred to as selective-scan (SS) algorithm, is improved with the option of determining
large reference sequences in batch so as to reduce the number of database scans required.
3.2.1 Algorithm on Full Scan
Algorithm FS utilizes key ideas of the DHP algorithm. The details of DHP can be found in [15]. An
example scenario for determining large itemsets and candidate itemsets is given in the Appendix. (In this example, the technique of hashing, which is employed by DHP to reduce the number of candidate itemsets, is not shown.)
As shown in [15], by utilizing a hash technique, DHP is very efficient for the generation of candidate
itemsets, in particular for the large 2-itemsets, thus greatly improving the performance bottleneck
of the whole process. In addition, DHP employs effective pruning techniques to progressively reduce
the transaction database size.
Recall that L k represents the set of all large k-references and C k is a set of candidate k-references.
C k is in general a superset of L k . By scanning through D F , FS gets L 1 and makes a hash table (i.e.,
H 2 ) to count the number of occurrences of each 2-reference. Similarly to DHP, starting with k = 2,
FS generates C k based on the hash table count obtained in the previous pass, determines the set of
large k-references, reduces the size of database for the next pass, and makes a hash table to determine
the candidate (k+1)-references. Note that as in mining association rules, a set of candidate
references, C k , can be generated from joining L k-1 with itself, denoted by L k-1 * L k-1 . However,
due to the difference between traversal patterns and association rules, we modify this approach
as follows. For any two distinct reference sequences in L k-1 , say r 1 , ..., r k-1 and s 1 , ..., s k-1 , we
join them together to form a k-reference sequence only if either r 1 , ..., r k-1 contains s 1 , ..., s k-2
or s 1 , ..., s k-1 contains r 1 , ..., r k-2 (i.e., after dropping the first element in one sequence and the last
element in the other sequence, the remaining two are identical). We note that
when k is small (especially for the case of k = 2), deriving C k by joining L k-1 with itself will result
in a very large number of candidate references, and the hashing technique is thus very helpful for
such a case. As k increases, the size of L k-1 decreases significantly. Same as in [15], we
found that it is generally beneficial for FS to generate C k directly from L k-1 (i.e., without
using hashing) once k >= 3.
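A possible Python sketch of this modified join (our own code, leaving out the hash-based filtering) is given below; applied to the L 2 of the earlier example it produces a superset of L 3 , as expected:

def generate_candidates(L_prev):
    # L_prev: large (k-1)-references as tuples. Two sequences are joined only if
    # dropping the first element of one and the last element of the other
    # leaves two identical (k-2)-sequences.
    C_k = set()
    for r in L_prev:
        for s in L_prev:
            if r != s and r[1:] == s[:-1]:
                C_k.add(r + (s[-1],))
    return sorted(C_k)

L2 = [tuple(x) for x in ("AB", "BE", "AD", "CG", "GH", "BG")]
print(generate_candidates(L2))
# -> [('A','B','E'), ('A','B','G'), ('B','G','H'), ('C','G','H')]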
To count the occurrences of each k-reference in C k to determine L k , we need to scan through a
trimmed version of database D F . From the set of maximal forward references, we determine, among
k-references in C k , large k-references. After the scan of the entire database, those k-references in
C k with count exceeding the threshold become L k . If L k is non-empty, the iteration continues for
the next pass, i.e., pass k + 1. Same as in DHP, every time when the database is scanned, the
database is trimmed by FS to improve the efficiency of future scans.
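Putting the pieces together, the overall flow of FS can be summarized by the simplified Python skeleton below, which reuses count_support, contains_consecutive, and generate_candidates from the sketches above; the DHP hash filtering of C 2 and the exact trimming rules are abstracted away, so this is only an illustration of the pass structure, not the implementation used in the paper.

def algorithm_FS(D_F, min_support):
    # One scan of the (progressively trimmed) database D_F per pass.
    L, k = {}, 1
    candidates = sorted({(x,) for m in D_F for x in m})        # C_1
    while candidates:
        counts = count_support(D_F, candidates)                # database scan
        L[k] = [c for c in candidates if counts[c] >= min_support]
        if not L[k]:
            del L[k]
            break
        # keep only references that still contain some large k-reference
        D_F = [m for m in D_F if any(contains_consecutive(m, c) for c in L[k])]
        candidates = generate_candidates(L[k])                 # C_{k+1}
        k += 1
    return L

D_F = [list("ABCD"), list("ABEGH"), list("ABEGW"), list("AOU"), list("AOV")]
# With min_support = 2, algorithm_FS(D_F, 2) returns L[1]..L[4]; e.g., L[4] == [('A','B','E','G')].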
3.2.2 Algorithm on Selective Scan (SS)
Algorithm SS is similar to algorithm FS in that it also employs hashing and pruning techniques to
reduce both CPU and I/O costs, but is different from the latter in that algorithm SS, by properly
utilizing the information in candidate references in prior passes, is able to avoid database scans in
some passes, thus further reducing the disk I/O cost. The method for SS to avoid some database
scans and reduce disk I/O cost is described below. Recall that algorithm FS generates a small
number of candidate 2-references by using a hashing technique. In fact, this small C 2 can be used
to generate the candidate 3-references. Clearly, a C' 3 generated from C 2 * C 2 , instead of from
L 2 * L 2 , will have a size greater than |C 3 |, where C 3 is generated from L 2 * L 2 . However, if
|C' 3 | is not much larger than |C 3 |, and both C 2 and C' 3 can be stored in the main memory, we
can find L 2 and L 3 together when the next scan of the database is performed, thereby saving one
round of database scan. (This approach of generating C k directly from L k-1 is proposed by
algorithm Apriori in [4] for generating candidate itemsets for association rules.) It can be seen
that using this concept, one can determine all L k 's by as few as two scans of the database (i.e.,
one initial scan to determine L 1 and a final scan to determine all other large reference sequences),
assuming that each C' k (k >= 3) is generated from C' k-1 * C' k-1 and all these candidate sets can
be kept in the memory.
Note that when the minimum support is relatively small or potentially large references are
long, C k and L k could become large. With C' j+1 being generated from C' j * C' j , it
may cost too much CPU time to generate all subsequent candidate sets of large references,
since the size of C' j may become huge quickly, thus compromising
all the benefit from saving disk I/O cost. For the illustrative example in the Appendix, if C 3 were
determined from C 2 * C 2 , instead of from L 2 * L 2 , then C 3 would be {{ABC}, {ABE}, {ACE},
{BCE}}. This fact suggests that a timely database scan to determine large reference sequences
will in fact pay off. After a database scan, one can obtain the large reference sequences which are
not determined thus far (say, up to L m ) and then construct the set of candidate (m+1)-references,
C m+1 , based on L m from that point. According to our experiments, we found that if |C' k | grows
noticeably beyond |C k | for some k >= 2, it is usually beneficial to have a database scan to obtain L k+1 before the set of
candidate references becomes too big. (Same as in FS, each time the database is scanned, the
database is trimmed by SS to improve the efficiency of future scans.) We then derive C' k+2 from
L k+1 . (We note that C' k+2 is in fact equal to C k+2 here.) After that, we again use C' j to derive
C' j+1 for j >= k+2. The process continues until the set of candidate (j+1)-references becomes empty.
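The scan-skipping idea can be sketched as follows (again our own simplified Python code building on the helpers above; the blowup test stands in for the paper's heuristics on candidate-set size and available memory, and one call to count_support with all pending candidates is treated as a single database scan):

def algorithm_SS(D_F, min_support, blowup=2.0):
    # Candidate sets C'_{k+1} are derived from C'_k; the database is scanned only
    # when the candidate set grows too fast, and all pending L_k's are then
    # settled in that one scan (plus a final scan at the end, if needed).
    max_len = max((len(m) for m in D_F), default=0)
    C1 = sorted({(x,) for m in D_F for x in m})
    counts = count_support(D_F, C1)                            # initial scan
    L = {1: [c for c in C1 if counts[c] >= min_support]}
    pending, C_prev, k = [], L[1], 1
    while C_prev and k < max_len:
        C_next = generate_candidates(C_prev)                   # C'_{k+1} from C'_k
        k += 1
        if not C_next:
            break
        pending.append((k, C_next))
        if len(C_next) > blowup * len(C_prev):                 # time for a database scan
            cnt = count_support(D_F, [c for _, C in pending for c in C])
            for kk, C in pending:
                L[kk] = [c for c in C if cnt[c] >= min_support]
            if L[k]:
                D_F = [m for m in D_F
                       if any(contains_consecutive(m, c) for c in L[k])]
            pending, C_prev = [], L[k]
        else:
            C_prev = C_next
    if pending:                                                # final scan
        cnt = count_support(D_F, [c for _, C in pending for c in C])
        for kk, C in pending:
            L[kk] = [c for c in C if cnt[c] >= min_support]
    return {kk: v for kk, v in L.items() if v}

# On the toy D_F above, algorithm_SS(D_F, 2) finds the same large reference
# sequences as algorithm_FS while invoking count_support only three times
# (the initial scan, one scan at pass 2, and one final scan covering C'_3 and C'_4).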
Illustrative examples for FS and SS are given in Table 2, for a fixed number of reference paths
and a fixed minimum support. Extensive experiments are conducted in
Section 4. In this example run, FS performs a database scan in each pass to determine the corresponding
large reference sequences, resulting in six database scans. On the other hand, SS scans the
database only three times (skipping database scans in passes 2, 4 and 5), and is able to obtain the
same result. The CPU and disk I/O times for FS are 19.48 seconds and 30.8 seconds, respectively,
whereas those for SS are 18.75 seconds and 17.8 seconds, respectively. Considering both CPU and
I/O times, the execution time ratio for SS to FS is 0.73, showing a prominent advantage of SS.
Performance Results
To assess the performance of FS and SS, we conducted several experiments to determine large
reference sequences by using an RS/6000 workstation with model 560. The methods used to
Table 2: Results from an example run by FS and SS.
Figure 2: A traversal tree to simulate WWW. Part (a) shows the tree of a root node, internal nodes, and leaf nodes, where about 3% of all internal nodes have an internal jump (to any node); part (b) shows the transition weights from a node to its parent node, its child nodes, and its internal jump. From a leaf node, 25% of the moves go back to the parent node and 75% jump to an internal node.
generate synthetic data are described in Section 4.1. Performance comparison of these two methods
is given in Section 4.2. Sensitivity analysis is conducted in Section 4.3.
4.1 Generation of Synthetic Traversal Paths
In our experiment, the browsing scenario in a World Wide Web (WWW) environment is simulated.
To generate a synthetic workload and determine the values of parameters, we referenced some
logged traces which were collected from a gateway in our working location [18]. First, a traversal
tree is constructed to mimic WWW structure whose starting position is a root node of the tree.
The traversal tree consists of internal nodes and leaf nodes. Figure 2a shows an example of the
traversal tree. The number of child nodes at each internal node, referred to as fanout, is determined
from a uniform distribution within a given range. The height of a subtree whose subroot is a child
node of the root node is determined from a Poisson distribution with mean - h . Then, the height of
a subtree whose subroot is a child of an internal node N i is determined from a Poisson distribution
with mean equal to a fraction of the maximum height of the internal node N i . As such, the height
of a tree is controlled by the value of - h .
A traversal path consists of nodes accessed by a user. The size of each traversal path is picked
from a Poisson distribution with mean equal to jP j. With the first node being the root node, a
traversal path is generated probabilistically within the traversal tree as follows. For each internal
node, we determine which is the next hop according to some predetermined probabilities. Essen-
tially, each edge connecting to an internal node is assigned with a weight. This weight corresponds
to the probability that each edge will be next accessed by the user. As shown in Figure 2b, the
weight to its parent node is assigned with p 0 , which is generally 1/(n+1), where n is the number of
child nodes. The probability of traveling to each child node, p i , is determined from an exponential
distribution with unit mean, and is so normalized that the sum of the weights for all child nodes is
equal to 1 - p 0 . If this internal node has an internal jump and the weight for this jump is p j , then
p 0 is changed to p 0 (1 - p j ) and the corresponding probability for each child node is changed to
p i (1 - p j ), such that the sum of all the probabilities associated with this node remains one. When the
path arrives at a leaf node, the next move would be either to its parent node in backward (with a
probability 0.25) or to any internal node (with an aggregate probability 0.75). Some internal nodes
in the tree have internal jumps which can go to any other nodes. The number of internal nodes
with internal jumps is denoted by N J , which is set to 3% of all the internal nodes in general cases.
The sensitivity of varying N J will also be analyzed. Those nodes with internal jumps are decided
randomly among all the internal nodes. Table 3 summarizes the meaning of various parameters
used in our simulations.
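A rough Python sketch of the weight assignment and next-hop selection just described is given below; the function names, the tie handling, and the leaf-node behavior are our own modeling assumptions rather than details taken from the paper.

import random

def assign_weights(n_children, p0, pj=0.0):
    # Child weights are drawn from an exponential distribution with unit mean and
    # normalized so that the parent, children, and (optional) internal-jump
    # weights sum to one.
    raw = [random.expovariate(1.0) for _ in range(n_children)]
    child = [w / sum(raw) * (1.0 - p0) for w in raw]
    if pj > 0:                                 # node has an internal jump of weight pj
        p0 = p0 * (1.0 - pj)
        child = [w * (1.0 - pj) for w in child]
    return p0, child

def next_hop(parent, children, p0, child_w, jump_target=None):
    r = random.random()
    if r < p0:
        return parent
    r -= p0
    for node, w in zip(children, child_w):
        if r < w:
            return node
        r -= w
    return jump_target if jump_target is not None else parent

p0, child_w = assign_weights(n_children=4, p0=0.2, pj=0.1)
print(next_hop("N_parent", ["c1", "c2", "c3", "c4"], p0, child_w, "N_jump"))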
4.2 Performance Comparison between FS and SS
Figure
3 represents execution times of two methods, FS and SS, when the backward weight p 0 is set to
0.1. HxPy means that x is the height of a tree and y is the average size of the reference paths.
D200K means that the number of reference paths is 200,000. A tree for H10 was obtained when
the height of a tree is 10 and the fanout at each internal node is between 4 and 7. The root node
consists of 7 child nodes. The number of internal nodes is 16,200 and the number of leaf nodes is
73,006. The number of internal nodes with internal jumps is thus 16,200 x N J = 486.
Figure 3: Execution times for FS and SS. For each HxPy.D200K configuration, the left graph plots the CPU time (in seconds) and the right graph plots the I/O time (in seconds) against the minimum support.
Table 3: Meaning of various parameters.
H       The height of a traversal tree.
F       The number of child nodes (fanout).
N J     The number of internal nodes with an internal jump.
p 0     Backward weight, i.e., the probability of moving back to the parent node.
p j     Jump weight, i.e., the probability of taking the internal jump.
theta   A parameter of the Zipf-like distribution.
HxPy    x is the height of the tree and y is the average size of the reference paths.
|D|     The number of reference paths (size of the database).
|D k |  Number of forward references for L k .
|C k |  Number of candidate k-reference sequences.
|L k |  Number of large k-reference sequences.
|P|     Average size of the reference paths.
Note that the total number of nodes increases as the height of the tree increases. To make the experiment tractable,
we reduced the fanout to 2 - 5 for the tree of H20 with the height of 20. This tree contained 616,595
internal nodes and 1,541,693 leaves. In Figure 3, the left graph of each HxPy.D200K represents
the CPU time to find all the large reference sequences, and the right graph shows the I/O time to
find them, where the disk transfer rate is set to 2 MB/sec and a 1 MB buffer is used in main memory.
It can be seen from Figure 3 that algorithm SS in general outperforms FS, and their performance
difference becomes prominent when the I/O cost is taken into account.
To provide more insights into their performance, in addition to Table 2 in Section 3, we have
Table
4, which shows the results of these two methods for the H20P20 workload. In
Table
4, FS scans the database eight times to find all the large reference sequences, whereas SS
only involves three database scans. Note that after initial scans, disk I/O involved by FS and SS
will include both disk read and disk write (i.e., writing the trimmed version of the database back
to the disk). The I/O time for these two methods is shown in Figure 4. Considering both CPU
and I/O times, the total execution time of FS is 143.94 seconds, and that of SS is 100.89 seconds.
Note that the execution time ratio for SS to FS is 0.70 in this case, which is slightly better than
the one associated with Table 2.
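As a quick arithmetic check of the two ratios quoted above (plain recomputation, not additional measurement data):

# Example run of Table 2: SS / FS = (18.75 + 17.8) / (19.48 + 30.8)
print(round((18.75 + 17.8) / (19.48 + 30.8), 2))   # 0.73
# H20P20 run: SS / FS = 100.89 / 143.94
print(round(100.89 / 143.94, 2))                   # 0.7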
Figure
5 shows scale-up experiments, where both the CPU and I/O times of each method increase
linearly as the database size increases. For this experiment, the traversal tree has 10 levels,
the fanout of internal nodes is between 4 and 7, and the minimum support is set to 0.75%. It can
be seen that SS consistently outperforms FS as the database size increases.
Table 4: Number of large reference sequences and execution times for H20P20 (SS skips the database scans for k = 2, 4, 5, 6, 7).
Figure 4: I/O cost (in seconds) for FS and SS in each pass.
Figure 5: Execution time of FS and SS when the database size increases from 200K to 1,000K reference paths.
4.3 Sensitivity Analysis
Since algorithm SS in general outperforms FS, without loss of generality, we shall conduct in this
section the sensitivity analysis on various parameters for algorithm SS. Performance evaluation was
carried out under the condition that the database size is 200,000, the average size of traversal paths
is 10, i.e., |P| = 10, and the minimum support is 0.75%.
Figure
6 shows the number of large reference sequences when the probability to backward at an
internal node, p 0 , varies from 0.1 to 0.5. As the probability increases, the number of large reference
sequences decreases because the possibility of having forward traveling becomes smaller. Figure 7
shows the number of large reference sequences when the number of child nodes of internal nodes,
i.e., fanout F , varies. The three corresponding traversal trees all have the same height 8. The tree
for the first bar consists of 483 internal nodes and 1,267 leaf nodes. The tree for the second bar
consists of 11,377 internal nodes and 62,674 leaf nodes, and the one for the third bar consists of
74,632 internal nodes and 634,538 leaf nodes. The results show that the number of large reference
sequences decreases as the degree of fanout increases, because with a larger fanout the traversal
paths are more likely to be dispersed to several branches, thus resulting in fewer large reference
sequences. Clearly, when the large reference sequences decreases, the execution time to find them
Figure 6: Number of large reference sequences when the backward weight p 0 is varied from 0.1 to 0.5.
also decreases.
Figure
8 gives the number of large reference sequences when the probability of traveling to each
child node from an internal node is determined from a Zipf-like distribution. Different values of
parameter theta for the Zipf-like distribution are considered. The Zipf-like distribution of branching
probabilities to child nodes is generated as follows. The probability p i that the i-th child node
is accessed by a traversal path is p i = c / i^theta, where c = 1 / (sum over j = 1..n of 1/j^theta) is a normalization
constant and n is the number of child nodes at an internal node. After we get each p i , it is then
normalized so that the sum over all child nodes equals 1 - p 0 , as in Section 4.1. Setting the parameter
theta = 1 corresponds to the pure Zipf distribution, which is highly skewed, whereas theta = 0 corresponds to the uniform
distribution. The results show that the number of large reference sequences increases when the
corresponding probabilities are more skewed.
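The Zipf-like branching probabilities can be generated with a few lines of Python (our own sketch; theta = 0 gives the uniform case and theta = 1 the pure Zipf case, and the child weights are normalized to 1 - p 0 as in Section 4.1):

def zipf_like_weights(n, theta, p0=0.25):
    # p_i proportional to 1 / i^theta, scaled so the child weights sum to 1 - p0.
    raw = [1.0 / (i ** theta) for i in range(1, n + 1)]
    return [(1.0 - p0) * w / sum(raw) for w in raw]

print([round(w, 3) for w in zipf_like_weights(4, theta=0.0)])  # uniform: all 0.188
print([round(w, 3) for w in zipf_like_weights(4, theta=1.0)])  # skewed toward the first child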
Table
5 shows the performance results of SS when the number of internal nodes with internal
jumps, N J , varies from 3% to 27 % of the total internal nodes. The number of large reference
sequences decreases slightly as N J increases, meaning that it is less likely to have large reference sequences when we have more jumps in the traversal paths.
Figure 7: Number of large reference sequences when the fanout F is varied.
Figure 8: Number of large reference sequences when the parameter theta (degree of skew) of the Zipf-like distribution is varied.
Table 5: Number of large reference sequences when the percentage of internal jumps N J is varied.
Table 6: Number of large reference sequences when the height of the traversal tree H is varied.
It is noted that the performance of SS is less
sensitive to this parameter than to others.
Table
6 shows results of SS when the height of a traversal tree varies. The fanout is between 2
and 5. As the height increases, the numbers of internal nodes and leaf nodes increase exponentially.
The height of a traversal tree is increased from 3 to 20. As the height of a traversal tree increases,
the number of candidate nodes for L 1 increases and the execution time to find L 1 thus increases.
On the other hand, |L 1 | decreases as the height of the tree increases since the average visit to each
node decreases. The number of large reference sequences slightly decreases, for 1 - k - 3, when
the height of the tree increases from 5 to 20.
5 Conclusion
In this paper, we have explored a new data mining capability which involves mining traversal
patterns in an information providing environment where documents or objects are linked together
to facilitate interactive access. Our solution procedure consisted of two steps. First, we derived
algorithm MF to convert the original sequence of log data into a set of maximal forward references.
By doing so, we filtered out the effect of some backward references and concentrated on mining
meaningful user access sequences. Second, we developed algorithms to determine large reference
sequences from the maximal forward references obtained. Two algorithms were devised for determining
large reference sequences: one was based on some hashing and pruning techniques, and the
other was further improved with the option of determining large reference sequences in batch so
as to reduce the number of database scans required. Performance of these two methods has been
comparatively analyzed. It is shown that the option of selective scan is very advantageous and algorithm
SS thus in general outperformed algorithm FS. Sensitivity analysis on various parameters
was conducted.
Acknowledgements
M.-S. Chen is in part supported by National Science Council, Project No. NSC 86-2621-E-002-
023-T, Taiwan, ROC. J. S. Park is supported by the Grants for Professors of Sungshin Women's
University in 1997, Korea.
--R
Efficient Similarity Search in Sequence Databases.
An Interval Classifier for Database Mining Applications.
Mining Association Rules between Sets of Items in Large Databases.
Fast Algorithms for Mining Association Rules in Large Databases.
Mining Sequential Patterns.
Knowledge Mining by Imprecise Querying: A Classification-Based Approach
Hypertext Transfer Protocol-HTTP/1.0
Backtracking in a Multiple-Window Hypertext Environment
Browsing in Hypertext: A Cognitive Study.
Characterizing browsing strategies in the world-wide web
The World Wide Web Unleashed.
Discovery of Multiple-Level Association Rules from Large Databases
Efficient and Effective Clustering Methods for Spatial Data Mining.
An Effective Hash Based Algorithm for Mining Association Rules.
Analysis and Presentation of Strong Rules.
Induction of Decision Trees.
Personal communication
Combinatorial Pattern Discovery for Scientific Data: Some Preliminary Results.
--TR
--CTR
D. Avramouli , J. Garofalakis , D. J. Kavvadias , C. Makris , Y. Panagis , E. Sakkopoulos, Popular web hot spots identification and visualization, Special interest tracks and posters of the 14th international conference on World Wide Web, May 10-14, 2005, Chiba, Japan
Holmquist , N. Hari Narayanan, Tightly coupling authoring and evaluation in an integrated tool to support iterative design of interactive hypermedia educational manuals, Proceedings of the conference on Designing interactive systems: processes, practices, methods, and techniques, p.155-164, August 17-19, 2000, New York City, New York, United States
Wenwu Lou , Hongjun Lu, Efficient prediction of web accesses on a proxy server, Proceedings of the eleventh international conference on Information and knowledge management, November 04-09, 2002, McLean, Virginia, USA
Hua-Fu Li , Suh-Yin Lee , Man-Kwan Shan, On mining webclick streams for path traversal patterns, Proceedings of the 13th international World Wide Web conference on Alternate track papers & posters, May 19-21, 2004, New York, NY, USA
Jian-Chih Ou , Chang-Hung Lee , Ming-Syan Chen, Web log mining with adaptive support thresholds, Special interest tracks and posters of the 14th international conference on World Wide Web, May 10-14, 2005, Chiba, Japan
Alexandros Nanopoulos , Yannis Manolopoulos , Maciej Zakrzewicz , Tadeusz Morzy, Indexing web access-logs for pattern queries, Proceedings of the 4th international workshop on Web information and data management, November 08-08, 2002, McLean, Virginia, USA
Mao Chen , Andrea S. LaPaugh , Jaswinder Pal Singh, Predicting category accesses for a user in a structured information space, Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval, August 11-15, 2002, Tampere, Finland
Tiffany Y. Tang , Gordon McCalla, Student modeling for a web-based learning environment: a data mining approach, Eighteenth national conference on Artificial intelligence, p.967-968, July 28-August 01, 2002, Edmonton, Alberta, Canada
Kun-Ta Chuang , Ming-Syan Chen, Frequent pattern discovery with memory constraint, Proceedings of the 14th ACM international conference on Information and knowledge management, October 31-November 05, 2005, Bremen, Germany
Hai Zhuge , Jie Liu, A fuzzy collaborative assessment approach for knowledge grid, Future Generation Computer Systems, v.20 n.1, p.101-111, January 2004
Brenda F. Miles , Vir V. Phoha, The bipartite clique: a topological paradigm for WWWeb user search customization, Proceedings of the 43rd annual southeast regional conference, March 18-20, 2005, Kennesaw, Georgia
Ajumobi Udechukwu , Ken Barker , Reda Alhajj, A framework for representing navigational patterns as full temporal objects, ACM SIGecom Exchanges, v.5 n.2, p.23-33, November, 2004
Wen-Chih Peng , Ming-Syan Chen, Shared Data Allocation in a Mobile Computing System: Exploring Local and Global Optimization, IEEE Transactions on Parallel and Distributed Systems, v.16 n.4, p.374-384, April 2005
Minos N. Garofalakis , Rajeev Rastogi , Kyuseok Shim, SPIRIT: Sequential Pattern Mining with Regular Expression Constraints, Proceedings of the 25th International Conference on Very Large Data Bases, p.223-234, September 07-10, 1999
Tzung-Shi Chen , Shih-Chun Hsu, Mining frequent tree-like patterns in large datasets, Data & Knowledge Engineering, v.62 n.1, p.65-83, July, 2007
Yunjuan Xie , Vir V. Phoha, Web user clustering from access log using belief function, Proceedings of the 1st international conference on Knowledge capture, October 22-23, 2001, Victoria, British Columbia, Canada
Tseng , Cing-Fu Tsui, An efficient method for mining associated service patterns in mobile web environments, Proceedings of the ACM symposium on Applied computing, March 09-12, 2003, Melbourne, Florida
Cheng-Ru Lin , Chang-Hung Lee , Ming-Syan Chen , Philip S. Yu, Distributed data mining in a chain store database of short transactions, Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, July 23-26, 2002, Edmonton, Alberta, Canada
Anindya Datta , Kaushik Dutta , Helen Thomas , Debra VanderMeer , Krithi Ramamritham, Accelerating Dynamic Web Content Generation, IEEE Internet Computing, v.6 n.5, p.27-36, September 2002
Akihiro Inokuchi , Takashi Washio , Hiroshi Motoda, Complete Mining of Frequent Patterns from Graphs: Mining Graph Data, Machine Learning, v.50 n.3, p.321-354, March
Chang-Hung Lee , Cheng-Ru Lin , Ming-Syan Chen, Sliding window filtering: an efficient method for incremental mining on a time-variant database., Information Systems, v.30 n.3, p.227-244, May 2005
M. Garofalakis , R. Rastogi , K. Shim, Mining Sequential Patterns with Regular Expression Constraints, IEEE Transactions on Knowledge and Data Engineering, v.14 n.3, p.530-552, May 2002
Karuna P. Joshi , Anupam Joshi , Yelena Yesha, On Using a Warehouse to Analyze Web Logs, Distributed and Parallel Databases, v.13 n.2, p.161-180, March
Qinbao Song , Martin Shepperd, Mining web browsing patterns for E-commerce, Computers in Industry, v.57 n.7, p.622-630, September 2006
Chin-Chen Chang , Chih-Yang Lin , Henry Chou, Perfect hashing schemes for mining traversal patterns, Fundamenta Informaticae, v.70 n.3, p.185-202, April 2006
Alexander Mikroyannidis , Babis Theodoulidis, A Theoretical Framework and an Implementation Architecture for Self Adaptive Web Sites, Proceedings of the 2004 IEEE/WIC/ACM International Conference on Web Intelligence, p.558-561, September 20-24, 2004
Roderick L. Lee, Web mining: creating structure out of chaos, Managing data mining technologies in organizations: techniques and applications, Idea Group Publishing, Hershey, PA,
Yen-Liang Chen , Ya-Han Hu, Constraint-based sequential pattern mining: the consideration of recency and compactness, Decision Support Systems, v.42 n.2, p.1203-1215, November 2006
Wen-Chih Peng , Ming-Syan Chen, Developing Data Allocation Schemes by Incremental Mining of User Moving Patterns in a Mobile Computing System, IEEE Transactions on Knowledge and Data Engineering, v.15 n.1, p.70-85, January
Alexandros Nanopoulos , Yannis Manolopoulos, Efficient similarity search for market basket data, The VLDB Journal The International Journal on Very Large Data Bases, v.11 n.2, p.138-152, October 2002
Minos Garofalakis , Rajeev Rastogi, Scalable data mining with model constraints, ACM SIGKDD Explorations Newsletter, v.2 n.2, p.39-48, Dec. 2000
Kamal Ali , Steven P. Ketchpel, Golden Path Analyzer: using divide-and-conquer to cluster Web clickstreams, Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, August 24-27, 2003, Washington, D.C.
Shao-Shin Hung , Ting-Chia Kuo , Damon Shing-Min Liu, An Efficient Mining and Clustering Algorithm for Interactive Walk-Through Traversal Patterns, Proceedings of the 2004 IEEE/WIC/ACM International Conference on Web Intelligence, p.356-362, September 20-24, 2004
Qiankun Zhao , Sourav S. Bhowmick , Le Gruenwald, WAM-Miner: in the search of web access motifs from historical web log data, Proceedings of the 14th ACM international conference on Information and knowledge management, October 31-November 05, 2005, Bremen, Germany
Chang-Hung Lee , Cheng-Ru Lin , Ming-Syan Chen, Sliding-window filtering: an efficient algorithm for incremental mining, Proceedings of the tenth international conference on Information and knowledge management, October 05-10, 2001, Atlanta, Georgia, USA
Jun Wook Lee , Ok Hyun Paek , Keun Ho Ryu, Temporal moving pattern mining for location-based service, Journal of Systems and Software, v.73 n.3, p.481-490, November-December 2004
Yi-Hung Wu , Arbee L. P. Chen, Prediction of Web Page Accesses by Proxy Server Log, World Wide Web, v.5 n.1, p.67-88, 2002
Huang , Fuchun Peng , Aijun An , Dale Schuurmans, Dynamic web log session identification with statistical language models, Journal of the American Society for Information Science and Technology, v.55 n.14, p.1290-1303, December 2004
Weiyang Lin , Sergio A. Alvarez , Carolina Ruiz, Efficient Adaptive-Support Association Rule Mining for Recommender Systems, Data Mining and Knowledge Discovery, v.6 n.1, p.83-105, January 2002
Zhixiang Chen , Ada Wai-Chee Fu , Frank Chi-Hung Tong, Optimal Algorithms for Finding User Access Sessions from Very Large Web Logs, World Wide Web, v.6 n.3, p.259-279, September
Ali Amiri, Dare to share: Protecting sensitive knowledge with data sanitization, Decision Support Systems, v.43 n.1, p.181-191, February, 2007
Igor Cadez , David Heckerman , Christopher Meek , Padhraic Smyth , Steven White, Model-Based Clustering and Visualization of Navigation Patterns on a Web Site, Data Mining and Knowledge Discovery, v.7 n.4, p.399-424, October
Jos Borges , Mark Levene, A fine grained heuristic to capture web navigation patterns, ACM SIGKDD Explorations Newsletter, v.2 n.1, p.40-50, June, 2000
Yen-Liang Chen , Shih-Sheng Chen , Ping-Yu Hsu, Mining hybrid sequential patterns and sequential rules, Information Systems, v.27 n.5, p.345-362, July 2002
George Pallis , Lefteris Angelis , Athena Vakali, Validation and interpretation of Web users' sessions clusters, Information Processing and Management: an International Journal, v.43 n.5, p.1348-1367, September, 2007
Wei-Guang Teng , Cheng-Yue Chang , Ming-Syan Chen, Integrating Web Caching and Web Prefetching in Client-Side Proxies, IEEE Transactions on Parallel and Distributed Systems, v.16 n.5, p.444-455, May 2005
single-pass mining of path traversal patterns over streaming web click-sequences, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.50 n.10, p.1474-1487, 14 July 2006
Jan-Ming Ho, Entropy-based link analysis for mining web informative structures, Proceedings of the eleventh international conference on Information and knowledge management, November 04-09, 2002, McLean, Virginia, USA
Shao-Shin Hung , Damon Shing-Min Liu, Efficient reduction of access latency through object correlations in virtual environments, EURASIP Journal on Applied Signal Processing, v.2007 n.1, p.178-178, 1 January 2007
Alexandros Nanopoulos , Dimitrios Katsaros , Yannis Manolopoulos, A Data Mining Algorithm for Generalized Web Prefetching, IEEE Transactions on Knowledge and Data Engineering, v.15 n.5, p.1155-1169, September
Holmquist , N. Hari Narayanan, An integrated architecture for tightly coupled design and evaluation of educational multimedia, Information SciencesInformatics and Computer Science: An International Journal, v.140 n.1, p.127-152, January 2002
Karuna P. Joshi , Anupam Joshi , Yelena Yesha , Raghu Krishnapuram, Warehousing and mining Web logs, Proceedings of the 2nd international workshop on Web information and data management, p.63-68, November 02-06, 1999, Kansas City, Missouri, United States
Hung-Yu Kao , Shian-Hua Lin , Jan-Ming Ho , Ming-Syan Chen, Mining Web Informative Structures and Contents Based on Entropy Analysis, IEEE Transactions on Knowledge and Data Engineering, v.16 n.1, p.41-55, January 2004
Pranam Kolari , Anupam Joshi, Web Mining: Research and Practice, Computing in Science and Engineering, v.6 n.4, p.49-53, July 2004
Anthony J. T. Lee , Yao-Te Wang, Efficient data mining for calling path patterns in GSM networks, Information Systems, v.28 n.8, p.929-948, December
Jose M. Pea, Intelligent Web mining, Intelligent exploration of the web, Physica-Verlag GmbH, Heidelberg, Germany, | data mining;performance analysis;distributed information system;World Wide Web;traversal patterns |
627922 | Performance Analysis of Three Text-Join Algorithms. | Abstract: When a multidatabase system contains textual database systems (i.e., information retrieval systems), queries against the global schema of the multidatabase system may contain a new type of joins: joins between attributes of textual type. Three algorithms for processing such a type of joins are presented and their I/O costs are analyzed in this paper. Since such a type of joins often involves document collections of very large size, it is very important to find efficient algorithms to process them. The three algorithms differ on whether the documents themselves or the inverted files on the documents are used to process the join. Our analysis and the simulation results indicate that the relative performance of these algorithms depends on the input document collections, system characteristics, and the input query. For each algorithm, the type of input document collections with which the algorithm is likely to perform well is identified. An integrated algorithm that automatically selects the best algorithm to use is also proposed. | Introduction
Research in multidatabase systems has intensified in recent years [4, 5, 9, 13, 12, 16, 19]. In this
paper, we consider a multidatabase system that contains both local systems that manage structured data
(e.g., relational DBSs) and local systems that manage unstructured data (e.g., information retrieval (IR)
systems for handling text).
The global schema of a multidatabase system, integrated from local database schemas, provides an
overall picture of all sharable data in the local systems. The global query language can be used to specify
queries against the global schema, which will be referred to as global queries thereafter, and to retrieve data
represented by the global schema. For example, if the global schema is in relational data model, then SQL
can be used as the global query language. Since the multidatabase system considered in this paper contains
1 Department of Computer Science, State University of New York at Binghamton, Binghamton, NY 13902-6000. Email:
[email protected].
2 Department of Electrical Engineering and Computer Science, University of Illinois at Chicago, Chicago, IL 60607. Email:
[email protected].
3 Computer Science Department, UCLA, Los Angeles, CA 90024. Email: [email protected].
4 School of Computer Science, Florida International University, Miami, FL 33199. Email: [email protected].
IR components and relational components, the global query language must be capable of accommodating
both structured data and unstructured data. An SQL-based query language that can serve such a purpose
has been proposed in [1]. In this paper, we extend the features of this language to specify our queries.
Because we have a database front-end, global users may submit queries that contain joins between
attributes of textual type. A motivating example is presented in Section 2. A likely join comparator
for textual attributes is SIMILAR TO that matches objects with similar textual contents based on some
similarity function. Since each textual object is essentially a document, the join is to pair similar documents
among the two document collections corresponding to the two textual attributes. Although other types of
comparators between textual attributes may exist, the SIMILAR TO operator is a key operator for textual
data and therefore we concentrate on this operator in this paper.
While processing joins between non-textual attributes have been studied extensively, not much research
has been reported on processing joins between textual attributes in the literature. In [6], the authors
reported a case study on automating the assignment of submitted papers to reviewers. The reported study
requires matching the abstract of each submitted paper with a number of profiles of potential reviewers.
The problem is essentially to process a join between two textual attributes. Since the document collections
involved are small, efficient processing of the join is not their concern. Instead, the emphasis of
that work is on the accuracy of the automated match. A somewhat related problem is the consecutive
retrieval problem [7, 17] which is to determine, for a given set of queries Q against a set of records R,
whether there exists an organization of the records such that for each query in Q, all relevant records
(loosely, similar records) can be stored in consecutive storage locations. If we interpret Q and R as two
document collections, then the consecutive retrieval problem deals with the storage aspect of efficient retrieval
of relevant documents from one collection for each document from another collection. However, a major
difference between consecutive retrieval problem and the join processing problem is that the former assumes
the knowledge of which documents from R are relevant to each document in Q while the latter needs to find
which documents from one collection are most similar to each document from another collection. Another
related problem is the processing of a set of queries against a document collection in batch. There are several
differences between this batch query problem and the join problem: (1) For the former, many statistics about
the queries which are important for query processing and optimization such as the frequency of each term in
the queries are not available unless they are collected explicitly, which is unlikely since the batch may only
need to be processed once and it is unlikely to be cost effective to collect these statistics. (2) Special data
structures commonly associated with a document collection such as an inverted file is unlikely to be available
for the batch for the same reason given above. As we will see in this paper, the availability of inverted files
means the applicability of certain algorithms. The clustering problem in IR systems [14] requires to find, for
each document d, those documents similar to d in the same document collection. This can be considered as
a special case of the join problem as described here when the two document collections involving the join
are identical.
A straightforward way exists for processing joins between textual attributes in a multidatabase environ-
ment. This method can be described as follows: Treat each document in one collection as a query and process
each such query against the other collection independently to find the most similar documents. However,
this method is extremely expensive since either all documents in one of the two collections are searched or
the inverted file of that collection is utilized once for processing each document in the other collection. As
an example, consider the Smart system [3] developed at Cornell University. The Smart system uses inverted
file to process user queries. If the collection whose documents are used as queries has a large number of
documents, then using the inverted file of the other collection to process each query independently can easily
incur a cost which is several orders of magnitude higher than that of a better join algorithm (see Section 6).
Therefore, it is very important to develop efficient algorithms for processing joins between textual attributes.
This paper has the following contributions: (1) We present and analyze three algorithms for processing joins
between attributes of textual type. (2) Cost functions based on the I/O cost for each of the algorithms are
provided. (3) Simulation is done to compare the performance of the proposed algorithms. Our investigation
indicates that no one algorithm is definitely better than all other algorithms in all circumstances. In other
words, each algorithm has its unique value in different situations. (4) We provide insight on the type of input
document collections with which each algorithm is likely to perform well. We further give an algorithm which
determines which one of the three algorithms should be used for processing a text-join. We are not aware of
any similar study that has been reported before.
The rest of this paper is organized as follows. A motivating example is presented in Section 2. In Section
3, we include the assumptions and notations that we need in this paper. The three join algorithms are
introduced in Section 4. Cost analyses and comparisons of the three algorithms are presented in Section
5. In Section 6, simulation is carried out to further compare the proposed algorithms and to suggest which
algorithm to use for a particular situation. An integrated algorithm that automatically selects the best
algorithm to use is also included in this section. We conclude our discussion in Section 7.
Motivating Example
Assume that the following two global relations have been obtained after schema integration: Applicants(SSN,
Name, Resume) and Positions(P#, Title, Job descr), where relation Applicants contains information of
applicants for job positions in relation Positions, and Resume and Job descr are of type text. Consider the
query to find, for each position, - applicants whose resumes are most similar to the position's description.
This query can be expressed in extended SQL as follows:
select P.P#, P.Title, A.SSN, A.Name
from Positions P, Applicants A
where A.Resume SIMILAR TO(-) P.Job descr
The where-clause of the above query contains a join on attributes of textual type. This type of joins
do not appear in traditional database systems. Note that "A.Resume SIMILAR TO(-) P.Job descr" and
"P.Job descr SIMILAR TO(-) A.Resume" have different semantics. The former is to find - resumes for
each job description while the latter is to find - job descriptions for each resume. All job descriptions will
be listed as output by the former. However, a job description may not be listed in the output by the latter
if it is not among the - most similar job descriptions to any resume. Later, we will see that the asymmetry
of the operator SIMILAR TO has some impact on the evaluation strategy.
There are some important differences between joins in relational database systems and the join between
two textual attributes. Consider the relational join R1.A theta R2.A, where theta is a comparator such as = and
>. Given a tuple t1 of R1 and a tuple t2 of R2, if t1[A] theta t2[A] is true, then we immediately know that
t1 and t2 satisfy the join. However, for a given resume r and a given job description j, there is no way for
us to know immediately whether or not r SIMILAR TO(-) j is true since to be sure that r is among the -
resumes most similar to j, all resumes have to be considered. If we process the join by comparing each job
description with all resumes, then after a job description d is compared with all resumes, the - resumes most
similar to d can be identified and a partial result is produced. However, if we process the join by comparing
each resume with all job descriptions, then after a resume is compared with all job descriptions, no partial
result can be generated. In this case, many intermediate results (i.e., similarity values between resumes and
job descriptions) need to be maintained in the main memory. This observation indicates that comparing
each job description with all resumes is a more natural way to process the above textual join.
Due to selection conditions on other attributes of the relations that contain textual attributes, it is
possible that only a subset of the set of documents in a collection need to participate in a join. For example,
consider the query that is to find, for each position whose title contains "Engineer", - applicants whose
resumes are most similar to the position's description.
select P.P#, P.Title, A.SSN, A.Name
from Positions P, Applicants A
where P.Title like "%Engineer%" and A.Resume SIMILAR TO(-) P.Job descr
If selection P.Title like "%Engineer%" is evaluated first, then only those job descriptions whose position
title contains "Engineer" need to participate in the join.
In this paper, we are interested in studying algorithms that can be used to process the following query:
select R1.X1, R2.Y2
from R1, R2
where R1.C1 SIMILAR TO(-) R2.C2
where C1 and C2 are attributes representing two document collections (collection 1 and collection 2, respec-
tively). Clearly, the join to be evaluated is of the form: "C1 SIMILAR TO(-) C2". The impact of selections
will also be addressed.
3 Assumptions and Notations
Using the vector representation [14], each document can be represented as a list of terms together with their
number of occurrences in the document. Each term is associated with a weight indicating the importance
of the term in the document. Usually, terms are identified by numbers to save space. We assume that each
document consists of a list of cells of the form (t#, w), called document-cell or d-cell, where t# is a term
number and w is the number of occurrences of the term t in the document. All d-cells in a document are
ordered in ascending term numbers. The size of each d-cell is |t#| + |w|, where |X| denotes the number of
bytes needed to contain X. In practice, |t#| = 3 and |w| = 2 bytes are sufficient. In a multidatabase environment, different
numbers may be used to represent the same term in different local IR systems due to the local autonomy.
Several methods may be used to overcome this problem. One method is to use actual terms rather than term
numbers. The disadvantage is that the size of the document collection will become much larger. Another
method is to establish a mapping between the corresponding numbers identifying the same term. Such a
mapping structure, usually a table with two columns, if not stored in the main memory, can substantially
degrade the performance. However, only approximately 150 pages, each of size 4KB, are needed for
the mapping structure to accommodate 100,000 distinct terms. Since the total size of the mapping structure
is less than 1MB, it is likely that the mapping structure can be held in the memory. An attractive method
is to have a standard mapping from terms to term numbers and have all local IR systems use the same
mapping. Such a standard can be very beneficial in improving the performance of the multidatabase system.
It can save on communication costs (no actual terms need to be transferred) and processing costs (it is
more efficient to compare numbers than actual terms, and there is no need to search the mapping table).
To simplify our presentation, we assume that the same number is always used to represent the same term in
all local IR systems. Note that this assumption can be simulated by always keeping the mapping structure
in the memory when different numbers are used to represent the same term in different local systems. In
the remaining discussion, terms and term numbers will be used interchangeably.
Let t_1, ..., t_n be all the common terms between documents D1 and D2. Let u_1, ..., u_n and v_1, ..., v_n
be the numbers of occurrences of these terms in D1 and D2, respectively. The similarity between D1 and
D2 can be defined as sim(D1, D2) = u_1·v_1 + u_2·v_2 + ... + u_n·v_n.
A more realistic similarity function is to divide the similarity by the
norms of the documents and to incorporate the use of the inverse document frequency weight [14], which
assigns higher weights to terms which occur in fewer documents. The normalization can be carried out by
pre-computing the norms of the documents, storing them and performing the divisions during the processing
of the documents. The inverse document frequency weight can be pre-computed for each term and stored
as part of the list heads in the inverted files. For the sake of simplicity of presentation, we use the
number of occurrences instead of weights.
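As an illustration (the function and variable names below are ours, not part of any particular system), the similarity of two documents given as d-cell lists sorted by term number can be computed with a simple merge, with the optional normalization by the document norms as described above:

from math import sqrt

def similarity(d1, d2, normalize=False):
    # d1, d2: lists of d-cells (term number, number of occurrences), sorted by term number.
    # Returns the sum of u_i * v_i over the common terms, optionally divided by the
    # product of the document norms.
    i = j = 0
    sim = 0.0
    while i < len(d1) and j < len(d2):
        t1, u = d1[i]
        t2, v = d2[j]
        if t1 == t2:            # common term: add its contribution u * v
            sim += u * v
            i += 1
            j += 1
        elif t1 < t2:
            i += 1
        else:
            j += 1
    if normalize and sim > 0:
        norm1 = sqrt(sum(u * u for _, u in d1))
        norm2 = sqrt(sum(v * v for _, v in d2))
        sim /= norm1 * norm2
    return sim

# Documents sharing terms 3 and 7: similarity = 1*2 + 4*1 = 6.0
print(similarity([(1, 2), (3, 1), (7, 4)], [(3, 2), (5, 1), (7, 1)]))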
For a given term t in a given document collection C, the inverted file entry consists of a list of i-cells
(short for inverted-file-cell) of the form (d#, w), where d# is a document number and w is the number
of occurrences of t in the document with number d#. We assume that i-cells in each inverted file entry
are ordered in ascending document numbers. The size of each i-cell is |d#| + |w|; i-cells and d-cells have
approximately the same size.
We use the following notations in our discussion:
N_i - the number of documents in collection i, i = 1, 2
B - the size of the available memory buffer in pages
T_i - the number of terms in collection i
Bt_i - the size of the B+tree for collection i in pages (assume tightly packed, i.e., no space is left unused in
each page except possibly the last page)
p - the probability that a term in collection C1 also appears in collection C2
q - the probability that a term in collection C2 also appears in collection C1
α - the cost ratio of a random I/O over a sequential I/O
P - the size of a page in bytes (4KB)
K_i - the average number of terms in a document in collection i
J_i - the average size of an inverted file entry on collection i in pages (5·(K_i·N_i)/(T_i·P))
I_i - the size of the inverted file on collection i in pages (J_i·T_i, assume tightly packed)
S_i - the average size of a document in collection i in pages (5·K_i/P)
D_i - the size of collection i in pages (S_i·N_i, assume tightly packed)
I_i^t - the inverted file entry of term t on collection i
- - the parameter used in operator SIMILAR TO(-)
δ - the fraction of the similarities that are non-zero
We assume that documents in each collection are stored in consecutive storage locations. Therefore, when
all documents in collection i are scanned in storage order, the total number of pages read in will be D_i, which
is also the total I/O cost. On the other hand, if documents in collection i are read in one at a time in random
order, and a document is not kept in the memory after it is processed, then the total number of pages read
in will approximately be N_i·⌈S_i⌉ and the total cost will approximately be N_i·⌈S_i⌉·α, where ⌈X⌉ denotes
the ceiling of X and α is the cost ratio of a random I/O over a sequential I/O due to the additional seek
and rotational delay of a random read. Similarly, we assume that inverted file entries on each collection are
stored in consecutive storage locations in ascending term numbers and typically ⌈J_i⌉ pages will be read in
when an inverted file entry is brought in the memory in random order.
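To make the cost model concrete, a small sketch follows (the helper functions are ours; the WSJ-like numbers and the value of α used in the example are only illustrative):

from math import ceil

def sequential_scan_cost(D_i):
    # Cost of reading a collection (or inverted file) of D_i pages in storage order.
    return D_i

def random_read_cost(N_i, S_i, alpha):
    # Approximate cost of reading N_i documents of average size S_i pages one at a
    # time in random order; alpha is the cost ratio of a random I/O over a sequential I/O.
    return N_i * ceil(S_i) * alpha

# Illustrative WSJ-like numbers (see Table 1) with an assumed alpha of 5:
print(sequential_scan_cost(40605))        # 40605 sequential page reads
print(random_read_cost(98736, 0.41, 5))   # 98736 * 1 * 5 = 493680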
Note that for a given document collection, if document numbers and term numbers have the same size,
then its total size is the same as the total size of its corresponding inverted file.
In this paper, only the I/O cost will be used to analyze and compare the different algorithms, as we assume a
centralized environment where I/O cost dominates CPU cost. Cost analysis and comparisons for a distributed
environment will be conducted in the future.
4 Algorithms
In this section, we present three algorithms for processing joins on textual attributes. These algorithms will
be analyzed and compared in the next two sections. We assume the existence of the inverted file on all
document collections.
Depending on how documents and/or inverted files are used to evaluate a join, three basic algorithms can
be constructed. The first algorithm is to use only documents to process the join, the second algorithm is to
use documents from one collection and the inverted file from another collection to evaluate the join, and the
third algorithm uses inverted files from both collections to do the same job. A collection of documents can
be represented by a document-term matrix where the rows are the documents and the columns are the terms
or the inverted file entries of the terms. Therefore, we name the first algorithm the Horizontal-Horizontal
Nested Loop (HHNL), the second algorithm the Horizontal-Vertical Nested Loop (HVNL), and
the third algorithm the Vertical-Vertical Merge (VVM).
4.1 Algorithm HHNL
A straightforward way for evaluating the join is to compare each document in one collection with every
document in the other collection. Although simple, this method has several attractive properties. First, if
one or two of the collections can be reduced by some selection conditions, only the remaining documents
need to be considered. Second, documents can generally be read in sequentially resulting in sequential I/Os.
From the discussion in Section 2, we know that it is more natural to process the join by comparing each
document in C2 with all documents in C1. That is, it is more natural to use C2 as the outer collection and
C1 as the inner collection in the join evaluation. We call this order the forward order and the reverse order
the backward order. The backward order can be more efficient if C1 is much smaller than C2. We consider
the forward order first.
We adopt the policy of letting the outer collection use as much memory space as possible. The case that
lets the inner collection use as much memory space as possible is equivalent to the backward order which will
be discussed later. With this memory allocation policy, the algorithm HHNL can be described as follows:
After reading in the next X documents of C2 into the main memory, for some integer X to be determined,
scan the documents in C1 and while a document in C1 is in the memory, compute the similarity between
this document and every document in C2 that is currently in the memory. For each document d2 in C2, keep
track of only those documents in C1 which have been processed against d2 and have the - largest similarities
with d2.
More rigorously, with C2 as the outer collection, we need to reserve the space to accommodate at least
one document in C1. That is, dS 1 e pages of the memory need to be reserved for C1. We also need to reserve
the space to save the - similarities for each document in C2 currently in the memory. Assume that each
similarity value occupies 4 bytes. Then the number of documents in C2 that can be held in the memory
buffer of size B can be estimated as X1 = ⌊(B − ⌈S_1⌉)/(S_2 + 4·-/P)⌋, where P is the size of a page in bytes.
We now present the algorithm HHNL:
While (there are documents in C2 to be read in)
{ If there are at least X1 unprocessed documents in C2 left
    input the next X1 unprocessed documents in C2 into the main memory;
  Else input the remaining unprocessed documents in C2 into the main memory;
  For each unprocessed d2 in C2 in the memory
    For each document d1 in C1
    { compute the similarity between d2 and d1;
      If it is greater than the smallest of the - largest similarities computed so far for d2
      { replace the smallest of the - largest similarities by the new similarity;
        update the list of the documents in C1 to keep track of those documents
        with the - largest similarities with d2;
      }
    }
}
If - is large, then a heap structure can be used to find the smallest of the - largest similarities in the
above algorithm.
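The loop structure above can be sketched in Python as follows (a simplification that ignores paging and the computation of X1; k stands for the parameter of SIMILAR TO, and all function and variable names are ours):

import heapq

def hhnl_forward(c2_batches, scan_c1, similarity, k):
    # c2_batches: iterable of lists of (doc_id, d_cells) from the outer collection C2,
    #             one list per memory load of X1 documents.
    # scan_c1:    callable returning a fresh iterator over (doc_id, d_cells) of C1,
    #             so that C1 is scanned once per batch.
    # Returns {d2_id: [(similarity, d1_id), ...]} holding the k largest similarities per d2.
    result = {}
    for batch in c2_batches:
        heaps = {d2_id: [] for d2_id, _ in batch}      # one size-k heap per outer document
        for d1_id, d1 in scan_c1():                    # scan the inner collection C1
            for d2_id, d2 in batch:
                s = similarity(d1, d2)
                h = heaps[d2_id]
                if len(h) < k:
                    heapq.heappush(h, (s, d1_id))
                elif s > h[0][0]:                      # beats the smallest of the k largest
                    heapq.heapreplace(h, (s, d1_id))
        for d2_id, h in heaps.items():
            result[d2_id] = sorted(h, reverse=True)
    return result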
We now consider the backward order. When C1 is used as the outer collection to evaluate the join, C2
will be scanned for each set of documents in C1 currently in the memory. Let d1 be the first document in
C1 read in the memory. After C2 is scanned, the N 2 similarities between d1 and every document in C2 are
computed. Since for each document in C2, we need to find the - documents in C1 that are most similar to
it, we need to keep track of the - documents in C1 that have the largest - similarities for each document in
C2. This means that we need to keep track of the -·N_2 similarities during the backward order evaluation.
In other words, we need a memory space of size 4·-·N_2/P to keep these similarities. Compared with the
forward order which requires 4·-·X1/P pages to keep track of the needed similarities, more memory space is
needed to save the similarities for the backward order. This will have an adverse impact on the performance
of the backward order. As a result, the forward order is likely to perform better than the backward order
when the two document collections have about the same size. However, when C1 is much smaller than C2,
then the backward order can still outperform the forward order. For example, if C1 can be entirely held in
the memory, then only one scan of each collection is needed to process the join with the backward order no
matter how large C2 is.
4.2 Algorithm HVNL
This algorithm uses the documents in one collection and the inverted file for the other collection to compute
the similarities. In an information retrieval system, processing a user query, which can be considered as a
document, is to find the - documents in the system which are most similar to the user query. One way to
process such a query is to compare it with each document in the system. This method requires almost all
non-zero entries in the document-term matrix be accessed. A more efficient way is to use the inverted file on
the document collection to process the query. This method is used in the Smart system [3]. The advantage
of this method is that it only needs to access those non-zero entries in the columns of the document-term
matrix which correspond to the terms in the query. Since the number of terms in a query is usually a very
small fraction of the total number of terms in all documents in the system, the inverted file based method
accesses only a very small portion of the document-term matrix. Algorithm HVNL is a straightforward
extension of this method to the situation where we need to find the - most similar documents from one
collection for every document in another collection.
The process of using the inverted file to compute the similarities between a document d in C2 to documents
in C1 can be described as follows. Let (t, w) be the next d-cell to be considered in d. Let the inverted file
entry corresponding to t on C1 be {(d_1, w_1), ..., (d_n, w_n)}, where d_i's are document numbers. After t is
processed, the similarity between d and document d_i as accumulated so far will be U_i + w·w_i, where U_i is
the accumulated similarity between d and d_i before t is considered, and w·w_i is the contribution due to the
sharing of the term t between d and d_i, i = 1, ..., n. After all terms in d are processed, the similarities between
d and all documents in C1 will be computed, and the - documents in C1 which are most similar to d can be
identified.
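A minimal sketch of this accumulation step, assuming the inverted file on C1 is available through a mapping from term numbers to lists of i-cells (the names are ours):

import heapq

def top_k_for_document(d, inverted_c1, k):
    # d:           list of d-cells (term number, w) of one document in C2.
    # inverted_c1: dict mapping a term number to its inverted file entry on C1,
    #              i.e. a list of i-cells (document number, w).
    # Returns the k documents of C1 with the largest accumulated similarities.
    acc = {}                                          # intermediate similarities U_i
    for t, w in d:
        for d_i, w_i in inverted_c1.get(t, []):       # terms absent from C1 contribute nothing
            acc[d_i] = acc.get(d_i, 0) + w * w_i      # U_i := U_i + w * w_i
    return heapq.nlargest(k, acc.items(), key=lambda item: item[1])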
Note that before the last d-cell in d is processed, all intermediate similarities between d and all documents
in C1 need to be saved. The amount of memory needed for such purpose is proportional to N 1 . Further
analysis can reveal that using the inverted file on C2 to process the join needs more memory space to store
intermediate similarities (the amount is proportional to -·N_2). In practice, only non-zero similarities need
to be saved. We use δ to denote the fraction of the similarities that are non-zero.
A straightforward way to process the join is to go through the above process for each document in C2
independently. That is, read in each document d in C2 in turn and while d is in the memory, read in
all inverted file entries on C1 corresponding to terms in d to process d (note that not all terms in d will
necessarily appear in C1). The problem with this straightforward method is its lack of coordination between
the processing of different documents in C2. As a result, if a term appears in K documents in C2, then
the inverted file entry of the term (assume that it also appears in C1) on C1 will be read in K times.
Algorithm HVNL is designed to re-use the inverted file entries that are read in the memory for processing
earlier documents to process later documents to save I/O cost. Due to space limitation, usually not all
inverted file entries read in earlier can be kept in the memory. Therefore, the algorithm also needs a policy
for replacing an inverted file entry in the memory by a new inverted file entry. Let the frequency of a term
in a collection be the number of documents containing the term. This is known as document frequency.
Document frequencies are stored for similarity computation in IR systems and no extra effort is needed to
get them. Our replacement policy chooses the inverted file entry whose corresponding term has the lowest
frequency in C2 to replace. This reduces the possibility of the replaced inverted file entry to be reused in
the future. To make the best use of the inverted file entries currently in the memory, when a new document
d1 in C2 is processed, terms in d1 whose corresponding inverted file entries are already in the memory are
considered first. This means that each newly read in document will be scanned twice in the memory. The
first scan is to find the terms whose corresponding inverted file entries are already in the memory and the
second scan is to process other terms. A list that contains the terms whose corresponding inverted file entries
are in the memory will be maintained. Note that when not all inverted file entries that are read in earlier
can be kept in the memory, it is still possible to read in an inverted file entry more than one time. Note
also that the worst case scenario for algorithm HVNL is that for each document in C2 under consideration,
none of its corresponding inverted file entries is currently in the memory. In this case, algorithm HVNL
deteriorates into the straightforward method.
We now present the algorithm HVNL:
For each document d in C2
{ For each term t in d
    If t also appears in C1
      If the inverted file entry of t on C1 (I_1^t) is in the memory
        accumulate similarities;
  For each term t in d
    If t also appears in C1
      If the inverted file entry of t on C1 (I_1^t) is not in the memory
      { If the available memory space can accommodate I_1^t
          read in I_1^t;
        Else
          find the inverted file entry in the memory with the lowest document
          frequency and replace it with I_1^t;
        accumulate similarities;
      }
  find the documents in C1 which have the - largest similarities with d;
}
For each inverted file, there is a B+tree which is used to find whether a term is in the collection and if
present where the corresponding inverted file entry is located.
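The replacement policy can be sketched as follows (a simplification in which the in-memory inverted file entries are kept in a dictionary and df2 gives the document frequency of each term in C2; all names are ours):

def ensure_entry_in_memory(t, cache, fetch_entry, entry_pages, free_pages, df2):
    # cache:       dict term -> inverted file entry of the term on C1 currently in memory
    # fetch_entry: callable reading an entry from disk (one random read of ceil(J_1) pages)
    # entry_pages: pages occupied by one entry
    # free_pages:  one-element list holding the remaining buffer size in pages
    # df2:         dict term -> document frequency of the term in C2
    if t in cache:
        return cache[t]
    while free_pages[0] < entry_pages and cache:
        # evict the entry whose term has the lowest document frequency in C2,
        # i.e. the entry least likely to be needed by later documents
        victim = min(cache, key=lambda term: df2.get(term, 0))
        del cache[victim]
        free_pages[0] += entry_pages
    cache[t] = fetch_entry(t)
    free_pages[0] -= entry_pages
    return cache[t]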
One possible way to improve the above algorithm is to improve the selection of the next document
to process. Intuitively, if we always choose an un-processed document in C2 whose terms' corresponding
inverted file entries on C1 have the largest intersection with those inverted file entries already in the memory
as the next document to process, then the likelihood of an inverted file entry already in the memory to
be reused can be increased. For example, consider three documents each with three terms: D1 = {t1, t2,
t3}, D2 = {t2, t3, t4} and D3 = {t3, t4, t5}. Suppose terms with smaller subscripts have lower document
frequencies. Suppose the memory buffer is only large enough to hold three inverted file entries. If D1, D2
and D3 are processed in the given order, then each inverted file entry needs to be read in exactly once.
However, if the processing order is D1, D3 and D2, then the inverted file entry corresponding to t2 will be
read in twice and all other inverted file entries will be read in exactly once. Clearly, for this example, order
{D1, D2, D3} is better than order {D1, D3, D2}.
An order is optimal if it incurs the minimum I/O cost. The question is whether an optimal order can be found
efficiently. Unfortunately, as shown by the proposition below, the problem of finding an optimal order is
NP-hard.
Proposition: The problem of finding an optimal order of documents in C2 so that the best performance
can be achieved is NP-hard.
Proof: It was shown in [11] that the following problem known as the Optimal Batch Integrity Assertion
Verification (OBIAV), which is to find an optimal order for verifying a set of integrity constraints and
verifying each such constraint requires a set of pages be brought in from secondary storage to the memory,
is an NP-hard problem. It can be seen that the optimal order problem in our case is essentially the same as
the optimal order problem in OBIAV because the following correspondences between the two problems can
be easily established: processing a document in C2 corresponds to verifying an integrity constraint; the need
to read in a set of inverted file entries for processing each document in C2 corresponds to the need to bring
in a set of pages for verifying each integrity constraint; and that an inverted file entry read in for processing one
document may be used for processing another document corresponds to the fact that a page brought in for verifying
one integrity constraint may be used for verifying another integrity constraint. Therefore, the optimal order
problem in our case is also NP-hard.
We decide not to pursue the issue of finding an optimal order further because in addition to its NP-hard
nature, there is another problem associated with any optimal order, that is, by reading in documents in any
order rather than their storage order, more expensive random I/Os will be incurred.
4.3 Algorithm VVM
Algorithm VVM uses inverted files on both collections to compute the similarities. The strength of this
algorithm is that it only needs to scan each inverted file once to compute similarities between every pair
of documents in the two collections regardless of the sizes of the two collections provided that the memory
space is large enough to accommodate intermediate similarity values. In this case, algorithm VVM can be
at least as good as algorithm HHNL because algorithm HHNL needs to scan each document collection at
least once and the size of the inverted file on a collection is about the same as the size of the collection
itself. Algorithm VVM tries to compute similarities between every pair of documents in the two collections
simultaneously, as a result, it needs to save the intermediate similarities. Thus, the memory requirement for
saving these similarities is proportional to N 1 N 2 (independent of the number of terms in each document),
which can be so large that algorithm VVM cannot be run at all. In summary, algorithm VVM is likely
to perform well for document collections that are large in size (such that none can be entirely held in the
memory) but small in number of documents. This is possible if each document has a large size. Another
situation that algorithm VVM may do well is when the vocabularies of the two document collections are
very different. For example, one collection is on medicine and the other is on computer science. In this case,
the number of non-zero similarities between documents in the two collections is likely to be small.
Algorithm VVM can be described as follows: we scan both inverted files on the two collections. During the
parallel scan, if two inverted file entries correspond to the same term, then invoke the similarity accumulating
process.
Recall that we assumed that inverted file entries are stored in ascending term numbers. Therefore, one
scan of each inverted file is sufficient (very much like the merge phase of merge sort). The similarity accumulating
process can be described as follows. Let I_1^t = {(r_1, u_1), ..., (r_m, u_m)} and I_2^t = {(s_1, v_1), ..., (s_n, v_n)} be the
two inverted file entries for the same term t on the two collections, respectively. After the two inverted file
entries are processed, the similarity between documents r_p and s_q as accumulated so far will be U_pq + u_p·v_q,
where U_pq is the accumulated similarity between r_p and s_q before t is considered, p = 1, ..., m, q = 1, ..., n.
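A sketch of this merge-style parallel scan, assuming both inverted files are given as lists of (term number, entry) pairs sorted by term number and that the memory can hold all intermediate similarities (names are ours):

def vvm_merge(inv1, inv2):
    # inv1, inv2: lists of (term number, entry) sorted by term number, where each entry
    # is a list of i-cells (document number, number of occurrences).
    # Returns {(r_p, s_q): similarity} for the document pairs with non-zero similarity.
    sims = {}
    i = j = 0
    while i < len(inv1) and j < len(inv2):
        t1, e1 = inv1[i]
        t2, e2 = inv2[j]
        if t1 == t2:                                   # same term: accumulate u_p * v_q
            for r_p, u_p in e1:
                for s_q, v_q in e2:
                    sims[(r_p, s_q)] = sims.get((r_p, s_q), 0) + u_p * v_q
            i += 1
            j += 1
        elif t1 < t2:
            i += 1
        else:
            j += 1
    return sims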
We can extend the above algorithm VVM as follows to tackle the problem of insufficient memory space
for all intermediate similarities. Suppose SM is the total number of pages needed to store the intermediate
similarities when all pairs of documents in the two collections are considered at the same time. Suppose
M is the available memory space for storing the intermediate similarities. If SM > M, divide collection C2
into dSM=Me subcollections and then compute the similarities between documents in each subcollection
and documents in C1, one subcollection at a time. Since for each such subcollection, one scan of the original
inverted files on both collections is needed, this extension incurs a cost which will be dSM=Me times higher
than that when the memory is large enough to hold all intermediate similarities. For a more detailed cost
analysis, see Section 5.3.
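As a small illustration of this extension (our own arithmetic, with SM taken from the requirement reported for Simulation 1 in Section 6 and an assumed memory size M):

from math import ceil

def vvm_passes(SM, M):
    # Number of subcollections of C2 (and hence the number of scans of both inverted
    # files) needed by the extended VVM when the SM pages of intermediate similarities
    # do not fit into the M memory pages reserved for them.
    return ceil(SM / M) if SM > M else 1

# SM = 952,031 pages of intermediate similarities, with an assumed M of 10,000 pages:
print(vvm_passes(952031, 10000))    # 96 scans of both inverted files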
5 I/O Cost Analysis
In this section, we provide analysis of the I/O cost of each algorithm presented in Section 4.
5.1 Algorithm HHNL
Let X be the number of documents in C2 that can be held in the memory buffer of size B as defined in
Section 4.1. Since for each X documents in C2, C1 needs to be scanned once, the total I/O cost of HHNL
can be estimated as

    hhs = D_2 + ⌈N_2/X⌉ · D_1,

where the first term is the cost of scanning C2 and the second term is the cost of scanning C1, and ⌈N_2/X⌉
is the number of times C1 needs to be scanned.
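In code form (a small helper of ours transcribing the formula above; the argument names follow the notation of Section 3):

from math import ceil

def hhs(D1, D2, N2, X):
    # Sequential-I/O cost of HHNL (forward order): scan C2 once, plus one scan of C1
    # for every memory load of X documents of C2.
    return D2 + ceil(N2 / X) * D1

# Example with WSJ as both collections: D1 = D2 = 40605 pages, N2 = 98736, X = 1000
print(hhs(40605, 40605, 98736, 1000))   # 40605 + 99 * 40605 = 4060500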
The above cost formula assumes that all I/Os are sequential I/Os (i.e., both C1 and C2 are sequentially
scanned in). This is reasonable only when each document collection is read by a dedicated drive with no
or little interference from other I/O requests. If this is not the case, then some of the I/Os may become
more costly random I/Os. We first consider the case when N_2 > X. The following interleaved I/O and CPU
patterns can be observed. After each X documents in C2 are read in, for each document d in C1 read in,
the CPU will take some time to compute the similarities between the X documents and d. When the CPU
is doing the computation, I/O resources may be allocated to other jobs. If this is the case, then the next
document from C1 will use a random I/O, so does the read-in of the next X documents in C2. In other
words, in the worst case, all documents in C1 will be read in using random I/O and for every X documents
in C2, there will be a random I/O. The number of actual random I/Os for scanning documents in C1 once
also depends on the document size and can be estimated as min{D_1, N_1} (when S_1 ≤ 1, D_1 should be used;
otherwise, N_1 should be used). Therefore, when N_2 > X, in the worst scenario, the total I/O cost can be
estimated as follows:
When N_2 ≤ X, the entire collection C2 can be scanned in sequentially and held in the memory, and
the remaining memory space can be used to hold documents in C1. Therefore, C1 can be
read in in ⌈D_1/(B − D_2)⌉ blocks and each block can be read in sequentially. In this case, we have
5.2 Algorithm HVNL
Recall that a B+tree is maintained for each document collection for quickly locating the inverted file entry of
any given term. The size of the B+tree can be estimated as follows. Typically, each cell in the B+tree occupies
9 bytes (3 for the term number, 4 for the address and 2 for the document frequency). If a document collection
has N terms, then the size of the B+tree is approximately 9*N/P (only the leaf nodes are considered). The
size is not terribly large. For example, for a document collection with 100,000 distinct terms, the B+tree
takes about 220 pages of size 4KB. We assume that the entire B+tree will be read in the memory when the
inverted file needs to be accessed and it incurs a one-time cost of reading in the B+tree.
Let X be the number of inverted file entries on C1 that can be held in the memory when the memory buffer
is fully used. In addition to X inverted file entries, the memory (size B) also needs to contain a document in
C2 of size ⌈S_2⌉, a B+tree of size Bt_1, the non-zero similarity values between the document in C2 currently
under processing and all documents in C1, and the list containing the terms whose corresponding inverted
file entries are in the main memory (size X·|t#|/P). Therefore, X can be estimated as follows:
    X = ⌊(B − ⌈S_2⌉ − Bt_1 − 4·δ·N_1/P) / (J_1 + |t#|/P)⌋.
If we assume that the read-in of the documents in C2 incurs sequential I/Os, then the I/O cost of HVNL
can be estimated as follows:
where the first case corresponds to the case when X is greater than or equal to the total number of inverted
file entries on C1 (i.e., T_1). In this case, we can either read in the entire inverted file on C1 in sequential
order (this corresponds to the first expression in min{}) or read in all inverted file entries needed to process
the query (the number is T_2·q) in random order (this corresponds to the second expression in min{}). The
memory is large enough to do this since X ≥ T_1; the second case corresponds to the case when
the memory is not large enough to hold all inverted file entries on C1 but is large enough to hold all of the
necessary inverted file entries; the last expression is for the case when the memory is not large enough to
hold all needed inverted file entries on C1. In this case, the second term is the cost of finding and reading in
the inverted file entries on C1 which correspond to the terms in documents in C2 until the memory is fully
occupied. Suppose the memory is just large enough to hold all the inverted file entries on C1 corresponding
to the terms in the first s − 1 documents in C2 and a fraction (X1) of the inverted file entries corresponding
to the terms in the s-th document in C2 (i.e., the inverted file entries on C1 corresponding to the terms in
the first s − 1 + X1 documents in C2 can be held in the memory). Let Y be the number of new inverted file
entries that need to be read in when a new document in C2 is processed after the memory is fully occupied.
Then the third term is the total cost of reading in new inverted file entries for processing the remaining
documents in C2. We now discuss how s, X1 and Y can be estimated. First, the number of distinct terms
in m documents in C2 can be estimated by a function f(m). Therefore, s is the smallest m
satisfying q·f(m) > X. Note that (X − q·f(s − 1)) is the number of inverted file entries that can
still be held in the memory after all the inverted file entries on C1 corresponding to the terms in the first
s − 1 documents in C2 have been read in, and q·(f(s) − f(s − 1)) is the number of new inverted
file entries that need to be read in when the s-th document in C2 is processed. Hence X1 can be estimated by
(X − q·f(s − 1)) / (q·(f(s) − f(s − 1))). Finally, Y can be estimated by q·(f(s + 1) − f(s)).
As discussed in Section 5.1, it is possible that some or all of the I/Os of reading in the documents in C2
are random I/Os due to other obligations of the I/O device. If, after the X inverted file entries are accommodated,
there is still more memory space left, then the remaining memory space can be used to sequentially scan in
multiple documents in C2 at a time. Based on this observation, when random I/Os are considered, the total
I/O cost of HVNL can be estimated as:
It would be easier to understand the above formula when compared with the formula for computing hvs.
In the first expression in min{}, the remaining memory space after all inverted file entries
are accommodated is used to read in multiple documents in C2 at a time.
With a slight modification of the similarity accumulation, C1 can be used as the outer collection to process
the query. In this case, the memory space needed to store intermediate similarities will be 4·-·δ·N_2/P. The
cost of the backward order can be estimated in the same way as in the case of the forward order.
5.3 Algorithm VVM
To avoid the much higher cost of random I/O's, we can simply scan both inverted files on the two collections.
During the parallel scan, if two inverted file entries correspond to the same term, then invoke the similarity
accumulating process. Recall that we assumed the inverted file entries are stored in ascending term numbers.
Therefore, one scan of each inverted file is sufficient to compute all similarities if the memory is large enough
to accommodate all intermediate similarities. Thus, if all the I/Os are sequential I/Os, the total I/O
cost of the algorithm VVM is

    vvs = I_1 + I_2.
Again, some or all of the I/Os could actually be random I/Os due to other obligations of the I/O device.
In the worst case scenario, i.e., all I/Os are random I/Os, the total I/O cost of the algorithm VVM can be
estimated as:
Algorithm VVM usually requires a very large memory space to save the intermediate similarity values.
If only non-zero similarities are stored, then the memory space for storing intermediate similarity values for
the algorithm VVM is 4·δ·N_1·N_2/P. When the memory space is not large enough to accommodate all
intermediate similarity values, a simple extension to the algorithm VVM can be made (see Section 4.3). In
this case, the total cost can be estimated by multiplying vvs (or vvr) by ⌈SM/M⌉, where SM = 4·δ·N_1·N_2/P
is the total number of pages needed to store the intermediate similarities when all pairs of documents in the
two collections are considered at the same time and M is the available memory space
for storing the intermediate similarities. Therefore, a more general formula for estimating the total I/O cost
when all the I/Os are sequential I/Os can be given below:
and a more general formula for estimating the total I/O cost when all the I/Os are random I/Os is:
5.4 Comparisons
Algorithm HHNL uses two document collections as the input. Each of the two document collections needs
to be scanned at least once, which constitutes the lower bound of the I/O cost of this algorithm. Algorithm
HHNL does not use any special data structures such as inverted files and B+trees. Thus, it is more
easily applicable and easier to implement. Since algorithm HHNL uses documents directly for similarity
computation, it benefits quite naturally from any possible reduction in the number of documents in either
one or both collections resulting from the evaluation of selection conditions on non-textual attributes of the
relevant relations. The memory space requirement of this algorithm for storing intermediate similarity values
is generally small compared with those of other algorithms.
Algorithm HVNL uses one document collection, one inverted file and the B+tree corresponding to the
inner collection as the input. While the document collection is always scanned once, the access to inverted
file entries is more complex. On the one hand, not all inverted file entries need to be read in. In fact, only
those inverted file entries whose corresponding terms also appear in the other document collection need to be
accessed. On the other hand, some inverted file entries may be read in many times due to their appearances
in multiple documents in C2 although effort is made by the algorithm to reuse inverted file entries currently
in the memory. It is expected that this algorithm can be very competitive in one of the following two
situations:
1. One of the document collections, say C2, is much smaller than the other collection. In this case, it is
likely that only a small fraction of all inverted file entries in the inverted file needs to be accessed. This
means that only a small portion of the document-term matrix corresponding to C1 will be accessed in
this case. In contrast, if algorithm HHNL is used, then the entire matrix needs to be accessed at least
once even when C2 can be held entirely in the memory.
When C2 contains only one document, this situation becomes an extreme case of processing a single
query against a document collection. As we have mentioned before, using the inverted file to process a
single query has been shown in IR to be superior to using documents directly. Note that an originally
large document collection may become small after conditions on attributes of the relevant relation are
evaluated.
2. For the collection where documents are used, close documents in storage order share many terms and
non-close documents share few terms. This increases the possibility of reusing inverted file entries in
the memory and reduces the possibility of re-reading in inverted file entries. This could happen when
the documents in the collection are clustered.
Algorithm HVNL accesses inverted file entries in random order. As such, it has two negative effects on the
I/O cost. One is that random I/Os are more expensive than sequential I/Os. The other is that even when an
inverted file entry occupies a small fraction of a page, the whole page containing the entry has to be read in.
In other words, if e is the size of an inverted file entry in pages, we need to read in ⌈e⌉ pages even if e is
very small, say 0.1. Therefore, when the size of each inverted file entry is close to an integer, the competitiveness of algorithm
HVNL will be increased. Algorithm HVNL uses primarily two data structures, one is the inverted file and
the other is the B+tree for the terms. One disadvantage of using the inverted file is that the size of the file
remains the same even if the number of documents in the corresponding document collection can be reduced
by a selection unless we construct another inverted file for the reduced set, which is highly unlikely due to
the cost involved. The memory space requirement of algorithm HVNL for storing intermediate similarities
is higher than that of algorithm HHNL but lower than that of algorithm VVM.
Algorithm VVM uses two inverted files as the input. As we discussed before, this algorithm has a
very nice one-scan property, namely, it only needs to scan each inverted file once to compute the similarities
regardless of the sizes of the two collections provided that the memory space is large enough to accommodate
intermediate similarity values. When the memory space is large enough to accommodate intermediate
similarity values, algorithm VVM can be at least as efficient as algorithm HHNL as far as I/O cost is
concerned. The major drawback of algorithm VVM is that it needs a very large memory space to save the
intermediate similarities. There are two situations that algorithm VVM is likely to perform well. The first is
when the document collections are large in size but small in number of documents. The second is when the
vocabularies of the two document collections are very different. In both of the two situations, the number of
non-zero similarities between documents in the two collections is likely to be small. Another disadvantage of
algorithm VVM is that the sizes of the inverted files will remain the same even if the number of documents
in the corresponding document collections can be reduced.
6 Simulation Results
Due to the large number of parameters in the cost formulas of the algorithms presented, it is very difficult
to compare the performance of these algorithms based on these formulas directly. In this section, the
algorithms are compared based on simulation results computed from the cost formulas derived in Section
5. Our objective is to identify the impact of the variations of the parameters on the algorithms. In other
words, we would like to find out in what situation an algorithm performs the best.
The statistics of three document collections which were collected by ARPA/NIST [8], namely, WSJ (Wall
Street Journal), FR (Federal Register) and DOE (Department of Energy), are used in our simulation. The
statistics of these collections are shown in Table 1 (the last three rows are estimated by us based on the first three rows).
Among the three document collections, FR has fewer but larger documents and DOE has more but
smaller documents. The number of documents in WSJ lies between those of FR and DOE. So is the average
size of documents in WSJ.
For all simulations, the page size P is fixed at 4KB, the fraction of the similarities that are non-zero, δ,
is fixed at 0.1, and - is fixed at 20 (note that only algorithm HHNL and the backward order of algorithm
HVNL involve - and neither is really sensitive to - if it is not very large, say in the hundreds). The probability
q is computed as follows:
                                          WSJ      FR       DOE
#documents                                98736    26207    226087
#terms per doc                            329      1017     89
total # of distinct terms                 156298   126258   186225
collection size in pages                  40605    33315    25152
avg. size of a document (in pages)        0.41     1.27     0.111
avg. size of an inv. file entry (in pages) 0.26    0.264    0.135

Table 1: Statistical information of several document collections
The formula says that, given the number of distinct terms in C2 (i.e., T 2 ), the smaller the number of distinct
terms in C1, T_1, is, the smaller the probability that a term in C2 also appears in C1 will be; and when T_1 is
much larger than T_2, then q will become closer to 1; otherwise, q is 0.8. Probability p can be
computed in a similar manner.
For parameters B (memory size) and α, we assign a base value to each.
When the impact of a parameter is studied, we vary the values of that parameter while letting the other parameter
use its base value.
We present the following five groups of simulation results.
Group 1: In this group, a real collection will be used as both collection C1 and collection C2. Since there
are three real collections (WSJ, FR and DOE) and two parameters (B and α), six simulation results
will be collected.
Group 2: In this group, different real collections will be used as C1 and C2. B will vary while α will use
its base value. From the three real collections, six simulations can be designed.
Group 3: In this group, while C1 and C2 will continue to use real collections, only a small number of
documents in C2 will be used to participate in the join. These experiments are used to investigate the
impact of local selections. All simulations in this group use only the base values of the two parameters.
Since there are three real collections, three simulation results will be collected in this group.
Group 4: In this group, C1 again will continue to use real collections, but C2 will be collections with only
a small number of documents. The difference between Group 3 and Group 4 is that the former uses
a small number of documents (in C2) from an originally large collection C2 and the latter uses an
originally small collection C2. This difference has the following impacts on the cost: (1) documents in
C2 need to be read in randomly by the former but can still be read in sequentially by the latter; and
(2) the size of the inverted file and the size of the B+tree on collection C2 for the former are computed
based on the original collection, not just the documents used. This will have an impact on the cost of
algorithm VVM. In our experiments, after a real collection is chosen to be C1, C2 will be derived from
C1. Again, all simulations in this group use only the base values of the two parameters. Since there
are three real collections, three simulation results will be collected in this group.
Group 5: In this group, both collection C1 and collection C2 will use new collections but they will remain
identical. Each new collection is derived from a real collection by reducing the number of documents
in the real collection and increasing the number of terms in each document in the real collection by
the same factor such that the collection size remains the same. The simulations in this group
are especially aimed at observing the behavior of algorithm VVM. Again, only the base values of the
two parameters will be used and three simulation results will be collected in this group since there are
three real collections.
For space consideration, the simulation results for the backward order approach will not be presented.
Notice that the backward order approach makes a difference only when HHNL and HVNL are used (see the
discussions in Section 4.1 and Section 5.2). Compared with the forward order, the backward order requires
more memory space to store intermediate similarities. As a result, the backward order with outer collection
A1 and inner collection A2 incurs a somewhat higher cost than the forward order with outer collection B1
and inner collection B2 when A1 and B1 are the same collection and A2 and B2 are the same collection.
For all the figures in this section, a value k on y-axis is equivalent to 10^k sequential page I/Os. For Figures
1, 3 and 4, each unit on x-axis is equivalent to 10,000 pages.
Simulation results in Group 1
The following simulations are conducted in this group:
Simulation 1: C1 = C2 = WSJ; B changes from 10000 to 50000 with an increment of 5000
Simulation 2: C1 = C2 = FR; B changes from 10000 to 50000 with an increment of 5000
Simulation 3: C1 = C2 = DOE; B changes from 10000 to 50000 with an increment of 5000
Simulation 4: C1 = C2 = WSJ; α changes from 3 to 10 with an increment of 1
Simulation 5: C1 = C2 = FR; α changes from 3 to 10 with an increment of 1
Simulation 6: C1 = C2 = DOE; α changes from 3 to 10 with an increment of 1
The following observations can be made from the result of simulation 1 (see Figure 1).
1. Algorithm HHNL outperforms the other two algorithms, especially when B is small.
2. There are several reasons that algorithm HVNL performs poorly. First, the outer document collection
has too many documents (N_2 = 98,736), which causes repeated read-ins of many inverted file entries
on C1. Second, algorithm HVNL requires more random I/Os. Third, both S 2 and J 1 are not close to
integers and as a result, for each document or inverted file entry read in, algorithm HVNL incurs more
than twice as much cost as that by algorithm HHNL (1 versus 0.41 for document and 1 versus 0.26 for
inverted file entry).
3. The main reason that algorithm VVM performs very poorly is because the memory requirement for
storing intermediate similarities (952,031 pages) is much greater than the available memory. As a
result, many scans of the two inverted files are needed to process the join.
4. All algorithms perform better with larger available memory. When B becomes large enough, one document
collection or an inverted file can be held in the memory in its entirety. When this happens, algorithm
HHNL and algorithm HVNL have very similar performances since in this case, algorithm HHNL scans
each of the two document collections once and algorithm HVNL scans one document collection and
one inverted file which has the same size as a document collection.
Similar observations as made from the result of simulation 1 can also be made from the results of
simulation 2 and simulation 3 (not shown). Relatively speaking, the performance of algorithm VVM in
simulation 2 has the largest improvement due to the larger size of documents and fewer number of documents.
However, the memory requirement for storing intermediate similarities in this case (67,071 pages) is still too
large for the available memory to handle and at least two scans of the two inverted files are used to process
the join. Not surprisingly, the relative performance of algorithm VVM in simulation 3 has become much
worse due to the smaller size of documents and larger number of documents.
Figure 1: Result of Simulation 1

Figure 2: Result of Simulation 4
The following observations can be made from the result of simulation 4.
1. Algorithm HHNL is the best performer among the three algorithms.
2. hhs and vvs are independent of α because they involve no random I/Os.
3. Others become worse when α increases.
4. Algorithm HVNL is more sensitive to larger α.
Similar observations can be made from the results of simulation 5 and 6.
Simulation results in Group 2
In this group, different real collections will be used as C1 and C2 and the base values will be used for B
and α. From the three real collections, the following six simulations can be designed.
Simulation 7: C1 = WSJ, C2 = FR; B changes from 10000 to 50000 with an increment of 5000
Simulation 8: C1 = FR, C2 = WSJ; B changes from 10000 to 50000 with an increment of 5000
Simulation 9: B changes from 10000 to 50000 with an increment of 5000
Simulation 10: B changes from 10000 to 50000 with an increment of 5000
Simulation 11: B changes from 10000 to 50000 with an increment of 5000
Simulation 12: B changes from 10000 to 50000 with an increment of 5000
Comparing the result of simulation 7 with the result of simulation 8 (see Figures 3 and 4), the following
observations can be made.
1. While algorithm HHNL is the best performer in simulation 7, algorithm HVNL sometimes beats HHNL
in simulation 8. The reason is that while algorithm HHNL lets the outer collection use as much memory
space as possible, algorithm HVNL lets the inner collection use as much memory space as possible.
For example, consider Figure 4 when B is 35,000 or larger (3.5 or more in the figure). In this case, the entire
inverted file on FR can be held in the memory. As a result, when algorithm HVNL is used, only one
scan of WSJ and the inverted file on FR is needed to process the join. However, when algorithm HHNL
is used, the memory is not large enough to hold the entire outer collection WSJ. As a result, one scan
of WSJ and two scans of the inverted file on FR are needed to process the join when algorithm HHNL
is used.
2. There is no change on the cost of algorithm VVM because it is completely symmetric to the two
document collections.
3. When none of the two collections can be entirely held in the memory, we get mixed results for algorithm
HHNL, that is, sometimes, it has better result in simulation 7 than that in simulation 8, but sometimes
the opposite is true. When only the smaller collection can be held in the memory, better performance
can be achieved using the smaller collection as the outer collection. This is the reason that algorithm
HHNL has better result in simulation 7 than that in simulation 8 when B becomes 35,000 or larger. This
observation also supports our earlier argument in Section 4.1 that the backward order can outperform
the forward order if the backward order implies a much smaller outer collection.
4. The situation for algorithm HHNL is reversed for algorithm HVNL. The reason is that while algorithm
HHNL lets the outer collection use as much memory space as possible, algorithm HVNL lets the inner
collection use as much memory space as possible.
Similar observations made above between the result of simulation 7 and the result of simulation 8 can
also be made between the result of simulation 9 and the result of simulation 10, as well as between the result
of simulation 11 and the result of simulation 12 (the results of simulations 9 - 12 are not shown).
Simulation results in Group 3
In this group, C1 and C2 will continue to be real collections but only a small number of documents in
C2 will be used to participate in the join. Let M be the number of such documents in C2. Since M << N_2,
we should read each of the M documents individually in random order. As a result, the cost of reading in
the M documents will be M·⌈S_2⌉·α. Based on this, we have the following new formula for hhs:
Figure 3: Result of Simulation 7

Figure 4: Result of Simulation 8
Since M is small, it is likely that all of the M documents in C2 can be held in the memory. In addition,
the remaining memory space can be used to read in as many documents in C1 as possible.
As a result, we have the following formula for hhr:
To compute hvs and hvr, we need to estimate the number of distinct terms in the M documents. This
number can be estimated by f(M). The cost formula for hvs is the same as that
in Section 5.2 except that D_2 is replaced by M·⌈S_2⌉·α and T_2 is replaced by f(M). Let this new formula be
denoted by (HVS2). Since all I/Os in hvs have become random I/Os, the same formula applies to hvr.
The cost formulas for vvs and vvr remain the same. However, the memory requirement for storing the
intermediate similarities is now reduced to 4·δ·N_1·M/P. Other quantities such as the size of inverted
file entries and the size of the B+tree on collection C2 remain as before.
The following three simulations are carried out:
Simulation 13: C1 = C2 = WSJ; M changes from 5 to 50 with an increment of 5
Simulation 14: C1 = C2 = FR; M changes from 5 to 50 with an increment of 5
Simulation 15: C1 = C2 = DOE; M changes from 5 to 50 with an increment of 5
The following observations can be obtained from the result of simulation 13 (see Figure 5).
1. When M is very small (≤ 30), algorithm HVNL outperforms others as expected. Algorithm HHNL
becomes the best performer when M becomes larger.
2. Since M is so small, the M documents can easily fit into the memory. As a result, algorithm HHNL
requires only one scan of the inner document collection in addition to reading in the M documents
from the outer collection.
3. In this case, the memory is able to accommodate all intermediate similarities for algorithm VVM. The
reason that algorithm VVM incurs a much higher cost than algorithm HHNL is that the size of the
inverted file on collection C2 did not change although only a small number of documents in C2 are
used in the join.
Figure 5: Result of Simulation 13

Figure 6: Result of Simulation 16
Comparing the result of simulation 14 (not shown) with the result of simulation 13, a noticeable difference
is that the relative performance of algorithm HVNL deteriorated - algorithm HVNL becomes worse than
algorithm HHNL before M reaches 10. This is because each document in FR contains many more terms than
each document in WSJ and therefore more inverted file entries need to be read in by algorithm HVNL for
processing a document in FR.
Comparing the result of simulation 15 (not shown) with the result of simulation 13, a noticeable difference
is that the relative performance of algorithm HVNL is improved - algorithm HVNL outperforms algorithm
HHNL even after M reaches 50. This is because each document in DOE contains many fewer terms than
each document in WSJ and therefore fewer inverted file entries need to be read in by algorithm HVNL for
processing a document in DOE.
For space consideration, we do not present simulation results for situations when the numbers of documents
in both collections are reduced by selections. However, it is not difficult to see that, compared with the
situation when only one collection is reduced, algorithm HHNL will benefit the most when both collections
are reduced.
Simulation results in Group 4
In this group, C1 will continue to use real collections, but C2 will be collections with only a small number
of documents. Since we do not have real collections that contain a small number of documents, we derive
such a collection from a real collection. This turns out to be quite easy. From a given document collection,
we first keep its document size and then decide the number of documents we want in the new collection.
From this number, say M, the number of distinct terms in the new collection can be computed by f(M).
Then all key statistics of the new collection become available. With these statistics, the cost formulas in
Section 5 can be used to find the cost of each algorithm.
The following three simulations are conducted in the group:
Simulation 16: C1 = WSJ, C2 is derived from WSJ; M changes from 5 to 50 with an increment of 5
Simulation 17: C1 = FR, C2 is derived from FR; M changes from 5 to 50 with an increment of 5
Simulation 18: C1 = DOE, C2 is derived from DOE; M changes from 5 to 40 with an increment of 5
Comparing the result of simulation 16 (see Figure 6) with that of simulation 13 (see Figure 5), the
following observations can be made.
1. There is little change for algorithm HHNL. Since M is so small, reading in the M documents sequentially
or randomly makes little difference.
2. Algorithm HVNL degraded somewhat. This is the effect of q - the probability that a term in collection
C2 also appears in collection C1. In Simulation 13, q is computed based on the original T_1 and T_2.
In Simulation 16, q is computed based on the original T_1 and
the new f(M). Since f(M) is much smaller than T_1, q values between 0.92 and 0.99 are computed using the
formula. Higher q values imply more inverted file entries on collection C1 need to be read in and as a
result, the performance of algorithm HVNL is down.
3. The cost of algorithm VVM is reduced substantially. The main reason behind the reduction is the
reduction of the size of the inverted file on C2. In Simulation 13, the size is computed based on the
original C2, but in Simulation 16, the size is computed based on the reduced collection.
Similar observations as above can be made for simulation 17 and simulation 18.
Simulation results in Group 5
In this group, both C1 and C2 will use new collections but they will remain identical. Each new
collection is derived from a real collection by reducing the number of documents and increasing the number
of terms in each document in the real collection by the same factor F to ensure that the collection size
remains the same.
The following three simulations are carried out:
Simulation 19: C1 and C2 are derived from WSJ, α = 5; the decreasing (increasing) factor F changes
from 1 to 13 with an increment of 2
Simulation 20: C1 and C2 are derived from FR, α = 5; the decreasing (increasing) factor F changes
from 1 to 5 with an increment of 1
Simulation 21: C1 and C2 are derived from DOE, α = 5; the decreasing (increasing) factor F
changes from 1 to 28 with an increment of 3
The following observations can be made from the result of simulation 19 (see Figure 7).
1. When factor F is small (<= 5), algorithm HHNL outperforms other algorithms. However, when F is 7
or larger, the sequential version of algorithm VVM (i.e., vvs) becomes the best performer.
2. vvs decreases rapidly as F increases, as expected. When F reaches 11, all intermediate similarities can
be held in the memory. As a result, vvs reaches its lower bound: each inverted file is scanned once.
At this point, the number of documents in the collection has been reduced to 8,976 and the number of terms
in each document has become 3,619.
3. hvs and hvr are insensitive to the changes.
4. hhr decreases as F increases. This is because as F increases, the number of documents in C1 decreases.
Since the number of random I/Os is bounded by the number of documents in C1, hhr decreases
as a result.
Similar observations as for Simulation 19 can be made for Simulation 20 with the only difference that vvs
reaches its minimum faster for the latter. The reason is that the number of documents in FR is originally
much smaller than that in WSJ. Again, similar observations as for Simulation 19 can be made for Simulation
21 with the only difference that vvs reaches its minimum slower for the latter. The reason is that the number
of documents in DOE is originally much larger than that in WSJ.
6.1 Summary of the Simulation Results
The following main points can be summarized from the above extensive simulations.
[Figure 7: Result of Simulation 19. I/O cost versus factor F for algorithms hhs, hvr, and vvs.]
1. The cost of one algorithm under one situation can differ drastically from that of another algorithm
under the same situation. For example, in Simulation 1, Algorithm HVNL incurs a cost which is about
4,000 times higher than that of Algorithm HHNL when the memory buffer is small, while
in Simulation 13, the cost incurred by Algorithm HHNL is more than 5 times higher than that incurred by
Algorithm HVNL. As a result, it is important to choose an appropriate algorithm for a given situation.
2. If the number of documents in one of the two document collections, say M, is originally very small
or becomes very small after a selection, then algorithm HVNL has a very good chance of outperforming
other algorithms. Although how small M needs to be depends mainly on the number of
terms in each document in the outer collection, M is likely to be bounded by 100 (it is 70 for simulation
15).
3. If the number of documents in each of the two collections is not very large (roughly N 1 ...)
and both document collections are large enough that neither can be entirely held in the memory, then
algorithm VVM (the sequential version) can outperform other algorithms.
4. For most other cases, the simple algorithm HHNL performs very well.
5. The costs of the random versions of these algorithms depict the worst-case scenario, when the I/O
devices are busy satisfying different obligations at the same time. Except for algorithm VVM, these
costs have no impact on the ranking of these algorithms.
Overall, the simulation results match well with our analysis in Section 5.4.
6.2 An Integrated Algorithm
Since no one algorithm is definitely better than all other algorithms in all circumstances, it is desirable to construct
an integrated algorithm that can automatically determine which algorithm to use, given the statistics
of the two collections and the query parameters (e.g., the
selectivities of predicates on non-textual attributes). This integrated algorithm can be sketched as follows:
If none of the two collections has an inverted file /* in this case, only HHNL can be used */
{ compute hhs using formula (HHS1);
  compute bhhs; /* the counterpart of hhs when the backward order is used (formula not shown) */
  If hhs <= bhhs, use the forward order of HHNL;
  Else use the backward order of HHNL;
}
If only one collection has an inverted file /* only HHNL and HVNL can be used in this case */
{ If there is no selection
  { compute hhs using formula (HHS1);
    compute bhhs;
    compute hvs using formula (HVS1);
    compute bhvs; /* the counterpart of hvs when the backward order is used (formula not shown) */
  }
  Else
  { estimate the number of documents that can participate in the join using the selectivities;
    compute hhs using formula (HHS2);
    compute bhhs;
    compute hvs using formula (HVS2);
    compute bhvs;
  }
  use the algorithm with the lowest estimated cost;
}
If both collections have inverted files
{ If there is no selection
  { compute hhs using formula (HHS1);
    compute bhhs;
    compute hvs using formula (HVS1);
    compute bhvs; /* the counterpart of hvs when the backward order is used (formula not shown) */
    compute vvs using formula (VVS);
  }
  Else
  { estimate the number of documents that can participate in the join using the selectivities;
    compute hhs using formula (HHS2);
    compute bhhs;
    compute hvs using formula (HVS2);
    compute bhvs;
    compute vvs using formula (VVS);
  }
  use the algorithm with the lowest estimated cost;
}
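The selection step of this integrated algorithm can be made concrete with a small sketch. The following Python fragment is only an illustration, not the implementation used in this paper: the cost functions hhs, bhhs, hvs, bhvs, and vvs are assumed to be callables standing in for the corresponding cost formulas (HHS1/HHS2, HVS1/HVS2, VVS), and the names and signatures are ours.

def choose_algorithm(stats, has_inv1, has_inv2, has_selection,
                     hhs, bhhs, hvs, bhvs, vvs):
    # Pick the basic algorithm (and scan order) with the lowest estimated I/O cost.
    # stats: collection statistics used by the cost formulas.
    # has_inv1 / has_inv2: whether C1 / C2 has an inverted file.
    # has_selection: whether selections on non-textual attributes are present.
    candidates = {}
    # HHNL is always applicable; both scan orders are costed.
    candidates["HHNL-forward"] = hhs(stats, has_selection)
    candidates["HHNL-backward"] = bhhs(stats, has_selection)
    if has_inv1 or has_inv2:
        # HVNL needs an inverted file on at least one collection.
        candidates["HVNL-forward"] = hvs(stats, has_selection)
        candidates["HVNL-backward"] = bhvs(stats, has_selection)
    if has_inv1 and has_inv2:
        # VVM needs inverted files on both collections.
        candidates["VVM"] = vvs(stats, has_selection)
    # Return the cheapest candidate and its estimated cost.
    return min(candidates.items(), key=lambda kv: kv[1])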
7 Concluding Remarks
In this paper, we presented and analyzed three algorithms for processing joins between attributes of textual
type. From analysis and simulation, we identified, for each algorithm, the type of input document collections
with which the algorithm is likely to perform well. More specifically, we found that algorithm HVNL can be
very competitive only when the number of documents in one of the two document collections is/becomes very
small, and algorithm VVM can perform very well when the number of documents in each of the two collections
is not very large and both document collections are so large that neither can be entirely held in the memory.
In other cases, algorithm HHNL is likely to be the top performer. Since no one algorithm is definitely better
than all other algorithms, we proposed the idea of constructing an integrated algorithm consisting of the
basic algorithms such that a particular basic algorithm is invoked if it has the lowest estimated cost. We
also indicated that the standardization of term numbers will be very useful in multidatabase environments.
Further studies in this area include (1) investigating the impact of the availability of clusters on the
performance of each algorithm; (2) developing cost formulas that include CPU cost and communication cost;
(3) developing algorithms that process textual joins in parallel; and (4) conducting more detailed simulations and
experiments.
Acknowledgments
We would like to thank the anonymous reviewers for their valuable suggestions to improve the paper.
This research is supported in part by the following grants: NSF grants under IRI-9309225 and IRI-9509253,
Air Force under AFOSR 93-1-0059, NASA under NAGW-4080 and ARO under BMDO grant DAAH04-0024.
--R
"An ADT Approach to Full Text"
"Query Processing in a System for Distributed Databases (SDD-1)"
"Automatic Retrieval with Locality Information Using Smart"
"View Definition and Generalization for Database Integration in a Multi-database system"
"Query Optimization in Heterogeneous Databases"
"Automating the Assignment of Submitted Manuscripts to Reviewers"
"File Organization: The Consecutive Retrieval Property"
"Overview of the First Text Retrieval Conference"
"Interoperability of Multiple Autonomous Databases"
"Introduction to Combinatorial Mathematics"
"A Scheme for Batch Verification of Integrity Assertions in a Database System"
"Query Processing in Multidatabase Systems"
"A Theory of Translation from Relational Queries to Hierarchical Queries"
"Introduction to Modern Information Retrieval"
"Design of an Integrated Information Retrieval/Database Management System"
"Federated Database Systems for Managing Distributed, Heterogeneous, and Autonomous Databases"
"On the Consecutive-Retrieval Problem"
"Incremental Updates of Inverted Lists for Text Document Retrieval"
"Translation of Object-Oriented Queries to Relational Queries"
--TR
--CTR
Nikos Mamoulis, Efficient processing of joins on set-valued attributes, Proceedings of the ACM SIGMOD international conference on Management of data, June 09-12, 2003, San Diego, California
Gültekin Özsoyoğlu, Ismail Sengör Altingövde, Abdullah Al-Hamdani, Selma Ayşe Özel, Özgür Ulusoy, Zehra Meral Özsoyoğlu, Querying web metadata: Native score management and text support in databases, ACM Transactions on Database Systems (TODS), v.29 n.4, p.581-634, December 2004 | textual database;join algorithm;multidatabase;information retrieval;query processing |
627933 | Declustering and Load-Balancing Methods for Parallelizing Geographic Information Systems. | Declustering and load-balancing are important issues in designing a high-performance geographic information system (HPGIS), which is a central component of many interactive applications (such as real-time terrain visualization). The current literature provides efficient methods for declustering spatial point-data. However, there has been little work toward developing efficient declustering methods for collections of extended objects, like chains of line-segments and polygons. In this paper, we focus on the data-partitioning approach to parallelizing GIS operations. We provide a framework for declustering collections of extended spatial objects by identifying the following key issues: 1) the work-load metric, 2) the spatial-extent of the work-load, 3) the distribution of the work-load over the spatial-extent, and 4) the declustering method. We identify and experimentally evaluate alternatives for each of these issues. In addition, we also provide a framework for dynamically balancing the load between different processors. We experimentally evaluate the proposed declustering and load-balancing methods on a distributed memory MIMD machine (Cray T3D). Experimental results show that the spatial-extent and the work-load metric are important issues in developing a declustering method. Experiments also show that the replication of data is usually needed to facilitate dynamic load-balancing, since the cost of local processing is often less than the cost of data transfer for extended spatial objects. In addition, we also show that the effectiveness of dynamic load-balancing techniques can be improved by using declustering methods to determine the subsets of spatial objects to be transferred during runtime. | Introduction
A high performance geographic information system (HPGIS) is a central component of many interactive
applications like real-time terrain visualization, situation assessment, and spatial decision making. The
geographic information system (GIS) often contains large amounts of geometric and feature data (e.g.
location, elevation, soil type, etc.) represented as large sets of points, chains of line-segments, and
polygons. This data is often accessed via range queries and map-overlay queries. The existing sequential
methods for supporting the GIS operations do not meet the real-time requirements imposed by many
interactive applications. Hence, parallelization of GIS is essential in meeting the high performance
requirements of several real-time applications.
A GIS operation can be parallelized either by function-partitioning [2, 3, 5, 30] or by data-
partitioning [4, 8, 13, 17, 19, 25, 32, 33]. Function-Partitioning uses specialized data structures (e.g.
distributed data structures) and algorithms which may be different from their sequential counterparts.
Data-Partitioning techniques divide the data among different processors and independently execute the
sequential algorithm on each processor. Data-Partitioning in turn is achieved by declustering [11, 27]
the spatial data. If the static declustering methods fail to equally distribute the load among different
processors, the load-balance may be improved by redistributing parts of the data to idle processors using
dynamic load-balancing (DLB) techniques. In this paper, we focus on parallelizing a range-query
operation for GIS data using the data-partitioning approach.
1.1 Application Domain: Real-Time Terrain Visualization
A real-time terrain-visualization system is an environment that lets users navigate and interact with a
three-dimensional computer generated geographic environment in real-time, like other virtual environments
[16], visualization systems [28], and distributed interactive simulation systems [1]. This type of
system has three major components: interaction, 3-D graphics, and GIS. Figure 1 shows the different
components of a terrain visualization system for a typical flight simulator. The HPGIS component of the
system contains a secondary storage unit for storing the entire geographic database and a main memory
for storing the data related to the current location of the simulator. The graphics engine receives the
spatial data from the HPGIS component and transforms these data into 3-D objects which are then sent
to the display unit.
As the user moves over the terrain, the part of the map that is visible to the user changes over time,
and the graphics engine has to be fed with the visible subset of spatial objects for a given location and
user's viewport. The graphics engine transforms the user's viewport into a range query and sends it to
the HPGIS unit. For example, Figure 2 shows a polygonal map and a range query. Polygons in the map
are shown with dotted lines. The range query is represented by the rectangle, and the result of the range
query is shown in solid lines. The HPGIS unit retrieves the visible subset of spatial data from the main
memory and computes their geometric intersection with the current viewport of the user and sends the
results back to the graphics engine. The frequency of this operation depends on the speed at which the
user is moving over the terrain. For example, in the terrain visualization of a flight simulator, a new range
query may be generated twice a second, which leaves less than half a second for intersection computation.
A typical map used in this application contains tens of thousands of polygons (i.e., millions of edges),
and the range-query size can be 20-30% of the total map. This requires millions of intersection-point
computations in less than half a second. In order to meet such response-time constraints, HPGIS often
caches a subset of spatial data in main memory. The main-memory database may in turn query the
secondary-storage database to get a subset of data to be cached. The frequency of this operation should
be very small for the caching to be effective.
[Figure 1: Components of the Terrain-Visualization System: the High Performance GIS component (a secondary-storage database and a main-memory database), the graphics engine, and the display. Range queries (8km x 8km) are issued at about 2 per second, and the display is refreshed at about 30 frames per second.]
[Figure 2: A sample polygonal map and a range-query.]
1.2 Problem Formulation
The range-query problem for the GIS can be stated as follows: Given a rectangular query box B and a
set SP of extended spatial objects (e.g., polygons, chains of line segments), the result of a range query
over SP is given by the set { B ⊗ P | P ∈ SP },
where ⊗ gives the geometric intersection of two
extended objects. We call this problem the GIS-range-query problem. The GIS-range-query problem has
three main components: (i) Approximate filtering at the polygon level, (ii) Intersection computations,
and (iii) Polygonization of the result. (See [29] for a detailed discussion of a sequential algorithm.) Note
that this problem is different from the traditional range query, where the objects in the given range are
retrieved from secondary memory (disk) to main memory without clipping the objects, but it is similar
to the polygon-clipping problem [26] in computer graphics.
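To make the operation concrete, the following Python sketch filters polygons by their bounding boxes and then clips the survivors against the query box. It is only an illustration under simplifying assumptions (the classic Sutherland-Hodgman clipping is used, which is exact for convex polygons); the paper's own sequential algorithm is the one described in [29].

def clip_to_box(polygon, xmin, ymin, xmax, ymax):
    # Clip a polygon (list of (x, y) vertices) to an axis-aligned box using
    # Sutherland-Hodgman clipping against the four box boundaries.
    def clip_edge(points, inside, intersect):
        out = []
        for i, cur in enumerate(points):
            prev = points[i - 1]
            if inside(cur):
                if not inside(prev):
                    out.append(intersect(prev, cur))
                out.append(cur)
            elif inside(prev):
                out.append(intersect(prev, cur))
        return out

    def x_cross(p, q, x):          # intersection with a vertical boundary x = const
        t = (x - p[0]) / (q[0] - p[0])
        return (x, p[1] + t * (q[1] - p[1]))

    def y_cross(p, q, y):          # intersection with a horizontal boundary y = const
        t = (y - p[1]) / (q[1] - p[1])
        return (p[0] + t * (q[0] - p[0]), y)

    result = polygon
    for inside, intersect in [
        (lambda p: p[0] >= xmin, lambda p, q: x_cross(p, q, xmin)),
        (lambda p: p[0] <= xmax, lambda p, q: x_cross(p, q, xmax)),
        (lambda p: p[1] >= ymin, lambda p, q: y_cross(p, q, ymin)),
        (lambda p: p[1] <= ymax, lambda p, q: y_cross(p, q, ymax)),
    ]:
        if not result:
            break
        result = clip_edge(result, inside, intersect)
    return result

def range_query(polygons, bboxes, box):
    # Approximate filtering on bounding boxes, then exact clipping.
    xmin, ymin, xmax, ymax = box
    answer = []
    for poly, (bxmin, bymin, bxmax, bymax) in zip(polygons, bboxes):
        if bxmax < xmin or bxmin > xmax or bymax < ymin or bymin > ymax:
            continue                      # bounding box disjoint from the query box
        clipped = clip_to_box(poly, xmin, ymin, xmax, ymax)
        if clipped:
            answer.append(clipped)
    return answer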
The existing sequential solutions [6, 15, 31] for the range-query problem cannot always be directly
used as a solution to the GIS-range-query problem, due to the high performance requirements of many
applications. For example, the limit on response time (i.e. half a second, as shown in Figure 1) for
solving the GIS-range-query problem allows the processing of maps with no more than 1500 polygons
(or 100,000 edges) on many of the latest processors available today, like the IBM RS6000/590 and DEC-
Alpha (150Hz) processors. However, the maps used in many HPGIS applications are at least an order
of magnitude larger than these simple maps. Hence we need to consider parallel processing to deliver
the required performance.
In this paper, we focus on parallelizing the GIS-range-query problem over a set of processors to
meet the high performance requirements imposed by a typical HPGIS application. The goal of the
parallelization is to achieve the minimum possible response time for a set of range queries. We use data-
partitioning with declustering and dynamic load-balancing for parallelizing a sequential algorithm for the
GIS-range-query problem. Figure 3 describes the steps in this scheme. The bounding box is initially
broadcast to all processors. Each processor then executes the sequential GIS-range-query algorithm on
the local set of polygons. After processing the local data, a processor checks for any load imbalances
and seeks more work from another processor which has not yet finished its work. DLB methods are used
for transferring the work between processors during run-time.
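A minimal sketch of this broadcast-and-process loop, written with mpi4py, is given below. It is an assumption-laden illustration rather than the actual implementation: range_query stands for the sequential GIS-range-query routine (e.g., the sketch above), the declustered subset is assumed to already be resident at each processor, and the dynamic load-balancing step is only marked by a comment.

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

def serve_range_queries(my_polygons, my_bboxes, query_stream):
    while True:
        # The leader (rank 0) obtains the next query box and broadcasts it.
        box = next(query_stream, None) if rank == 0 else None
        box = comm.bcast(box, root=0)
        if box is None:
            break                       # no more range queries
        local_result = range_query(my_polygons, my_bboxes, box)
        # Dynamic load-balancing (transferring work to idle processors) would go here.
        comm.Barrier()                  # all processors synchronize before the next query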
[Figure 3: Different modules of the parallel formulations: for each bounding box, every processor performs approximate filtering, intersection computation, polygonization of the result, and dynamic load-balancing (DLB) before fetching the next bounding box.]
1.3 Related work and Our Contributions
Declustering and load-balancing are important issues in parallelization of the typical HPGIS operations
like range-query and map-overlay operations. Several researchers have used declustering and load-balancing
towards parallelization of the traditional range-query problems. Kamel and Faloutsos [22]
used local load-balancing-based data declustering to maximize the throughput of range queries over
data-sets consisting of two-dimensional rectangles. Zhou et al. [33] describe mapping-function-based
declustering methods for parallelizing the grid files in the context of traditional range queries. Brunetti
et al. [8] used row-wise division of two-dimensional regular grids in parallel algorithms for characterizing
terrain data. Armstrong et al. [4] used row-wise partitioning of 2-d grids for parallelizing an algorithm
to determine the spatial association measures for point data.
It has been shown that customized declustering techniques based on space-division mapping functions
[9, 33], proximity-based local load-balance [17, 19, 22, 27], and similarity graph-partitioning [27]
are needed to effectively partition spatial data. In the case of uniformly distributed point data, it has
been shown that the static declustering is often adequate for achieving a good load-balance, by formal
methods [33] as well as by experimental studies [4, 8, 33]. However, the effective declustering of sets of
extended objects has not received adequate attention in the literature.
In the case of extended spatial objects, static-declustering methods alone might not be enough to
achieve good load-balance. In such a case, both static partitioning and DLB techniques can be used.
Wang [32] used dynamic allocation of work at different levels (e.g, polygons, edges) for map-overlay
computation. In addition, several dynamic load-balancing methods have been developed [12, 20, 23, 25]
for load-balancing in different applications. Data-Partitioning for map-overlay [32], spatial-join, and
access methods [18, 19] is not related to the work presented in this paper.
Declustering and dynamic load-balancing for extended spatial-data have not received adequate attention
in the literature. In this paper, we focus on static data-declustering and dynamic load-balancing
methods for parallelizing the GIS-range-query problem over sets of extended objects like line-segments
and polygons. We provide a framework for declustering collections of extended spatial objects by identifying
the following issues: (i) the work-load metric, (ii) the spatial extent of an object's work-load, (iii)
the distribution of the work-load over the spatial extent of the object, and (iv) the declustering method.
In addition, we also provide a framework for dynamic load-balancing for GIS operations by identifying
the issues of (i) work transfer methods, (ii) identifying the donor processor, and (iii) the granularity
of work transfer. We identify and experimentally evaluate alternatives for each of these issues for the
range query operation, using vector data for Killeen, Texas. The experiments are carried out on the
Cray T3D, which is a distributed-memory MIMD machine consisting of DEC Alpha (150 MHz) processors
interconnected by a 3-D torus network.
We show that the traditional declustering methods [27] for multi-dimensional point data need significant
extensions to be applicable to extended spatial data. We also show that neither declustering nor
dynamic load-balancing alone is sufficient for achieving good speedups beyond 8 processors.
Static declustering of extended spatial data is hard, due to the highly non-uniform data distribution as
well as the great variation in the size and extent of spatial data. Experiments show that the spatial-extent
and the work-load metric are important measures in developing a declustering method. We show that
data replication is often needed for dynamic load-balancing, as the cost of local processing is usually
less than the cost of data transfer for extended objects. In addition, experimental results also show that
the effectiveness of dynamic load-balancing techniques can be further improved by using declustering
methods to determine the subsets of spatial objects to be transferred during run-time.
1.4 Scope and Outline of the Paper
Figure
1 shows two types of queries: First, a query to retrieve data from secondary storage to main
memory. Second, a query (8kmX8km) to retrieve data from main memory to the graphics engine. In
this paper, we focus on the latter type of range-queries where the data is assumed to be in the main
memory.
Several techniques like preprocessing the spatial data can be used to reduce the sequential cost of the
GIS-range-query problem. The cost of the range-query processing can also be reduced by noting that
consecutive range-queries may spatially overlap with the previous range-queries. In this case, the new
range query can be considered as an increment of the previous range query and hence, incremental range-
query methods can be used to solve this problem. But this incremental range-query can be expressed as
a combination of one or more smaller range-queries.
The GIS-range-query problem can also be solved using pre-computation of the results. For this, a
fine grid is laid on top of the data and the intersections of all the spatial objects and the grid cells are
computed and stored in the main memory. Since every range-query will be some combination of the grid-
cells, the intersection results for each of the grid-cells which make up the range-query can be retrieved
and sent to the graphics engine. On the other hand, in the case of data-partitioning approaches, large
objects may be decomposed into smaller objects to improve the load-balance among different processors,
thus increasing the efficiency of the solution.
But these two approaches result in increased total work for the graphics engine, as it has to process
more objects in the same amount of time. The cost of rendering at the graphics engine also increases
with the increased number of polygons. In addition, the decomposition of objects requires more memory
to store the objects. On the other hand, if the smaller pieces are to be merged again into a single
object after the range-query operation, the merging will result in increased total work for the HPGIS
component, as merging of the smaller objects increases the total work.
For example, Figure 4 shows different combinations for partitioning polygonal data into smaller sets.
These combinations can be grouped into four types: Type I has no division of data. Type II divides
the set of polygons into subsets of polygons. However, each polygon is treated as an atomic unit and
sub-division at the polygon level is not allowed. In contrast, type III divides the areas of individual
polygons/bounding-boxes among different processors. Type IV schemes divide both the areas and the
edges of individual polygons and the bounding box. The potential advantage of type III and IV schemes
over a type II scheme is the possibility of better load-balance and less processor idling, resulting in
reduced parallel computation time [32]. However, note that types III and IV schemes result either in
increased total work or in increased work for the polygonization of the result.
[Figure 4: Alternatives for Polygon/Bounding-Box division among processors. Options for dividing the bounding box (no division, or division into small boxes) are combined with options for dividing the polygon data (no division, subsets of polygons, small polygons, subsets of edges), giving the schemes I, II, III-a through III-d, and IV-a through IV-f.]
Let T_comm be the response-time overhead, due to additional communication cost, or the increased
cost for the polygonization of the resulting polygons for type III and IV schemes. The gain in parallel-
computation time due to improved load-balancing is bounded by the difference between the ideal value
(T_seq/P) and the actual T_P value achieved by a type II scheme. The net gain in response time by any
type III or IV scheme over a type II scheme is bounded by [T_P(scheme II) - T_seq/P - T_comm].
This gain is
positive only when polygon-size distributions are extremely skewed, leading to high load imbalances for
type II schemes. Even though these techniques can potentially increase the load-balance and response
time for the GIS-range-query, we do not consider these techniques in this paper. In the rest of this
paper, we focus only on type II schemes.
The rest of the paper is organized as follows. In Section 2, we discuss the issues in declustering
extended spatial data. In Section 3, we present the experimental results for different issues in declustering
spatial data. In Section 4, we discuss the dynamic load-balancing issues in GIS. In Section 5, we present
the experimental results for DLB issues in GIS. Finally in Section 6, we present the conclusions and
future work.
2 Declustering Spatial Data
The goal of a declustering method is to partition the data so that each partition imposes exactly the
same load for any range query. Intuitively, the polygons close to each other should be scattered among
different processors such that for each range query, every processor has an equal amount of work. For
example, consider the raster representation of a set S of spatial vector objects in a 2-d plane. Suppose
that each point of the raster representation is associated with the work-load of the vector objects that
pass through that point. Now consider the distribution D of this work-load associated with each point.
For example, the distribution might look like the surface shown in Figure 5. Now consider another
distribution DP , which is the scaled down version of the distribution D, by a factor of P . Suppose that
the set S is declustered into P subsets so that each subset is assigned to a different processor. Then
if each of the P subsets has the work-load distribution DP , the work-load imposed for a query will be
equal at all the processors. Hence, this data-partitioning achieves the goal of optimally declustering S
into P subsets.
[Figure 5: An example of work-load distributions for P = 2: the distribution D of the work-load over the space and the scaled-down distributions D_P assigned to the processors.]
Optimal declustering is not achievable in all cases due to the non-uniform distribution and variable
sizes of polygons (or chains of line-segments). In addition, the load imposed by each polygon (or chain)
for a query operation is a function of the size and location of the query. Since the location of the query
is not known a priori, it is hard to develop a strategy that will be optimal for all queries. In general,
there exists no algorithm which can achieve the ideal declustering for all 2-d range-queries for more
than 5 processors [33]. Even in cases where it is possible to achieve the ideal declustering, it is hard to
determine this partitioning, since the declustering problem is NP-Hard, as shown below.
Definition 1. The optimization version of the GIS-Declustering problem: Given a set S of extended
objects, P processors, and a set of n range-queries Q = {Q_1, ..., Q_n}, partition the set S among the P
processors such that the load at each processor is balanced for all Q_i in Q. The load of an object x in S
for a given range-query Q_i is given by a function f_i : S -> Z+, where Z+
is the set of non-negative integers.
Definition 2. The decision version of the GIS-Declustering problem: Given a set S of extended
objects, P processors, and a set of n queries, is there a partition of the set S into P
subsets S_1, ..., S_P such that, for every query Q_i, the total load sum_{x in S_j} f_i(x) is the same for every subset S_j, j = 1, ..., P ?    (1)
Theorem. The GIS-Declustering problem is NP-Hard.
Proof. We reduce the PARTITION problem [14] to the GIS-Declustering problem. An instance of
the PARTITION problem is defined as follows: Given a finite set A and a "size" s(a) for each
a in A, is there a subset A' of A such that:
sum_{a in A'} s(a) = sum_{a in A - A'} s(a) ?    (2)
This problem can be transformed in polynomial time to an instance of the decision version of the
GIS-Declustering problem with P = 2, n = 1, S = A, and f_1(a) = s(a) for each a in A. Hence, we conclude that
the GIS-Declustering problem is NP-Hard.
Since the declustering problem is NP-Hard, heuristic methods are used in practice for declustering
extended spatial data. In this section, we identify the issues for declustering sets of extended spatial
objects and develop heuristic methods for declustering maps with extended objects.
2.1 Issues in Declustering Spatial-Data
There are three major issues in declustering sets of extended spatial objects: the work-load metric, the
spatial-extent of work-load, and the load-density over the spatial-extent.
Work-Load Metric
The load imposed by a spatial object is a function of the shape and extent of the object. In the case
of point data, this load may be uniform, i.e., the same for all spatial points. In the case of chains of
line-segments, this load may be a function of the number of edges, and in the case of a polygon, the load
may be a function of the number of edges and/or the area of the polygon. For example, as the number
of edges increases, the work for each range query also increases, due to the increase in intersection point
computations or the increase in size of the result. Similarly, an increase in the area of a polygon (with
the number of edges being fixed) results in more range queries intersecting the polygon. So in the case
of an extended spatial object A, either the area, the number of edges, or the actual intersection points
with the query boundary can be used in estimating the work-load (denoted by load(A)) for A.
We note that for extended spatial data, there is no accurate method of estimating the amount of
work other than to actually solve the problem. The number of edges/points in a spatial object may not
accurately reflect the amount of work required for that object for a particular range query, and we can
only get a rough estimate of the work by the work-load metric.
The Spatial Extent of the Work-Load
The spatial extent of the work-load is defined as the region R(A) of space affected by an object A, i.e.
if a query Q overlaps with R(A), then the work required to process Q is influenced by the object A.
Usually, R(A) depends on the space occupied by object A. However, it is often expensive to use the exact
geometry of each spatial object in estimating the extent of that object. Thus, approximate geometries
are considered in estimating the spatial extent. The spatial extent R(A) is often approximated as
R(A) = point(A) if A is approximated with a point,
R(A) = bb(A) if A is approximated with a box,
R(A) = bb_1(A) ∪ ... ∪ bb_n(A) if A is approximated with n boxes.
For example, when bb(A) is the smallest rectangular box enclosing the object A and is represented by
its two corners (x_low, y_low) and (x_high, y_high), the overlap function may be defined as:
overlap(bb(A), Q) = true if and only if x_low(A) <= x_high(Q), x_low(Q) <= x_high(A), y_low(A) <= y_high(Q), and y_low(Q) <= y_high(A).
Figure 6 shows some example polygons with different approximations of the extent of the work-load.
The figure also shows a sample range query in dotted lines. Polygon A is approximated with a point
which is shown in the middle of the polygon. The main drawback of the point approximation is that
even though the object is in the region of interest (e.g., Q_1), it might still be considered to be outside
if the point lies outside that region, as shown in the case of polygon A. Alternatively, the bounding box
approximation can be used, as shown in Figure 6, for polygons A, B, C, and E. The drawback with this
approximation is that even though the polygon is not in the region of interest, the bounding box might
still be in the region of interest, as shown for polygon E. Alternatively, multiple bounding boxes may be
used to represent a polygon, as shown for polygon D. But note that even though a greater number of
bounding boxes gives a better representation of the spatial extent of the work, it is also more expensive
to construct this kind of representation.
[Figure 6: Examples of approximations for the extent of the work-load, showing polygons A through E and a sample range query Q_1 (dotted).]
Load Density for Spatial Extent
In the case of extended objects, the distribution (or density) of the work-load over their spatial extent
affects the declustering decisions. If it is expensive to determine the actual work-load distribution, an
approximate distribution or a uniform distribution may be used instead of the actual distribution. An
approximate distribution of the work can be determined by considering multiple bounding boxes or by
dividing the region into small cells and counting the work in each of the cells.
For example, in the case of polygon B shown in Figure 6, the clipped load (denoted by
clipped_load(B, Q_1)) corresponding to query Q_1 (shown by the dotted line) can be estimated in different
ways. If we assume that the work-load distribution of the polygon is uniform in the bounding box
of polygon B, then we can compute the clipped load as:
clipped_load(B, Q_1) = [area(bb(B) ∩ Q_1) / area(bb(B))] × load(B)    (4)
Note that this work estimate may be inaccurate in a few cases. For example, an edge-based work-load
metric coupled with an assumed uniform work-load distribution overestimates the work required for
polygon C for range-query Q 1 , and an area-based work-load metric coupled with a uniform work-load
distribution overestimates the work required for polygon E for range-query Q 1 .
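A small Python sketch of this uniform-density estimate (Equation 4) is given below; boxes are represented as (xmin, ymin, xmax, ymax) tuples, and load(B) may be, e.g., the edge count or the area of B. The representation and function names are ours, not the paper's.

def box_intersection_area(a, b):
    # Area of the overlap of two axis-aligned boxes (0.0 if they are disjoint).
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return w * h if w > 0 and h > 0 else 0.0

def clipped_load(bb_B, load_B, window):
    # clipped_load(B, W) = area(bb(B) ∩ W) / area(bb(B)) * load(B)
    bb_area = (bb_B[2] - bb_B[0]) * (bb_B[3] - bb_B[1])
    if bb_area == 0:
        return 0.0
    return box_intersection_area(bb_B, window) / bb_area * load_B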
2.2 Declustering Methods
Since the declustering problem is NP-Hard, heuristic methods are used for declustering spatial data.
Here, we describe three heuristic methods based on the ideas of space-partitioning with mapping
functions, local load-balance, and similarity-graph. In addition, we propose a new population-
distribution-based declustering method for declustering spatial data. For simplicity, we describe these
methods for polygon data, but they can be applied to other extended spatial objects as well.
2.2.1 Space-Partitioning Mapping Functions
Space-Partitioning mapping-function-based methods provide a mapping function from the domain of
data items to the set of processor IDs. For example, a mapping function can be based on the Hilbert
Space-filling curve [7, 21]. (See [10] for a survey of other mapping functions.) The Hilbert curve gives a
total ordering of points in 2-dimensional space. Polygons can be declustered using the Hilbert method
as follows.
Let L_s be the set of input objects, and let L_p be the ordered list of polygons corresponding to the
Hilbert order for the set L_s, and let n be the number of polygons in the list. The
polygons in the list are then assigned to each processor in a cyclic manner. That is, the polygons in the
list L_p with indices i, i + P, i + 2P, ... are assigned to the ith processor.
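A sketch of this scheme in Python is shown below. The Hilbert index is computed with the classic bitwise conversion on a 2^k x 2^k grid; using polygon centroids quantized to that grid, and the grid resolution itself, are assumptions of this sketch rather than details given in the paper.

def hilbert_index(side, x, y):
    # Map grid cell (x, y), with 0 <= x, y < side and side a power of two,
    # to its position along the Hilbert curve (classic xy-to-d conversion).
    d = 0
    s = side // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                      # rotate/flip the quadrant
            if rx == 1:
                x = side - 1 - x
                y = side - 1 - y
            x, y = y, x
        s //= 2
    return d

def hilbert_decluster(centroids, num_procs, side=1024):
    # centroids: integer grid coordinates (one per polygon) in [0, side).
    # Returns owner[i] = processor assigned to polygon i (cyclic assignment).
    order = sorted(range(len(centroids)),
                   key=lambda i: hilbert_index(side, centroids[i][0], centroids[i][1]))
    owner = [0] * len(centroids)
    for pos, poly_id in enumerate(order):
        owner[poly_id] = pos % num_procs
    return owner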
2.2.2 Local Load-Balance (LLB) Method
Local load-balancing methods [22] consider a sample window of space (based on the frequent range-
queries) and try to equally distribute the load in that window to all the processors. The local load-balance
method with a parameter window W has the following steps: (i) From the set of polygons p_1, ..., p_n,
assign the first P polygons to the P processors, (ii) For the next polygon in the list, consider the load
corresponding to window W at each processor and select the processor with the minimum load, and (iii)
Assign the next polygon to that processor. Repeat the steps (ii) to (iii) until all the polygons have been
assigned.
At step (ii) of the above method, a processor with the minimum load is selected as follows. Let
weight(W, i) be the sum of clipped_load(p_j, W) over all polygons p_j assigned to processor i. Then select the processor k such
that weight(W, k) is the minimum of weight(W, i) over i = 1, ..., P.
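The following Python sketch of the LLB heuristic reuses the clipped_load estimate from the earlier sketch of Equation 4; the data layout and names are ours.

def llb_decluster(bboxes, loads, window, num_procs):
    # Greedy local load-balance: seed each processor with one polygon, then send
    # every further polygon to the processor with the smallest weight(W, i).
    owner = [0] * len(bboxes)
    weight = [0.0] * num_procs
    for j in range(len(bboxes)):
        if j < num_procs:
            k = j                                             # step (i)
        else:
            k = min(range(num_procs), key=lambda i: weight[i])  # steps (ii)-(iii)
        owner[j] = k
        weight[k] += clipped_load(bboxes[j], loads[j], window)
    return owner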
2.2.3 Similarity-Graph Method
The similarity-graph declustering method [27] has been shown to outperform other methods for declustering
non-uniformly distributed data. This is a heuristic method based on the max-cut graph-partitioning
of a weighted similarity-graph (WSG), where WSG models the data and some properties of the queries.
As in the case of the LLB method, a rectangular window W can be used as a sample query for
efficiency. The WSG is then constructed w.r.t. this window W by assigning clipped_load(v, W) as t(v)
for each object v in the input. In our experimental study, we use the incremental max-cut partitioning [27]
approach for declustering the spatial data. See Appendix A for details of the similarity-graph declustering
method and how it can be applied to extended spatial-data.
2.2.4 Population Distribution-Based (PDB) Method
The goal of a population-distribution-based declustering method is to achieve identical load distribution
on each partition of the data. We discuss an example of the population-distribution-based method for
declustering polygonal data. The basic idea behind this method is to partition the data sets into groups
of similar work-load distribution over the entire space, as shown in Figure 5. The work-load distributions
in each group over the entire space are compared for allocating a new object to a group. The new object is
allocated to a group such that the statistical difference between the different groups is minimal. However,
tracking and comparing two distributions for statistical differences is expensive. An economical but less
accurate method is to use an approximate distribution instead of the actual work-load distribution. We
use a pair of discrete 1-d distributions to approximate the actual 2-d distribution.
This method uses the actual intersection points of polygons with a grid consisting of vertical and
horizontal scan-lines imposed on top of the polygonal data, as shown in Figure 7. Assume that there are
n scan-lines parallel to the x-axis and m scan-lines parallel to the y-axis. Then let f(x_i), 1 <= i <= m,
be the number of intersection points of the line x = x_i with all the polygons in the input. Similarly, let
g(y_j), 1 <= j <= n, be the number of intersection points with the line y = y_j.
[Figure 7: Distribution-Based Method.]
Without loss of generality, let the polygons in the input be p_1, p_2, ..., p_N. To distribute these polygons
among the processors, allocate the first P polygons to the P processors such that polygon p_i is assigned to
the ith processor. For the next polygon p_w, determine the distribution of intersection points for all the
assigned polygons plus the current polygon, and scale down the distribution by P. Let this distribution be
the base-distribution. That is, the base distributions f_w and g_w are similar to f(x_i) and g(y_j),
but the base distributions contain the intersection points of polygons p_1, ..., p_w only, scaled down by P. Consider the P
different assignments of polygon p_w to the P processors, and estimate the total population mismatch due
to each assignment. The total population mismatch of an assignment is estimated as the sum of the
squared differences of the distributions at each processor with the base-distribution. Then select the
processor corresponding to the minimum population mismatch as the processor for assigning the current
polygon. The minimization function for assigning polygon p_w is given as:
min_{1 <= l <= P} sum_{i=1}^{P} [ sum_{j=1}^{m} (f_i(x_j) - f_w(x_j))^2 + sum_{j=1}^{n} (g_i(y_j) - g_w(y_j))^2 ]    (5)
where the current polygon p_w is temporarily assigned to the lth processor in each iteration of the minimization
function, and f_i and g_i are the distribution functions (corresponding to f and g, respectively)
at the ith processor. Note that f_i and g_i contain the intersection points of only those polygons which
are assigned to the ith processor.
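A brute-force Python sketch of this allocation step is given below. It evaluates Equation 5 directly (the incremental update discussed next is omitted), and the data layout, in which per-polygon and per-processor scan-line counts are kept as plain lists, is an assumption of the sketch.

def pdb_assign(poly_x, poly_y, fx, gy, base_x, base_y, num_procs):
    # poly_x / poly_y: intersection counts of the new polygon with each scan-line.
    # fx[i] / gy[i]: current distributions at processor i.
    # base_x / base_y: base distributions (all assigned polygons plus this one,
    # scaled down by num_procs).
    best_proc, best_mismatch = 0, None
    for l in range(num_procs):
        mismatch = 0.0
        for i in range(num_procs):
            add_x = poly_x if i == l else [0] * len(poly_x)
            add_y = poly_y if i == l else [0] * len(poly_y)
            mismatch += sum((fx[i][j] + add_x[j] - base_x[j]) ** 2
                            for j in range(len(base_x)))
            mismatch += sum((gy[i][j] + add_y[j] - base_y[j]) ** 2
                            for j in range(len(base_y)))
        if best_mismatch is None or mismatch < best_mismatch:
            best_proc, best_mismatch = l, mismatch
    # Commit the assignment to the chosen processor.
    for j in range(len(poly_x)):
        fx[best_proc][j] += poly_x[j]
    for j in range(len(poly_y)):
        gy[best_proc][j] += poly_y[j]
    return best_proc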
Complexity of the PDB Method for Allocating 1 Polygon
The innermost sum of Equation 5 takes Θ(n + m) time, and since this sum is computed for each processor,
it takes Θ(P(n + m)) for the double summation. Since there are P iterations of this double sum
(i.e., P iterations of the minimization function), it takes a total of Θ(P^2(n + m)) time for a brute force
implementation of this method. But note that between two iterations of the minimization function,
only four terms of the innermost summation change at each processor. Hence we need not compute
the entire sum for each iteration of the minimization function, as we can reuse the rest of the terms
from the previous iteration. Hence, after the first iteration of the minimization function, each further
iteration takes a constant amount of time. This reduces the overall complexity of the PDB method to Θ(P(n + m)) for allocating one polygon.
3 Experimental Evaluation of Declustering Issues
We compare the performance of different alternatives for each of the issues in declustering extended
spatial objects for a range of map sizes and for different number of processors via experiments carried
out on the Cray T3D parallel computer.
We use spatial vector data for Killeen, Texas, for this experimental study. This data is divided into
seven themes representing the attributes slope, vegetation, surface material, hydrology, etc. We used the
"slope" attribute-data map with 729 polygons and 41162 edges as a base map in our experiments (this is
denoted by 1X map). For studying the effect of increased map size, we derived new maps from this base
map using the following method: Scaling down the base map along the x-axis by two and combining
two such scaled-down maps by translating one of the scaled-down maps along the x-axis. This results in
a map of 1458 polygons with 82324 edges (2X map). A similar technique is used by alternately scaling
down along the y-axis and the x-axis to get maps of different sizes. We also use the chain data from
Fort Sill which has 9667 creeks with 188,678 edges, as shown in Figure 8. Table 1 shows the details of
the maps and the range queries.
[Figure 8: Creek data map with a sample range query.]
Table 1: Maps and range queries used in our experiments
Map     #Objects        #Edges     Range-query size   Number of queries
1X      729 polygons    41162      25%                75
2X      1458 polygons   82324      25%                75
4X      2916 polygons   164648     25%                75
8X      5832 polygons   329296     25%                75
Creek   9667 chains     188,678    20%                75
3.1 Experimental Methodology
The issues in declustering are studied by comparing the performance of different methods for a set of
range queries. For this, a sequence of 75 range queries is constructed such that the sequence of the
center points of the range query represents a random walk on the data set. Post-processing is done on
this sequence to ensure that all range queries are unique and that the range-query lies completely within
the map. The size of each range query is approximately 25% of the total area of the map. In all our
measurements, we obtain the run time of the program for each of the 75 queries and report the observed
mean of these 75 values. Figure 9 shows our experimental methodology. The number of different options
we tried for each parameter is shown in parentheses, and the number of possible combinations after each
module is also shown in the figure.
We restrict our experiments to P <= 16 due to the memory limitation. Individual
nodes on the Cray T3D have only 64 MBytes of main memory, limiting the size of the map (4X) for which
sequential run-time can be measured directly. This map (4X) does not have adequate work for each
processor beyond P = 16, as is evident from the absolute run-times (<= 0.05 sec) shown in Tables 2 and 5.
[Figure 9: Experimental Method for Evaluating Declustering Methods: a map generator (base maps 1X, 2X, 4X, 8X), a range-query generator (desired number and size of range queries), and a decluster module with options for the work-load metric (#edges, area), the spatial extent (bounding box, point), the load density (uniform, approximate), the sample-window size (30%, 100%), and the declustering method, followed by data collection, measurements, and analysis.]
In our experiments, we only measure and analyze the cost per range-query and exclude any preprocessing
cost. This preprocessing cost includes the cost of loading the data into main memory and the
cost of declustering the data among different processors. Note that this preprocessing cost is paid only
once for each data set that corresponds to the current window of interest. As the query range moves out
of the current window, new data is fetched from the disk discarding data for the old window. Since the
next location of the window can often be predetermined, preprocessing the new data need not affect the
performance of the rest of the system. Moreover, once a new data set is loaded into the main memory,
it would be active for several minutes before the window has moved out of the current range. Thus,
this would leave several minutes for preprocessing the next data set. Hence, in this study, we are only
interested in measuring the performance of our algorithm in terms of the variable cost per range query
for the preprocessed data.
3.2 Experimental Results
We conduct experiments to study alternatives for each of the following issues: the work-load metric, the
spatial extent of the work-load, and the load density over the spatial extent. In addition, we compare
the different declustering methods: Local Load-Balance, Similarity-Graph, and PDB.
In these experiments, the data is initially distributed among different processors. A processor acts as
the leader processor and is responsible for broadcasting each range query to the rest of the processors.
After receiving the range-query information, each processor works only on its local data until all the
local data is exhausted. After the local data is processed, the processor waits for the next range query
to be processed. The lead processor waits for all the processors to finish the work before broadcasting
the next range query. Note that the only communication required for each bounding box is a broadcast
of the parameters of the range query.
3.2.1 Comparison of Alternatives for Work-Load Metrics
we compare the area and the number of edges as alternatives for the work-load metric in the case of polygonal
data. The spatial extent of the work-load is based on the bounding-box approximation, and the load
density over the spatial extent is assumed to be uniform. Thus clipped_load(polygon P, window W) is
estimated using Equation 4. We used the LLB method with a sample window of 30% as the declustering
method. The number of processors P varies from 2 to 16, and the 4X map is used as the data set.
The results of this experiment are shown in Figure 10(a). The x-axis gives the number of processors,
and the y-axis gives the average speedups for 75 range queries. The main trends observed from this
graph are: (i) The number of edges as the work-load metric results in better speedups and hence appears
to be a more accurate work-load metric for larger P. (ii) For a small number of processors (P <= 4), the difference between the
two work-load metrics is negligible.
3.2.2 Comparison of Alternatives for the Spatial-Extent of the Work-Load
We compare point and bounding-box approximators as alternatives for the spatial extent of the work-load
in the case of polygonal data. The work-load metric is fixed to be the number of edges, and the load
density over the spatial extent is assumed to be uniform. We used the LLB method with a sample window
of 30% as the declustering method. The number of processors P ranges from 2 to 16, and the 4X map is
used as the data set.
[Figure 10: Speedups for the LLB method for the 4X map: (a) work-load metric, number of edges ("llb_edge_box") versus area ("llb_area_box"); (b) spatial extent, bounding box ("llb_edge_box") versus point ("llb_edge_point").]
The results of this experiment are shown in Figure 10(b). The x-axis gives the number of processors
and the y-axis gives the average speedups for 75 range-queries. The main trends observed from this
graph are: (i) The bounding-box approximator for the spatial extent results in better speedups, and
hence appears to be a more accurate estimator for larger P. (ii) For a small number of processors (P <= 4), the difference
between the two estimators is negligible.
3.2.3 Comparison of Different Declustering Methods
We compare the performance of different declustering methods: Hilbert, LLB, similarity-graph, and
PDB. In addition, we compare the effect of the size of the sample window on the performance of
similarity-graph and LLB methods. For simplicity, the work-load metric is fixed to be the number of
edges, and the spatial extent is assumed to be a point. The load-density over the spatial extent is
assumed to be uniform in the case of LLB and similarity-graph methods.
Figure 11 gives the results showing the effect of sample window size for the LLB and similarity-graph
methods. The x-axis gives the number of processors and the y-axis gives the average speedups for 75
range-queries. In Figure 11, "llb-30" ("sim-30") refers to the LLB (similarity-graph) method with a
sample window which is 30% of the total area of the map. Similar notation is used for a 100% window
for both methods. The main trends observed from these graphs are: (i) Increased window sizes give
increasing speedups. (ii) For the LLB method, the increase in speedup from a 30% window to a 100%
window is negligible.
[Figure 11: Speedups for LLB and Similarity-Graph methods for different window sizes ("llb-30", "llb-100", "sim-30", "sim-100"). Speedups for maps 2X and 4X are given in (a) and (b), respectively.]
Figures 12 and 13 show a comparison of different declustering methods for polygon and chain data,
respectively. In these figures, the x-axis gives the number of processors, and the y-axis gives the speedup
value. The main trends observed from these graphs are: (i) Bigger maps lead to better speedups for
most schemes, probably due to the improved load-balance. (ii) Similarity-Graph and PDB methods give
the best speedups among the different methods. (iii) Speedups are better for the chain data than for
the polygon data. This may be due to less variance in work-loads for line data, when compared to the
polygon data. (iv) Mapping-Function-Based methods like Hilbert provide inferior speedups beyond 8
processors. (v) Even the best declustering method does not provide good speedups for more than 8
processors, for the maps used in our experiments.
3.3 Comparison of Static Load-Balancing
The effectiveness of declustering methods in achieving load-balance is shown in Table 2. The data shown
in Table 2 is represented as Mean ± SD for the 75 range queries used in our experiment. The column
Avg. Static gives the average static execution time over 16 processors and 75 range queries. The column
Max. Static gives the maximum static execution time over 16 processors, averaged over 75 range queries.
In this experiment, we observe that the static declustering alone does not achieve a good load-balance,
and that the static methods need to be augmented with dynamic load-balancing.
Table 2: Performance Evaluation of SLB for P = 16
Method   Avg. Static       Max. Static       Speedup
SIM      0.0454 ± 0.003    0.0621 ± 0.004    11.70
PDB      0.0454 ± 0.003    0.0626 ± 0.004    11.60
LLB      0.0454 ± 0.003    0.0660 ± 0.003    11.00
[Figure 12: Speedups for different static-declustering methods (Hilbert, Similarity-Graph, LLB, PDB). Speedups for maps 2X and 4X are given in (a) and (b), respectively.]
4 Dynamic Load-Balancing (DLB) Techniques
If static declustering methods fail to equally distribute the load among different processors, the load-balance
may be improved by transferring some spatial objects to idle processors using dynamic load-balancing
techniques.
4.1 DLB Issues in GIS
A typical dynamic load-balancing technique addresses three issues: (i) what methods are good for
transferring work (spatial objects) between two processors, (ii) how much more work should an idle
processor fetch, and (iii) which processor should an idle processor ask for more work.
4.1.1 Methods for Transferring the Work
Extended spatial objects are large (e.g., 50 edges on average in maps of Killeen, Texas) in size and
require special data structures for solving the range-query problem. Hence, sometimes it may be more
expensive to send the complete object data and the corresponding data structures to another processor
than to solve the problem locally. To compare the relative costs of local processing and data transfer,
we develop cost models for these two operations.
The cost of computing the intersection of range query Q with a polygon A depends on whether A
intersects Q or not. For example, if A is completely inside Q, it can be detected in a constant amount
of time. On the other hand, if A intersects Q, the cost of intersection computation and polygonization
depends on the number of intersection points and the size of the result.
[Figure 13: Speedups for different static-declustering methods (Hilbert, Similarity-Graph, LLB, PDB) for line data.]
Let p_0 be the probability that a polygon intersects at least one of the edges of the range query, and let x be the number of edges of A.
Then the sequential cost T_s(A) is given by:
where α_0 is the fraction of the edges of A that actually intersect Q, and t_c is the cost of one step of
computation. For simplicity, we assume that the cost of the intersection computation and the polygonization
of the result is a linear function of (α_0 p_0 x). The constant C_2 accounts for checking if the bounding box of a
polygon is completely inside or completely outside the query box Q. Since this test can be performed
using 8 comparisons, C_2 = 8. Typically, ... for the data used in our experiments.
Similarly, the transfer cost T_t(A) can be modelled as a linear function of the number of edges x as:
Here the constant C_3 is included to account for the transfer, packing, and unpacking of the data structures
and the data associated with A, and typically C_3 > 2. Assuming t_s = 0, the transfer cost T_t is
more than the local processing cost T_s when:
For the GIS-range-query computation, the value of x is small (close to 1). This implies that even for
small objects, the transfer cost T_t is more than the local processing cost. We note that even when t_s > 0,
this relation remains the same. This drawback may be overcome by selectively duplicating the data on
different processors and exchanging only the object IDs. Since the object ID is only a word of data,
this will result in minimum communication overhead for each data transfer. Note that this replication
of data at different processors results in memory overhead.
4.1.2 Partitioning Method and Granularity of Transfers
Granularity of work division determines how much work is transferred between a donor processor and an
idle processor. This granularity may depend on the size of the remaining work, the number of processors,
the cost of the work transfer, and the accuracy in estimating the remaining work. Several strategies
like self-scheduling [12], factoring scheduling [20], and chunk scheduling [23] exist for determining the
amount of work to be transferred. The simplest case of transferring one piece of work at a time is
also considered in some cases.
If communication cost is negligible or very small when compared to the average cost of solving the
range-query problem for a set of objects, chunks of single objects may yield the best possible load-
balance. On the other hand, chunks of more than one object are suitable if the communication cost is
comparable to the average cost of solving the range-query problem for a set of objects (which is true for
most of the distributed memory systems). In the case of chunks of more than one object, it is desirable
to keep a comparable amount of work in each chunk, so that the load-imbalance can be kept low.
We note that this problem of dividing work into chunks of equal work is similar to the static declustering
problem. Even though the traditional DLB methods use simple methods like random partitioning,
round robin, etc., we hypothesize that the load-balance of any DLB method can be improved by using
a systematic declustering method for dividing the work into chunks. Since the declustering operation is
very expensive, this chunking can be done statically. Also note that, for simplicity, we do not consider
dynamically variable size chunks in this paper.
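A small sketch of this idea is given below: chunks are formed statically by dealing polygons out in round-robin fashion along a declustering order (here, the Hilbert order from the earlier sketch), so that each chunk carries a comparable amount of work. The helper names are ours.

def build_chunks(poly_ids, hilbert_keys, num_chunks):
    # Order polygons along the Hilbert curve, then deal them out round-robin so
    # that every chunk samples the whole space and carries comparable work.
    order = sorted(poly_ids, key=lambda pid: hilbert_keys[pid])
    chunks = [[] for _ in range(num_chunks)]
    for pos, pid in enumerate(order):
        chunks[pos % num_chunks].append(pid)
    return chunks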
4.1.3 Which Processor Should an Idle Processor ask for More Work?
Methods to decide which processors an idle processor should ask for more work are discussed and analyzed
in [24, 25]. These methods can be divided into two categories: (1) In a pool-based method (PBM), a
fixed processor has all the available work, and an idle processor asks this fixed processor for more work.
(2) In a peer-based method, all the work is initially distributed among different processors, and an idle
processor selects a peer processor as the work donor using random polling, nearest neighbor, and global
round robin (GRR) or asynchronous (local) round robin (ARR).
Pool-Based Method
The structure of the GIS-range-query problem imposes a limitation on the amount of work that can
be kept in the shared pool. If all the work is initially at a single processor, the approximate filtering
computation for each range query cannot be parallelized. As a result of this non-parallelizable work,
the rest of the processors have to wait for a single processor to finish the filtering computation before
fetching the objects for intersection computation.
This processor idling can be avoided by initially partitioning the data into two parts: Static and
Pool. Initially, the Static part of the data is declustered into P sets, and the ith set is assigned to the ith processor, for i = 1, ..., P. The Pool part of the data is then assigned to a leader processor (processor 1).
For each range query, each processor other than the leader processor starts working on the local data
corresponding to the Static part. The leader processor first completes filtering the Pool, and then starts
working on its local data which corresponds to the Static part. This situation is shown in Figure 14.
Figure 14: A small pool may result in a high static load imbalance (a), while a large pool may result in processor idling (b).
If any of the processors finish work on their local data before the filtering step for the Pool part is
finished, that processor would have to wait for the lead processor to finish the filtering work with the
Pool part of the data, as shown in Figure 14(b). This idling in turn results in increased run time, which
decreases the performance of the algorithm. Hence, there should be enough work at each processor so
that the filtering step for the Pool can be completed without leading to any processor idling. But on the
other hand, the Static work at each processor should not be so much that the static load-imbalance is
too high. A high static load-imbalance can also result in processor idling, as shown in Figure 14(a).
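To make the scheme concrete, the following sketch (our own, simulated with Python threads instead of message passing) has the leader filter the Pool into bins of work while the other processors start on their Static part; every processor then draws bins from the pool once its static work is done.

# Sketch of the pool-based method (PBM) for one range query, simulated with threads.

import queue
import threading

def pool_based_query(static_parts, pool, process_obj, filter_pool, num_procs):
    bins = queue.Queue()
    pool_filtered = threading.Event()

    def worker(rank):
        if rank == 0:                           # leader: approximate filtering of the Pool
            for chunk in filter_pool(pool):
                bins.put(chunk)
            pool_filtered.set()
        for obj in static_parts[rank]:          # work on the local Static part
            process_obj(obj)
        pool_filtered.wait()                    # idling happens here if the pool is too large
        while True:
            try:
                chunk = bins.get_nowait()
            except queue.Empty:
                return
            for obj in chunk:
                process_obj(obj)

    threads = [threading.Thread(target=worker, args=(r,)) for r in range(num_procs)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

if __name__ == "__main__":
    pool_based_query(static_parts=[[1, 2], [3, 4], [5, 6]],
                     pool=list(range(10, 16)),
                     process_obj=lambda obj: None,
                     filter_pool=lambda p: [p[i:i + 2] for i in range(0, len(p), 2)],
                     num_procs=3)
    print("done")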
Let W be the total work required to solve the range-query problem and let j be the fraction of the total time spent in approximate filtering (i.e., the filtering stage of the range-query computation). Also, let φ be the load imbalance due to the static declustering of the data. That is, if the total work W is declustered among P processors, the maximum time taken by a processor is W(1+φ)/P and the minimum time taken is W(1−φ)/P. Further, assume that t_o/P is the overhead incurred due to the parallelization. Here t_o is the increase in total run time due to the communication overhead and processor idling.
Suppose a fraction x of the total work is taken as the Pool data. Then the Pool should be large enough to overcome the static load imbalance incurred due to the Static part of the data (Figure 14(a)); this requirement gives Equation 7, a lower bound on x. Also, the filtering cost jxW for the Pool should be less than the maximum time corresponding to the Static work (Figure 14(b)); this requirement gives Equation 8, an upper bound on x. Combining Equations 7 and 8, we get the lower and upper bounds for the pool size given in Equation 9.
Table 3 gives sample upper and lower bounds for x, estimated using Equation 9. The parallel overhead t_o is assumed to be zero and j is assumed to be 0.05.
Table 3: Estimated lower and upper bounds for pool size (j = 0.05, t_o = 0).
LowerBound
UpperBound 0.80 0.78 0.66 0.63 0.48 0.46
Peer-Based Methods
In peer-based methods, data is divided among all processors with no common pool, and an idle processor
asks another peer-processor for more work. In this paper, we evaluate the global round robin (GRR)
and asynchronous round robin (ARR) methods for the GIS-range-query problem. See [24] for a complete
discussion of these two algorithms.
In GRR, a single processor acts as the scheduler and is responsible for sending the ID of the next
available processor with work to a requesting idle processor. The idle processor then requests work
from this processor which has more work. The main drawback of such a scheme is that the scheduler
processor may become a bottleneck as the number of processors increases. In our experimental study,
this bottleneck is not significant, as the number of processors is relatively small, i.e., less than 32.
In ARR, every processor maintains a local target pointer. Whenever a processor runs out of work, it
uses the target pointer as the label of a donor processor and sends it a work request. The target value is
incremented modulo P each time a work request is sent. If the processor that receives the request has
more work, it sends some work to the requesting processor. Otherwise, the requesting processor sends
another request to the next processor given by the target pointer until more work is received from a
donor processor.
Note that of these two methods, the ARR method does not have the single processor bottleneck
as in the case of GRR. But the ARR method needs extra work to check for termination detection,
since there is no single source of information about the remaining work for each range query. Hence,
the advantage of this method over GRR may be offset due to the termination-detection overhead. For
the GIS-range-query problem, the performance of these two methods may be comparable, for up to 16
processors.
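A sketch of the ARR request loop run by an idle processor; send_request and try_receive_work are hypothetical stand-ins for the underlying message-passing calls (they are parameters here, not a real library API), and termination detection is left to the caller.

# Sketch: asynchronous round-robin (ARR) work request loop of an idle processor.

def arr_request_work(my_rank, num_procs, send_request, try_receive_work):
    """Ask peers for work, round-robin starting from a local target pointer."""
    target = (my_rank + 1) % num_procs              # local target pointer
    polled = 0
    while polled < num_procs - 1:                   # poll every peer at most once
        send_request(target, my_rank)               # ask 'target' for a bin of object IDs
        work = try_receive_work(target)             # None if the donor had no work
        target = (target + 1) % num_procs           # incremented modulo P per request
        if target == my_rank:
            target = (target + 1) % num_procs       # skip ourselves
        polled += 1
        if work is not None:
            return work
    return None                                     # caller runs termination detection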
4.2 A Framework for Parallel Formulations
In our approach, we use declustering at both the static and the dynamic load-balancing levels. We present a
general framework for this method which can be used with any of the declustering and DLB methods
discussed so far. This is a two-phase scheme, since we use an initial static declustering of the data and
use additional load-balancing at run time. Pseudocode for this general method is given in Figure 15.
In the following discussion, let P be the number of processors used in the system. Initially, all the
data is declustered into two sets, S a and S b . The set S a is used as the static data: Once the objects
from this set are allocated to a processor, that processor alone is responsible for processing these objects.
That is, objects from this set are never transferred between processors during the DLB phase. Similarly,
the set S b is used as the dynamic data: Objects from this set can be transferred between processors
during the DLB phase. We call the set S_b the shared pool of data, since the objects from this set can
be shared between processors during the DLB phase.
This initial declustering of the data into two sets is done depending on the desired size of the shared
pool of polygons and on the number of processors. (In the following section, we experimentally show
the variation in the size of S_a across different numbers of processors.) The data in S_a is statically declustered into P sets S_a^1, ..., S_a^P, and each processor P_i is assigned the set S_a^i (1 ≤ i ≤ P). The choice
of the declustering method is determined by the number of processors, the type of data, and the data
distribution.
The data in S b is also statically declustered into x buckets and is replicated at all the processors.
Again, note that any of the static-declustering methods discussed so far can be used for this static-
declustering purpose. The value of x is dependent on the size of S b , the number of processors, and the
communication cost. Hence, this parameter should be tuned depending on the data.
When a bounding box for the next range query is received, a designated lead processor (for example, processor 1) broadcasts the bounding-box parameters to all the other processors in the group. After
receiving the bounding-box parameters, each processor P i performs the approximate polygon-level filtering
and retrieves the candidate polygons from its local data set S_a^i, and places the result in set L_i. In
addition, each of the processors performs the approximate filtering for the data from set S b , and keeps
the resulting object IDs in a dynamic set, such that the set of object IDs from each of the x buckets is
in a separate bin. Each processor P i then independently works on data from the set L i until no more
objects are left in this set.
When a processor P i finishes work on the data from L i , it goes into the DLB mode. In this mode,
only the data from the dynamic set are used for dynamic load-balancing. The work is transferred by
transferring a bin of object IDs between processors. The algorithm terminates when the DLB method
terminates.
VAR local_data: Array[pidSet] of objects    /* map[i] to processor i using DECLUSTER(); corresponds to the data from S_a */
VAR global_data: Array[pidSet] of objects   /* map[i] to processor i using DECLUSTER(); corresponds to the data from S_b */
BEGIN
one_to_all_broadcast(0, pidSet, bbox);
/* static phase */
parallel for (pid in pidSet) do
sequential_algorithm(local_data[pid]);
/* go to the DLB phase */
parallel while (more work) do
object_ids := next(object IDs from the next unprocessed bucket);
sequential_algorithm(object_ids);
END
Figure 15: Pseudo-code for the Parallel Formulation.
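Below is a minimal executable rendering of the two-phase framework of Figure 15, under simplifying assumptions of our own: the shared set S_b is replicated at every processor, so the DLB phase only hands out bucket indices, and a lock-protected counter stands in for the scheduler; threads simulate processors.

# Sketch of the two-phase (static + DLB) parallel range-query framework.

import threading

def parallel_range_query(bbox, S_a_parts, S_b_buckets, filter_fn, clip_fn, P):
    next_bucket = [0]
    lock = threading.Lock()
    results = [[] for _ in range(P)]

    def worker(pid):
        # static phase: approximate filtering, then exact clipping of local data
        L_i = [obj for obj in S_a_parts[pid] if filter_fn(obj, bbox)]
        for obj in L_i:
            results[pid].append(clip_fn(obj, bbox))
        # DLB phase: fetch unprocessed buckets of the replicated pool
        while True:
            with lock:
                b = next_bucket[0]
                if b >= len(S_b_buckets):
                    return
                next_bucket[0] += 1
            for obj in S_b_buckets[b]:
                if filter_fn(obj, bbox):
                    results[pid].append(clip_fn(obj, bbox))

    threads = [threading.Thread(target=worker, args=(p,)) for p in range(P)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

if __name__ == "__main__":
    parts = [[(0, 0)], [(5, 5)], [(9, 9)]]
    buckets = [[(1, 1), (2, 2)], [(7, 7)]]
    inside = lambda obj, bbox: bbox[0] <= obj[0] <= bbox[1]
    clip = lambda obj, bbox: obj
    print(parallel_range_query((0, 6), parts, buckets, inside, clip, P=3))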
5 Experimental Evaluation of DLB methods
We compare different DLB methods when applied to the range-query problem over a set of extended spatial objects. We use the framework given in Figure 15 to implement the parallel range-query
algorithm. Our experiments are carried out on the Cray T3D parallel computer using the polygonal
data described in Table 1.
The alternatives for each of the DLB issues are evaluated by comparing their average performance
over a set of 75 range queries. In these experiments, the similarity-graph method with a 100% window
is used as the static declustering method, unless mentioned otherwise. For simplicity, the number of edges is
used as the work-load metric in the static declustering of data. Similarly, the spatial extent is assumed to
be a point and the load-density is assumed to be uniform. Figure 16 shows our experimental methodology
for evaluating the DLB issues. The number of different options we tried for each parameter is shown in
parentheses, and the number of possible combinations after each module is also shown in the figure.
The message start-up time t_s for the Cray T3D is about 100 nanoseconds, i.e., 0.1 microseconds. To study the effect of the parallel formulations on different communication networks, we simulate different networks by increasing the value of t_s (0.1, 10.1, and 100.1 μsec).
5.1 Evaluation of Work-Transfer Strategies
Work-transfer strategies can be compared on the basis of the following two parameters: (i) the average cost
T t of transferring the complete object data, including the data structures, from one processor to another
processor, and (ii) the average cost T s of solving the GIS-range-query (after polygon-level filtering) on
a single processor. Here T t includes the cost of packing and unpacking any data structures related to
the polygons after the filtering and the cost of sending the packed data from one processor to another
processor.
Figure 16: Experimental method for evaluating DLB methods for the parallel GIS range query (map generator producing 4 base maps, 75 range queries, partitioning/declustering options, pool sizes, and the DLB methods GRR, ARR, and PBM).
Table 4 shows actual experimental values for T_s and T_t for 5 randomly chosen range queries
over the polygonal data from the 2X map. The table also shows the corresponding averages over 75 range queries on the 2X map. Note that T_t is consistently more than T_s in all these cases, and that this gap will be larger for other parallel computers such as the CM-5 and IBM SP-2, as t_s is substantially higher for these machines. This result is consistent with the analysis shown in Equation 7. From this we conclude that it is not desirable to transfer the complete polygon
data between processors at run time. Instead, only the polygon IDs should be transferred at run time.
This is facilitated by selectively duplicating the polygon data at some processors. In the rest of the
experiments, work transfers are always done by transferring the object IDs unless otherwise stated.
Table 4: Cost of transfer (T_t) vs. the cost of solving the problem at a single processor (T_s); costs in seconds, averaged over 75 queries.
5.2 Declustering for DLB Methods
In this experiment, the effect of chunking based on systematic declustering is compared to that of random
declustering for the DLB method. We used GRR as the DLB method and compared the random, similarity-graph, and LLB methods of declustering. The dynamic data is declustered into chunks of a fixed number of polygons.
Figure 17 shows the experimental results for t_s = 0 and t_s = 100 μsec. The x-axis gives the number of processors and the y-axis gives the average speedup over 75 queries.
Figure 17: Speedups for different declustering methods ("Sim", "LLB", "Rand") for the GRR method (Map = 4X): (a) t_s = 0, (b) t_s = 100.
From this data, it is clear that random declustering of data is not as effective as systematic declustering for achieving a good load balance for the GIS-range-query problem. Moreover, the ordering of the methods remains the same as in the static case. This shows that systematic declustering of data improves the load balance. Also, the load balance can be improved by using more information during the declustering phase.
5.3 Evaluation of the Granularity of Work-Allocation in DLB
We compare the effect of different chunk sizes, with the number of polygons per chunk ranging from 1 to
30, using GRR as the DLB method and similarity-graph-100 as the declustering method. In addition,
we compared the effect of increasing the value of t_s while decreasing the chunk size (i.e., increasing the number of chunks). The experiment is conducted using the 4X map, with the replicated data being 40% of the total data.
Chunks of single polygons usually result in the best possible load-balance, but this also results
in maximum overhead due to the increased number of chunks. Figure 18 shows the graph for this
experiment. When the t s value is low, chunks of single polygons result in the best possible speedups.
As the value of t s is increased, the maximum speedup is achieved for some chunk size other than single
polygon chunks. This is due to the increased communication overhead, as the increased number of
chunks requires the exchange of more messages between processors. Note that t_s ≈ 100 μsec is a typical value seen in a MIMD message-passing computer like the IBM SP-2.
5.4 Effect of the Pool Size
We evaluate the effect of the pool size, using the pool-based method, for a varying number of processors
and varying data files. The number of processors is varied from 4 to 16 and the data files are varied
from 1X map to 4X map. The pool size is varied from 0 to 100% of the total data. Note that a 0%
Polygons per Chunk
"t_s=0.1"
"t_s=10.1"
"t_s=100.1"
Figure
18: Granularity of Work-Transfers
pool refers to the static declustering with no DLB. Work transfers are done by transferring the polygon IDs, and we used one polygon per chunk.
Figure 19: Speedups for different pool sizes for the pool-based method: (a) speedups for Map 4X for different numbers of processors; (b) speedups for P = 16 for the 1X, 2X, and 4X maps.
Figure 19 shows the results of this experiment. The x-axis gives the size of the pool as a percentage of the total data, and the y-axis gives the average speedups over 75 range queries. As expected, the speedups increase as we increase the pool size, up to a point, and then they start decreasing. The initial increase in speedup may be due to the improved load balance. The decrease in speedup after achieving a maximum value is due to the non-parallelizable overhead of the approximate filtering, as shown in Equation 8. Note that this decrease is greater as P increases; this is due to the increase in non-parallelizable overhead with increasing P. The maximum speedup occurs at different pool sizes for
different number of processors and for different data sets. Also note that the maximum speedups occur
in the ranges predicted in Table 3.
5.5 Comparison of DLB methods
We compare the performance of the three DLB methods (GRR, ARR, and PBM) for t_s = 0.1 and t_s = 100.1. The number of processors is varied from 4 to 16 and the 4X map is used as the input data. The number of polygons per chunk is 1 for t_s = 0.1 and larger for t_s = 100.1. Work is transferred by transferring the polygon IDs, and similarity-graph-100 is used for declustering the data.
Figure 20: Speedups for different DLB methods (PBM, ARR, GRR).
Figure 20 shows the speedups for these three methods. The x-axis gives the number of processors, and the y-axis gives the average speedups over 75 range queries. For t_s = 0.1, both GRR and ARR have comparable performance, while PBM performs better than these two methods, as shown in Figure 20(a). However, GRR has inferior speedups relative to the other methods for t_s = 100.1, as shown in Figure 20(b).
This may be attributed to the centralized overhead of maintaining the list of possible donor processors
in GRR.
5.6 Effectiveness of Dynamic Load-Balancing
The effectiveness of the DLB methods in achieving a good load balance is shown in Table 5. The data is collected with P = 16, with a 40% pool for PBM and 40% replicated data for GRR and ARR. Similarity-Graph-100 is used as the declustering method, with one polygon per chunk for the shared data. Work transfers are done by transferring the polygon IDs. The data shown in Table 5 is reported as Mean ± SD over the 75 range queries used in our experiment. The column Avg. Static gives the static execution time averaged over 16 processors and 75 range queries. The column Max. Static gives the maximum static execution time over 16 processors, averaged over 75 range queries. Similarly, Avg. Total is the average total time over 16 processors for 75 queries, and Max. Total is the total parallel run time averaged over 75 range queries. In this experiment, we observe that the DLB methods achieve a good load balance (i.e., a small percentage difference between the average and the maximum total time), even though there is a very high load imbalance after the static part.
Table 5: Performance evaluation of DLB (times in seconds, Mean ± SD).
Method Avg. Static Max. Static Avg. Total Max. Total Speedup
PBM 0.0307 ± 0.004 0.0492 ± 0.007 0.0484 ± 0.006 0.0518 ± 0.007 14.04 ± 0.69
GRR 0.0329 ± 0.004 0.0518 ± 0.008 0.0543 ± 0.008 0.0557 ± 0.008 13.07 ± 0.64
ARR 0.0241 ± 0.003 0.0422 ± 0.006 0.0508 ± 0.006 0.0556 ± 0.006 13.03 ± 0.59
6 Conclusions and Future Work
Data partitioning is an effective approach towards achieving high performance in GIS. We parallelize the GIS-range-query problem using data-partitioning and dynamic load-balancing techniques. Partitioning
extended spatial-data maps is difficult, due to the varying sizes and extents of the polygons and the
difficulty of estimating the work load. Hence, special techniques are needed to parallelize the GIS-range-
query problem.
We identify the main issues in declustering collections of extended spatial objects like chains of line segments and polygons. We experimentally evaluate several alternatives for each of these issues on a
distributed memory MIMD machine for the range-query operation. Experimental results show that the
number of edges is a better load estimator than the area of the object. The bounding box approximator
for the spatial extent of an object gives more information than the point estimator. But going to a higher
order estimator like multiple bounding boxes is not practical as these estimators are expensive to obtain
and are expensive to use for declustering extended spatial data. The results also show that, among the
static declustering methods, similarity-graph and distribution based methods outperform other static
declustering methods.
We also show that the performance of DLB methods can be further improved by using the declustering
methods for determining the subsets of polygons to be transferred during run-time. In the proposed
approach, we use the ideas of declustering in a hierarchical fashion, increasing the load balance over
purely static methods, and decreasing the communication cost over purely dynamic methods.
In our future work, we plan to scale up our methods to larger numbers of processors, larger maps, and larger sets of queries. We also plan to extend our work to map-overlay problems and other computationally
intensive HP-GIS operations. Another major effort would focus on high performance techniques for
secondary and tertiary-storage terrain mapping and the effect of I/O (e.g. swapping) and indexing
methods. Finally, we would like to evaluate these techniques on the workstation clusters which are
common in many GIS applications.
Acknowledgments
This work is sponsored by the Army High Performance Computing Research Center under the auspices
of the Department of the Army, Army Research Laboratory cooperative agreement number DAAH04-
95-2-0003/contract number DAAH04-95-C-0008, ARO contract number DA/DAAH04-95-1-0538, the
content of which does not necessarily reflect the position or the policy of the government, and no official
endorsement should be inferred. This work is also supported by the Federal Highway Authority and
the Minnesota Department of Transportation. We would like to thank the AHPCRC, University of Minnesota, and the Pittsburgh Supercomputing Center for providing us with access to the Cray T3D. We would also like to thank Minesh Amin and Christiane McCarthy for improving the readability and
technical accuracy of this paper.
--R
Page. http://dis.
Parallel Computational Geometry.
Parallel Computational Geometry.
Experiments in the Measurement of Spatial Association Using a Parallel Supercomputer.
Efficient Plane Sweeping in Parallel.
Algorithms for Reporting and Counting Geometric Intersections.
Parallel Processing of Spatial Data for Terrain Characterization.
Disk Allocation for Product Files on Multiple Disk Systems.
Allocation Methods Using Error Correcting Codes.
The Idea of Declustering and its Applications.
Dynamic processor self-scheduling for general parallel nested loops
Uniform Grids: A Technique for Intersection Detection on Serial and Parallel Machines.
Computers and Intractability: A Guide to the Theory of NP-Completeness
A Dynamic Index Structure for Spatial Searching.
Visualizing Large Data Sets in the Earth Sciences.
Data Parallel R-Tree Algorithms
Data Parallel Spatial Join Algorithms.
Performance of Data-Parallel Spatial Operations
Linear Clustering of Objects with Multiple Attributes.
Parallel R-Trees
Allocating independent subtasks on parallel processors.
Introduction to Parallel Computing: Design and Analysis of Algorithms.
Scalable load balancing techniques for parallel computers.
An Analysis and Algorithm for Polygon Clipping.
A Similarity Graph-Based Approach to Declustering Problem and its Applications
Range Search In Parallel Using Distributed Data Structures.
A Generic Solution to Polygon Clipping.
A Parallel Intersection Algorithm for Vector Polygon Overlay.
Allocation Methods for Parallelizing Grid Files.
--TR
--CTR
Shashi Shekhar , Sivakumar Ravada , Vipin Kumar , Douglas Chubb , Greg Turner, Parallelizing a GIS on a Shared Address Space Architecture, Computer, v.29 n.12, p.42-48, December 1996
Mehmet Koyutrk , Cevdet Aykanat, Iterative-improvement-based declustering heuristics for multi-disk databases, Information Systems, v.30 n.1, p.47-70, March 2005
Thu D. Nguyen , John Zahorjan, Scheduling policies to support distributed 3D multimedia applications, ACM SIGMETRICS Performance Evaluation Review, v.26 n.1, p.244-253, June 1998
Jignesh M. Patel , David J. DeWitt, Clone join and shadow join: two parallel spatial join algorithms, Proceedings of the 8th ACM international symposium on Advances in geographic information systems, p.54-61, November 06-11, 2000, Washington, D.C., United States
N. An , R. Lu , L. Qian , A. Sivasubramaniam , T. Keefe, Storing spatial data on a network of workstations, Cluster Computing, v.2 n.4, p.259-270, 1999
Hakan Ferhatosmanoglu , Aravind Ramachandran , Divyakant Agrawal , Amr El Abbadi, Data space mapping for efficient I/O in large multi-dimensional databases, Information Systems, v.32 n.1, p.83-103, March, 2007
Hakan Ferhatosmanoglu , Ali aman Tosun , Guadalupe Canahuate , Aravind Ramachandran, Efficient parallel processing of range queries through replicated declustering, Distributed and Parallel Databases, v.20 n.2, p.117-147, September 2006 | high performance;geographic information systems;polygon clipping;declustering methods;range query;load-balancing |
627934 | Navigational Accesses in a Temporal Object Model. | AbstractA considerable research effort has been devoted in past years to query languages for temporal data in the context of both the relational and the object-oriented model. Object-oriented databases provide a navigational approach for data access based on object references. In this paper, we investigate the navigational approach to querying object-oriented databases. We formally define the notion of temporal path expression, and we address on a formal basis issues related to the correctness of such expressions. In particular, we focus on static analysis and give a set of conditions ensuring that an expression always results in a correct access at runtime. | Introduction
The importance of temporal data management has long been recognized by the database community
and many techniques for modeling and managing temporal data have been introduced [19, 21]. Most
research in temporal databases has been developed in the framework of the relational data model
[8, 10, 17, 20]. However, also temporal object-oriented databases have recently received increasing
attention, and several object-oriented data models have been proposed [16]. The reason for this
interest is that most applications for which object-oriented database management systems are
expected to provide support, exhibit some form of temporality. Examples are engineering databases,
multimedia systems, and office information systems. However, as pointed out also by Snodgrass in
[16], in contrast to temporal relational data models, the specification of temporal object-oriented
data models is in most cases informal. To overcome this drawback, we have proposed in [1] a
formal temporal object-oriented data model, and we have addressed on this formal basis several
issues deriving from the introduction of time in an object-oriented context.
A considerable research effort on temporal databases has been devoted to the design of temporal
query languages, for both the relational and the object-oriented data model. In particular, temporal
extensions of relational algebra [8, 18], relational calculus [18], and relational commercial query
languages such as QUEL [15] and SQL [13, 17] have been proposed. In those temporal query
languages, a relevant operation is represented by the . Indeed, in temporal relational
query languages the join relational operator has two different flavors, referred to as 2 and
in [8]. Two different join operators are required because in temporal data models two kinds
of values (ordinary and temporal) are represented. 2-join plays the same role as in nontemporal
relational data models, and allows to compare only attribute values that occur at the same point
in time, whereas temporal join allows to impose conditions on times associated to tuples.
Several temporal object-oriented query languages have also been proposed [16]. Many of these
query languages are extensions of relational query languages, like DAPLEX [14], QUEL [12] and
SQL [9], rather than of existing nontemporal object-oriented query languages. It is however important
to note that object-oriented database systems [3] support a approach for data
access, in addition to traditional query language constructs available in relational database systems.
This modality must be taken into account in designing temporal object-oriented query languages.
The navigational approach is based on object identifiers and aggregation relationships: given an
OID, the system directly accesses the corresponding object and navigates through objects referred
to by its components. This access modality can be combined with a classical (e.g. SQL-like) ac-
cess. Thus, the conditions in a query are imposed on nested attributes of the hierarchy rooted
at the object under examination. allow to conveniently describe joins, aiming at
getting a component from an object. In an object-oriented query language a distinction can be
made between , corresponding to the hierarchical structure of objects, and
, analogous to the relational ones, explicitly comparing two objects. While issues related to
explicit joins are quite similar to those arising in the relational context, implicit joins poses some
new problems. Indeed, when the value of an object attribute is the identifier of another object, the
identifier can be seen as a pointer to the referred object. Obviously, for the access to be correct,
that pointer must not be .
In this paper, we investigate issues related to implicit joins and navigational accesses in a
temporal context. Therefore, the goal of this paper is not to propose a new temporal object-oriented
query language, rather is to investigate the impact of time on the peculiar features of object-oriented
query languages. To this purpose, we first introduce a formal notion of temporal path expression.
Temporal path expressions are obtained as an extension of classical path expressions of object-oriented
languages, in that for each attribute access a time can be specified, in addition to the
Temporal path expressions are obtained as an extension of classical path expressions of object-oriented languages, in that for each attribute access a time can be specified, in addition to the
attribute name. The time can be expressed either explicitly, by specifying a time instant or a set of
time instants, or implicitly, by means of a formalism to symbolically denote sets of time instants.
Then, we investigate the notion of path expression correctness. As remarked by Clifford and Croker
in [8], a temporal model must enforce referential integrity constraints with respect to the temporal dimension. For example, the information that an employee worked in a division at time t is correct if both the employee and the division existed in the database at time t. This means that some correctness conditions should be imposed on a database to ensure it satisfies temporal constraints.
We have proposed a notion of consistency for a temporal object-oriented database [1]. However, the
consistency of the database alone is not enough to ensure that all the navigations through objects
are correct. Thus, in this paper we investigate the issue of correctness of navigational accesses, and
whether and how correctness can be statically verified.
To the best of our knowledge, this is the first extensive investigation concerning navigational accesses
in a temporal context and addressing the problem of a static analysis of path expressions. One of
the few papers considering navigational access is the one by Cheng and Gadia [7]. Their language
OOTempSQL provides a sublanguage for associative navigation relying on notions very similar to
our concept of temporal expression. However, neither a formal semantics is given for the language
nor correctness conditions have been stated.
The remainder of the paper is organized as follows. Section 2 presents a brief overview of
the Chimera data model. Section 3 describes the formalism we use to symbolically denote a
set of time instants in a temporal path expression. Section 4 formally introduces temporal path
expressions and addresses the problem of path expression correctness. Section 5 deals with the
static analysis of path expressions, whereas path expression equality is considered in Section 6.
Finally, Section 7 concludes the paper. The paper includes two appendixes reporting the syntax
of the language we use to specify boolean expressions and a sketch of the proofs of main results,
respectively.
2 The Reference Temporal Object-Oriented Model
T_Chimera is the temporal extension of the Chimera object-oriented data model [11]. It
provides all concepts commonly ascribed to object-oriented data models, such as: object identity,
complex objects, user-defined operations, classes and inheritance. In the following, we denote with OI a set of object identifiers, with CI a set of class identifiers, that is, class names, and with AN a set of attribute names. Moreover, V denotes the set of legal values of the model.
The notion of value type is supported. The existence of a finite set of basic predefined value types is postulated, containing the types integer, real, bool, character and string. The type time is also a predefined value type, used in the definition of temporal types. The model supports structured types such as sets, lists and records, and allows the use of class names in the definition of structured types. In addition to the above-mentioned nontemporal types, it supports a collection of temporal types, to handle in a uniform way temporal and nontemporal domains. For each type T, a corresponding temporal type, temporal(T), is defined. Intuitively, instances of type temporal(T) are partial functions from instances of type time to instances of type T. Temporal types can be used in the definition of structured types. A function is defined which takes as argument a temporal type temporal(T) and returns the corresponding static type T; this function is the identity on nontemporal types.
We assume as the domain of the type time the set {0, 1, . . .}, isomorphic to the set of natural numbers IN. Symbol '0' denotes the relative beginning, while now denotes the
c c
OI
instance
member
class
Example 1
I
temporal
dom a; c a
c
lifespan
A
dom
dom
dom
dom
dom
lifespan
valid
signature
implementation
lifespan
temporal
immutable
nontemporal
Suppose that . The following are examples of Chimera classes:
class ,
class , with = , subclass of , such that:
According to the usual terminology, an object is an of a class , if is the most specific class, in the
inheritance hierarchy, to which the object belongs. If an object is an instance of a class it is also a of all the
superclasses of this class.
time
employee [10,now]
employee name salary status division manager
name employee temporal(string)
salary employee temporal(integer)
status employee string
division employee temporal(string)
manager employee temporal manager
manager [10,now] employee
manager employee dependents official car
current time. An interval, denoted [t1, t2], is a set of consecutive time instants, including all the time instants between t1 and t2, t1 and t2 included. A single time instant t can be represented as the time interval [t, t]; a distinguished symbol denotes the null interval. The time dimension considered by our model is valid time.
A class consists of two components: the signature, containing all the information for the use of the class and its instances, and the implementation, providing an implementation for the signature. A lifespan is associated with each class, representing the time interval during which the class has existed. We assume the lifespan of a class to be contiguous. The class signature contains information about the attributes (name and domain) and the methods (name and type of input and output parameters) of its instances. Attributes can be either temporal, immutable or nontemporal. A temporal (or historical) attribute is an attribute whose value may change over time, and whose values at different times are recorded in the database. An immutable attribute is an attribute whose value cannot be modified during the object lifetime, whereas a nontemporal (or static) attribute is an attribute whose value can change over time, but whose past values are not meaningful for the application at hand, and are thus not recorded in the database. Immutable attributes can be regarded as a particular case of temporal ones, since their
value is a constant function from a temporal domain. Temporal attributes have temporal types as
domains and their values are functions from the temporal domain to the set of legal values
for the attribute. Throughout the paper we represent the value of a temporal attribute of type temporal(T) as a set of pairs {<I_1, v_1>, ..., <I_n, v_n>}, where v_1, ..., v_n are legal values for type T and I_1, ..., I_n are time intervals, such that the attribute assumes the value v_j for each time instant in I_j, 1 ≤ j ≤ n. Given a class c, attributes(c) denotes the set of attributes of instances of that class, whereas dom(a, c) denotes the domain of attribute a in class c.
Finally, since the objects belonging to a class vary over time, each class maintains the history of the objects instances or members of the class over time. The set of objects members of a class changes dynamically over time. Thus, to represent the extension of classes, we introduce a function assigning an extent to each class at each instant t: the extent of a class c at time t is the set of the identifiers of the objects that, at time t, belong to c either as instances or as members.
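To make the representation concrete, here is a small sketch (our own illustration, not the paper's formalism) of a temporal attribute value stored as a set of <interval, value> pairs, with a lookup of the value holding at a given instant; the salary history mirrors the one used in the paper's examples, and NOW is an arbitrary sentinel instant.

# Sketch: a temporal value as a set of <interval, value> pairs.

NOW = 10**9

class TemporalValue:
    def __init__(self, pairs):
        # pairs: list of ((start, end), value) with pairwise-disjoint intervals
        self.pairs = list(pairs)

    def value_at(self, t):
        """Return the value holding at instant t, or None if undefined."""
        for (start, end), v in self.pairs:
            if start <= t <= end:
                return v
        return None

    def defined_at(self, t):
        return self.value_at(t) is not None

if __name__ == "__main__":
    salary = TemporalValue([((20, 60), "15k"), ((61, NOW), "30k")])
    print(salary.value_at(45), salary.value_at(70), salary.defined_at(5))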
object
Example 2
fh ig
f fh ig fh ig
fh ig
fh ig
dom
class history
lifespan
class history
lifespan
class history
lifespan
class history
Instances of class contain an immutable attribute , whose value never changes during
the instance lifespans, a static attribute , for which we do not keep track of the history of
changes, and three temporal attributes , and , for which we record the
whole history of changes. Class is a subclass of with the additional attributes
and .
Consider the classes of Example 1, and suppose that . The following
are examples of Chimera objects:
dependents manager temporal(set-of(employee))
official car manager string
manager
employee name
status
salary division manager
manager employee
dependents official car
now
name [20,now] Alan Smith salary [20,60],15k [61,now],30k
status full-time division [20,100],'Disks' [101,now],`Printers'
manager [20,49],i [50,now],i
[20,now],employee
name [50,now] Mary Dole salary [50,now],50k
status full-time division [50,now],'Printers'
manager [50,now],i dependents [50,100]
[50,now],manager
The notion of Chimera is formalized by the following definition.
An object is a 4-tuple ( - ), where
is the oid of ;
( ) is the lifespan of ;
is a record value are the names of the attributes
of , and . are their corresponding values;
is a set . , where . are time inter-
vals, . are class identifiers, such that is the class identifier of the most specific
class to which belongs in , 1 .
Given an oid , we make use of function : to refer the class the object
denoted by belongs at time , that is, ( . Note that
( ) is defined for any .
c
3 Temporal Expressions
The consistency of an object is checked only with respect to its most specific class. If an object is consistent with
respect to its most specific class, it is consistent with respect to all its superclasses.
Two intervals are considered disjoint if they cannot be collapsed into a single one (note that [1,2] and [3,4] are
not disjoint).
We require that each object is a consistent instance of all the classes it belongs to. Our notion
of consistency keeps into account that, in a temporal context, both the object state and the class
an object belongs to change over time. Therefore, verifying the consistency of an object requires
two steps. First the set of attributes characterizing the object for each instant of its lifespan must
be determined. Then, the correctness of their values, with respect to the most specific class the
object belongs at time , must be verified. Note that, if we consider an instant lesser than the
current time, we are able to identify only the temporal attributes characterizing the object at time
, since for static attributes we record only their current values. Thus, for instants lesser than the
current time, it only makes sense to check the correctness of the values of the temporal attributes of
the objects. Moreover, at the current time also the correctness of the values of the static attributes,
with respect to the most specific class the object currently belongs, must be checked. We refer the
interested reader to [1] for further details.
Generally, in a nontemporal object-oriented database a query selects objects based on the evaluation
of boolean expressions, involving attribute values, method invocation results, and so on. In a
temporal context, queries must allow the retrieval of objects satisfying a given boolean expression
(or a set of boolean expressions) for a specific set of time instants, that can be defined either
explicitly or implicitly. For instance, the query "select the employees that earned more than their managers in the time interval [10,100]" is an example of a query in which the set of time instants in which the query condition must be verified is given explicitly. By contrast, the query "select the employees that work in the Printers department when it was headed by Mary Dole" is an example of a query in which the set of time instants is implicitly specified.
To implicitly represent the time with respect to which a boolean expression must be evaluated,
we need a formalism to symbolically denote a set of time instants. The symbolic formalism we
use is similar to the one proposed by Gadia and Nair in [10], based on the notion of temporal expression. A temporal expression is a symbolic representation of a set of time instants. The main difference between our notion of temporal expression and the one in [10] is that we allow the use of the selection operators first(), last(), slice(), first_instant(), last_instant() and inst_slice() in the definition of temporal expressions (see Definition 3, introduced later on).
In the following, we use a set of disjoint time intervals as a compact notation for the set of natural numbers included in these intervals. The operations of union, intersection and difference on such sets have the usual semantics of set operations. Moreover, membership t ∈ Θ is true if t is one of the natural numbers represented by Θ. Finally, we define a projection operation Π(Θ, n) that takes as input a set of disjoint time intervals Θ and a natural number n. Function Π() orders the elements in Θ in increasing order with respect to their upper bound, and returns the n-th interval in the ordering if n is lesser than or equal to the cardinality of Θ; it returns the null interval otherwise.
2and
and
Example 4
Example 5
be
be be be
be ; be
te te te te te te te te te
te first te last te slice te; n first instant te last instant te
inst slice te; n
first ?
te
te / te
/ be be be
now
salary 20k division
salary 40k salary
salary 40k
status 'full-time'
now
30k salary 40k manager.name = 'Mary Dole'
The temporal interpretation of the boolean expression
, when the above expression is evaluated on object of Example 2, is ,
whereas the temporal interpretation of is the empty set, as attribute has
never reached this value during lifespan. Note that the temporal interpretation of ,
when it is evaluated on object , is . Finally note that the interpretation of
is the set , on both and , as attribute is static. This does
not imply that attribute has assumed a value different from for the instants
lesser then the current time. However, since we record the value of the attribute only at the current
time, we are able to check the truth of the condition only at .
, are examples of temporal expressions
ample,
In the following denotes the set of boolean expressions. Boolean expressions are specified
using the language described in Appendix A.
Before formally defining the notion of temporal expression, we need to introduce the following
definition.
Let be a boolean expression. The temporal interpretation
of (written is the set of instants in which is true.
The set of Chimera temporal expressions is recursively
defined as follows:
the temporal interpretation of a boolean expression is a temporal expression:
if and are temporal expressions, then , , , are temporal
if is a temporal expression, then (
are temporal expressions.
Each temporal expression uniquely denotes a set of time intervals. The semantics of a temporal
expression, that is, the set of intervals it denotes, is formalized by means of function :
defined as follows.
Let be a temporal expression. The semantics
of , denoted as ( ), is defined as follows:
c
and
or
Example 6
4 Temporal Path Expressions
The usefulness of instant-valued temporal expressions will be made clear in the following section.
Consider objects and of Example 2:
when the expression is
evaluated on . Similarly,
.
instant-valued
manager.salary 40k [101,now] [50,now] [50,now]
manager.salary 40k [50,50] division
now
first te / te ; te
last te /
first instant te ; min / te ; te
last instant te
inst slice te;
first instant ? / last
te
is the n-th instant in . are time
intervals such that (
In the following we refer to temporal expressions denoting a single time instant as instant-valued temporal expressions. Formally, a temporal expression is an instant-valued temporal expression if and only if its semantics consists of a single time instant.
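The following sketch illustrates, under our own simplifying assumptions, how the semantics of temporal expressions can be computed over sets of disjoint intervals, including a few of the selection operators named above; intervals are assumed to be small and bounded so that they can be expanded into explicit instants.

# Sketch: interval-set semantics and selection operators for temporal expressions.

def instants(intervals):
    """Expand a (small, bounded) set of intervals into the set of its instants."""
    out = set()
    for s, e in intervals:
        out.update(range(s, e + 1))
    return out

def normalize(points):
    """Pack a set of instants back into maximal disjoint intervals."""
    out, run = [], None
    for t in sorted(points):
        if run is not None and t == run[1] + 1:
            run[1] = t
        else:
            if run is not None:
                out.append(tuple(run))
            run = [t, t]
    if run is not None:
        out.append(tuple(run))
    return out

def union(a, b):      return normalize(instants(a) | instants(b))
def intersect(a, b):  return normalize(instants(a) & instants(b))
def difference(a, b): return normalize(instants(a) - instants(b))

def first(intervals): return intervals[:1]      # first interval
def last(intervals):  return intervals[-1:]     # last interval

def slice_n(intervals, n):
    """n-th interval, ordering intervals by their upper bound."""
    ordered = sorted(intervals, key=lambda iv: iv[1])
    return [ordered[n - 1]] if 1 <= n <= len(ordered) else []

def inst_slice(intervals, n):
    """n-th instant of the expression, as a degenerate interval."""
    pts = sorted(instants(intervals))
    return [(pts[n - 1], pts[n - 1])] if 1 <= n <= len(pts) else []

if __name__ == "__main__":
    a, b = [(20, 60), (80, 90)], [(50, 85)]
    print(intersect(a, b), slice_n(a, 2), inst_slice(a, 3))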
Temporal path expressions are obtained as an extension of classical path expressions of object-oriented
languages [4], in that for each attribute access a time can be specified, in addition to the
attribute name. In such a way, the value of the attribute at a specified time is denoted. The
time can be expressed either explicitly, by specifying a time instant or a set of time instants, or
implicitly, by means of a temporal expression. The language to express path expressions allows
the nesting of attribute accesses expressed by means of postfix dot notation, but the restriction is
imposed that all attribute accesses in the path expression, but the last one, are qualified with a time
instant, and not with a set of time intervals, thus denoting a nontemporal value, on which a further
attribute access can be specified. Alternatively, if a set of time intervals were specified to qualify
an intermediate attribute access in the path expression, that set would be seen as the set of time
instants belonging to the intervals, so that the access would denote a set of nontemporal values,
on which further accesses can be evaluated. The following subsections give a formal treatment of
path expressions in a temporal context, and address the problem of path expression semantics and
correctness.
4.2 Semantics
Example 7
employee
division
salary 'Mary Dole'
manager
ar
e a te
e:a te
e a t
e a e:a
e a te
e:a te
e a
e:a
e a e:a all
X:
last interval
first instant
Consider the classes of Example 1, and let be a variable of type . The
following are examples of path expressions:
In this section we formally define the set of Chimera path expressions. In the following we denote
with a set of object-denoting variables.
The set of Chimera path expressions is recursively defined
as follows:
a simple path expression is defined as follows:
if is an object-denoting variable, then is a simple path expression;
if is a simple path expression, is an attribute name and is an
instant-valued temporal expression, then is a simple path expression;
if is a simple path expression, is an attribute name and is a time
instant, then is a simple path expression;
if is a simple path expression and is an attribute name, then is a simple
path expression;
a terminal path expression is defined as follows:
if is a simple path expression, is an attribute name and is a temporal
expression, then is a terminal path expression;
if is a simple path expression, is an attribute name and 7 2 is
a set of time intervals, then 7 is a terminal path expression;
if is a simple path expression, is an attribute name, then is a terminal
path expression;
simple path expressions and terminal path expressions are path expressions.
The semantics of a path expression (i.e., the value denoted by it) can only be specified starting
from an oid-assignment, that is, a function assigning oids to object-denoting variables. However,
the value denoted by a path expression depends on the temporal specifications it contains.
Consider first the case of a path expression for which a time instant is specified (either implicitly
or explicitly). If the path expression is of the form , that is, a temporal specification occurs
in the terminal part of the path expression, then it denotes the nontemporal value ( ), where
T t now
manager
Definition 6 (Time of a Path Expression)
Example 8
strong
time
Referring to Example 7:
For the sake of simplicity, here and in the following we represent the value of a static attribute of type as a
partial function: , defined only at = .
e e:a
e:a
o:v:a now
a a now
e:a e e:a now
e:a
o:v:a
o:v:a
e:a
e:a all o:v:a
e
e
ar e now
X:
first instant
is the object identified by the oid to which evaluates. Consider now an expression , that is, an
expression with no temporal specification in the terminal part. The most intuitive interpretation
is that, if no time is specified, the expression is evaluated at the current time, that is, is simply
a shorthand for . However, consider a sequence of attribute accesses composing a path,
that is, suppose = . Let be the object identified by the oid to which evaluates.
Because of the consistency of the database, is certainly defined at time and so its attribute
is; by contrast, could be undefined at time . Thus, when a path expression contains a
temporal qualifier, it is evaluated at the time denoted by the qualifier till another explicit temporal
qualifier is encountered. Therefore the path expression
is equivalent to . A path expression
, where does not contain any temporal specification, is simply a shorthand for .
Consider now the terminal path expression 7, where 7 is a set of time intervals (either
explicitly or implicitly denoted). The value associated with this expression is , that is, the
restriction of the function which is the value of to the time intervals in 7. However two
different interpretations of this path expression are possible: according to the interpretation
the denoted function must be defined for each 7, otherwise the path expression denotes no
value. According to the interpretation, the path expression denotes a function which can be
partial on 7. We assume that the default interpretation of a path expression is the weak one.
However, the strong interpretation can also be required by using the alternative syntax 7. A
(terminal) path expression denotes the temporal value whenever it is defined.
We now formalize these notions. First we introduce the associated with a path expression
, denoted as 0( ).
Let be a simple path expression. Its time,
denoted as 0( ), is a time instant in recursively defined as follows:
is an instant-valued temporal expression and
Consider now the restriction of a temporal value to a set of time intervals, as specified by the
following definitions.
Consider the path expressions of Example 7,and suppose that , where is
the object of Example 2:
division
salary 'Mary Dole' [50,60],15k
[61,now],30k
manager
Definition 7 (Temporal Value Weak Restriction)
Definition 8 (Temporal Value Strong Restriction)
Definition 9 (Path Expression Semantics)
Example 9
temporal T
undefined
temporal T v
undefined
e
ar
te / te t; t e:a te e :v:a t
te e:a te e :v:a
te e:a te e :v:a
e:a all e :v:a
X:
last interval
first instant
Let be temporal value of type ( ),
and let 7 2 be a set of time time intervals. The weak restriction of to 7, denoted
as , is a function: such that:
otherwise
Let be a temporal value of type
be a set of time intervals. The strong restriction of
to 7, denoted as , is a function such that:
otherwise
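A small sketch of the difference between the weak and the strong restriction of a temporal value to a set of intervals, assuming the value is given as a function from instants to values (None standing for "undefined"); this is an illustration of the intent, not the paper's formal definition.

# Sketch: weak vs. strong restriction of a temporal value to a set of intervals.

def weak_restrict(value_at, theta):
    """Keep the instants of theta where the temporal value is defined."""
    out = {}
    for s, e in theta:
        for t in range(s, e + 1):
            v = value_at(t)
            if v is not None:
                out[t] = v
    return out

def strong_restrict(value_at, theta):
    """Defined only if the temporal value is defined at every instant of theta."""
    out = weak_restrict(value_at, theta)
    total = sum(e - s + 1 for s, e in theta)
    return out if len(out) == total else None

if __name__ == "__main__":
    salary_at = lambda t: "15k" if 20 <= t <= 60 else ("30k" if 61 <= t <= 100 else None)
    print(len(weak_restrict(salary_at, [(50, 120)])))      # partially defined
    print(strong_restrict(salary_at, [(50, 120)]))         # None under the strong reading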
We are now ready to define the semantics (the value denoted by) of a path expression .
Let : be an oid-assignment. The
semantics of a path expression under oid-assignment , denoted as recursively defined
as follows:
if , it is the oid assigned to variable by ;
if ,
if is an instant-valued temporal expression, and (
if is a temporal expression,
if is a temporal expression,
4.3
Proposition 1
Proposition 2
Here and in the following, given a function , ( ) denotes the set of values on which is defined.
e:a all
e:a dom e:a e:a e:a e
e:a
dom
e
ar
e t a
t a t
dom #
Let , , be a path expression. Under the weak semantics,
the following relationship holds:
By contrast, under the strong semantics, relationship (*) holds only if:
A path expression is correct for an oid-
assignment if all the following conditions are satisfied:
for each :
Note that, according to Definition 9, simply a shorthand for
Similarly, equivalent to Therefore in the
remainder of the discussion we do not explicitly consider these cases.
The following proposition relates the semantics of a path expression on a set of time intervals,
either explicitly or implicitly denoted, to the semantics of the path expression on the time instants
composing the set. This proposition thus characterizes nontemporal valued terminal expressions
as derivable from temporal valued ones.
In this subsection we focus on conditions ensuring that a path expression is correct, for a given
assignment . Such conditions ensure that defined, that is, it denotes a value. The
correctness conditions depend on the structure of the path expression. Starting from the basis of
the inductive definition of path expressions, the path expression , with , is correct for
, provided that the oid-assignment assigns an oid to variable . Thus, given a path expression
, we must consider oid-assignments that are defined on variables in . Consider now the case
, with time instant (either implicitly or explicitly denoted). For this expression to denote
a value, the object denoted by must exist at time , and must be an attribute of the class to
which that object belongs at ; finally, the value for at time must be stored. For terminal path
expressions, the correctness conditions differ according to the two different interpretations (strong
and weak) of the path expressions. In particular, the strong interpretation requires the conditions
above to be satisfied for all time instants in the specified set of time intervals, whereas the weak
interpretation only requires the existence of at least an instant in the specified set of intervals, in
which the conditions are satisfied.
The following proposition states the correctness conditions for a path expression.
Corollary 1
Proposition 3
employee
division 5
status 'Mary Dole'
division
dom a ; c
A a t i n
A a t i n
A a t i n
c a a A c
n a e t now
A a t a e t now
A a a e now
A a a e now; now
X:
last interval
first instant
A a dom e t
A a dom e
A a all dom e dom X:a t :a
2. , and ;
3.
4. ;
, and conditions 1 3 above hold (with ), or
, and for which conditions 1 3 above hold (with ), or
, and conditions 1 3 above hold (with ).
Let be a path expression. If,
, such that is a static attribute, then is correct only if . Furthermore:
if , and is a static attribute, then is correct only if ;
if , and is a static attribute, then is correct only if ;
if ,and is a static attribute, then is correct only if .
Referring to objects of Example 2, the path expressions of Example 7 are correct,
whereas the following are examples of incorrect path expressions, given variable of type :
Let be a correct path expression. Then:
if , is
correct
if , .
We remark that the condition of Proposition 2 for terminal expressions under strong semantics
does not require that object never migrated during the
intervals in 7, that is, that (
7. By contrast, it requires that for all the classes - to which
has belonged during 7, - contains attribute , that is, (-).
The following corollary states correctness conditions for path expressions containing static attributes
The correctness test for a path expression can be performed in linear time with respect to the
length of the path expression, where the length of a path expression is the number of attribute
accesses (that is, "dots") appearing in it.
The following proposition specifies the domains of temporal values denoted by correct path
expressions. Their knowledge is relevant in order to check the correctness of subsequent uses of
those values, for instance when path expressions are used in comparison formulas.
(R 2a)
5 Static Analysis of Path Expressions
Rule 1
e X:a t:e X V ar
ar
now
The meaning of the inference rule is the following: if the conditions in the rule premises (the upper part of the
rule) are satisfied, then the rule consequence (the lower part of the rule) can be inferred.
In the previous subsection we have stated conditions ensuring that a temporal path expression is
correct, that is, it denotes a value. However, these conditions can only be checked dynamically,
since they depend on the specific object assigned to the variable appearing in the path expression
by the oid-assignment . In this section, we establish conditions ensuring the correctness of a
path expression for any oid-assignment . The relevance of these conditions is that they can be
checked statically, that is, at program (query) compilation time, thus making it possible to ensure, by
means of static checks, that a given path expression will always be correct at run-time. Our approach
is related to static type checking techniques for nontemporal (database) programming languages.
In such database languages, type declarations are exploited to statically check the correctness of a
program, to ensure that no run-time type errors occur [6].
First, consider the problem of determining legal oid-assignments. In a (nontemporal) typed
object-oriented language, a type information is associated with each variable (that is, a type is
declared for the variable). We express the information that type is declared for variable as
. This type declaration implies that at run-time variable can only be instantiated to
an object member of class (that is, instance of or of a subclass of ). Moving to a temporal
context, we need to express information of the form : , to denote that variable can only be
instantiated with an object member of at time , as stated by the following rule.
Let be a path expression. The (temporal) type information : is associated with
in , if has the form and is the type declared for . Formally:
Therefore, given a variable with type information : , an oid-assignment is legal for
if the object assigned to by is an instance of class at time .
Let be a path expression and let : the type
information associated with . An oid-assignment is legal for if ( ) and (
Figure
1 shows the typing rules for path expressions. We denote the type associated with a
path expression as ( ). Rule 1 states that the type of a path expression = is the
type declared for . Rules 2, 3 and 4 deal with path expressions on time instants, thus denoting
nontemporal values, whereas the remaining rules deal with (terminal) path expressions on time
intervals, thus denoting temporal values.
For simplicity, throughout the paper, we have considered static attributes as a particular case
of temporal attributes, whose values are defined only at time instant . To take into account the
distinction between static and temporal attributes, the above rules should be refined. For example,
Rule 2 should be replaced by the following two rules:
(R 2a)
(R 2b)
Figure 1: Temporal path expression typing rules.
Proposition 4
Example 11
Corollary 2
Example 12
Let be a type correct path expression. Then, for any legal
assignment , is correct for . Moreover, if the type deduced for is , then
Referring to classes of Example 1, consider the path expression
, with . The path expression is type correct, and
. Consider the oid assignment such that of Example 2. The oid assignment
is legal, since . However, is undefined, since
Let be a type correct path expression, if , then for
any legal assignment , is correct for .
Referring to classes of Example 1 and to objects of Example 2 it is easy to see that the
path expression is type correct but it is not correct with respect
to the legal oid assignment such that . Indeed,
is undefined, since .
and similar refinements should be made for the other rules. However, there is no way to statically
check whether the only instant denoted by a temporal expression will be the current instant now. Thus, in the
following, we will focus on temporal attributes.
In order to apply the above typing rules, an important information is whether a temporal expression
is instant-valued. In general, this property cannot be statically decided, since it depends
on the content of the database. However, there are some sufficient syntactic conditions on temporal
expressions ensuring their instant-valuedness. In particular, temporal expressions that select a
single instant from a temporal element are instant-valued.
We are now able to introduce the notion of type correctness of a path expression and to relate
this notion to the notion of (dynamic) correctness discussed in Section 4.3.
A path expression is said to be type correct if a type for
it can be deduced according to rules in Figure 1.
By contrast, if a path expression contains the specification of two different time instants,
type correctness does not imply the correctness of for any legal oid-assignment, as shown by the
following example.
For path expressions denoting a temporal value, a similar result can be obtained, under the
weak interpretation, as stated by the following corollary.
As a particular case, note that if = is a type correct path expression,
then for any legal assignment , is correct for . By contrast, under a strong interpretation, type
correctness does not imply the correctness for any legal oid-assignment, as shown by the following
example.
6 Path Expression Equality
Definition 14 (Weak-value Equality)
Example 13
Proposition 5
Referring to path expressions of Example 9:
. ,
. , and this implies
that the two path expressions are also weak value equal.
Let be an oid-assignment. Let and be two path expressions:
is correct only if .
is correct only if
is correct only if and .
The value a path expression denotes can be compared (by means of a comparison operator such
as =, ≠, <, >, etc.) with a value of appropriate type, or with the value denoted by another path
expression. Clearly, for a comparison expression to be correct, several constraints must be satisfied,
involving both the type of the compared values and the time at which the comparison expression
is evaluated. For the sake of simplicity, in this section we focus on the notion of path expression
equality. However, similar considerations apply for other comparison operators.
Path expression equality is formalized by the following definition.
Let : be an oid-assignment. Two
path expressions and are equal
Note that the notion of path expression equality uniformly applies to path expressions denoting
both temporal and static values.
In a temporal context, two further notions of equality can be devised: instantaneous-value
equality and weak-value equality. Two path expressions are instantaneously value equal if there
exists an instant at which they denote the same value. Two path expressions are weakly value
equal if there exist two instants, not necessarily the same, at which they denote the same value.
These notions are formalized by the following definitions.
Let be an oid-assignment. Two path expressions and are instantaneously value
equal if there exists an instant at which they denote the same value.
Let be an oid-assignment. Two path expressions and are weakly value equal
if there exist two instants, not necessarily the same, at which the values they denote are equal.
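To make the different notions concrete, the following sketch compares temporal values represented as dictionaries from instants to values; this representation and the sample salary histories are assumptions made only for illustration.

# Assumed representation: a temporal value is a dict {instant: value}.

def equal(v1, v2):
    """Equality: the two temporal values coincide at every instant."""
    return v1 == v2

def inst_value_equal(v1, v2):
    """Instantaneous-value equality: same value at some common instant."""
    return any(t in v2 and v1[t] == v2[t] for t in v1)

def weak_value_equal(v1, v2):
    """Weak-value equality: some value is taken by both, possibly at different instants."""
    return bool(set(v1.values()) & set(v2.values()))

salary_a = {50: 2000, 51: 2500}
salary_b = {50: 1800, 51: 2500}
salary_c = {52: 2000}
print(equal(salary_a, salary_b))             # False
print(inst_value_equal(salary_a, salary_b))  # True (same value at instant 51)
print(weak_value_equal(salary_a, salary_c))  # True (value 2000, at different instants)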
The above notions of equality obviously make sense for path expressions denoting temporal
values. Also a path expression denoting a static value can be compared under these types of
equalities to path expressions denoting either static or temporal values, but only at the current
time. Therefore, several constraints must be satisfied by the values denoted by two path expressions
compared under one of the above types of equality. These constraints are formalized by the following
proposition.
Definition 15 Restricted Equality
Example 14
Proposition 6
We omit the definitions of restricted instantaneous-value equality and restricted weak-value equality, as they are analogous to that of restricted equality.
The same results hold for restricted equality.
Given two path expressions and :
if , then ;
if , then ;
if , then:
if (resp. , ), then .
We can also be interested in comparing two path expressions, under one of the above notions
of equality, only for a specific set of time instants, either implicitly or explicitly denoted. This
possibility is formalized by the following definition.
be an oid-assignment. Let and
be two path expressions:
that
such that
Note that the conditions stated by Proposition 5 can easily be extended to the case of an
expression involving restricted equalities.
Finally, the following relationships hold between the various types of equality.
Conclusions
--R
Submitted for publication.
Semantic Assumptions and Query Evaluation in
On Understanding Types
An Object-Oriented Model for
The Historical Relational Data Model (hrdm) Reviseted.
In this paper we have addressed the problem of querying a temporal object-oriented database through a navigational approach
This work can be extended in several directions.
available via anonymous ftp from as
HSQL: A Historical Query Language.
The Functional Data Model and the Data Language DAPLEX.
The Temporal Query Language TQUEL.
Temporal Object-Oriented Databases: A Critical Comparison
The TSQL2 Temporal Query Language.
A Generalized Relational Framework for Modeling Temporal Data.
Temporal Relational Data Model.
Temporal Database Bibliography Update.
In the following we give the syntax in BNF form for the language provided by Chimera to define boolean expressions.
by Definition 9
The proposition can be proved by induction on the structure of the path expression.
We now prove the correctness of the terminal expression 7 given
If is a static attribute
We consider each case separately:
First of all we recall that is simply a shorthand for
Let us consider the first item.
Let us consider the second item.
Let us consider the fourth item.
--TR
--CTR
S. Adali , M. L. Sapino , V. S. Subrahmanian, An algebra for creating and querying multimedia presentations, Multimedia Systems, v.8 n.3, p.212-230, October 2000
Salvatore T. March , Charles A. Wood , Gove N. Allen, Research Frontiers in Object Technology, Information Systems Frontiers, v.1 n.1, p.51-74, July 1999 | static analysis and type checking;navigational data accesses;temporal query languages;temporal object-oriented data models |
627971 | Techniques and Systems for Image and Video Retrieval. | AbstractStorage and retrieval of multimedia has become a requirement for many contemporary information systems. These systems need to provide browsing, querying, navigation, and, sometimes, composition capabilities involving various forms of media. In this survey, we review techniques and systems for image and video retrieval. We first look at visual features for image retrieval such as color, texture, shape, and spatial relationships. The indexing techniques are discussed for these features. Nonvisual features include captions, annotations, relational attributes, and structural descriptions. Temporal aspects of video retrieval and video segmentation are discussed next. We review several systems for image and video retrieval including research, commercial, and World Wide Web-based systems. We conclude with an overview of current challenges and future trends for image and video retrieval. | Introduction
The increasing availability of multimedia information combined with the decreasing storage and processing
costs have changed the requirements on information systems drastically. Today, general purpose
database systems are incorporating support for multimedia storage and retrieval, as well as features
which used to be found in specialized imaging systems or multimedia databases.
Increased use of multimedia has important implications for overall information system design regarding
storage, processing, retrieval and transmission. In this paper, we provide an overview of techniques
for convenient access to images and video; in addition, several state-of-the-art systems are sketched.
In the following section we describe what visual and non-visual features are used for image retrieval
and the querying primitives that make use of these features. In Section 3 we focus on techniques specific
to video retrieval. In light of these discussions, we look at some of the currently available image and
video retrieval systems in Section 4. Systems for image retrieval on the World Wide Web are dealt with
separately in subsection 4.3. We discuss future directions and conclude in Section 5.
Research supported in part by NSF grant IRI-9509253.
2 Image Retrieval
Image retrieval is concerned with retrieving images relevant to users' queries from a large collection.
Relevance is determined by the nature of the application. In a fabric image database, relevant images
may be those that match a sample in terms of texture and color. In a news photography database,
date, time and the occasion in which the photograph was taken may be just as important as the actual
visual content. Many relational database systems support fields for binary large objects (BLOBs) and
facilitate access by user-defined attributes such as date, time, media type, image resolution, and source.
On the other hand, content based systems analyze the visual content of images and index extracted
features. We are also seeing a rapid emergence of object oriented and extensible relational database
systems which offer standard database features and support user defined procedures.
2.1 Visual Content Based Image Retrieval
Visual content can be modeled as a hierarchy of abstractions. At the first level are the raw pixels
with color or brightness information. Further processing yields features such as edges, corners, lines,
curves and color regions. A higher abstraction layer may combine and interpret these features as objects
and their attributes. At the highest level are the human level concepts involving one or more objects
and relationships among them. An example concept might be "a person giving a speech." Although
automatic detection and recognition methods are available for certain objects and their attributes, their
effectiveness is highly dependent on image complexity. Most objects, attribute values and high-level
concepts cannot be extracted accurately by automatic methods. In such cases semi-automatic methods
or user-supplied keywords and annotations are employed. In the following we describe the various levels
of visual features and the techniques for handling them.
2.1.1 Visual Features: Color, Texture and Shape
Color plays a significant role in image retrieval. Different color representation schemes include red-green-
blue (RGB), chromaticity and luminance system of CIE (International Commission on Illumination),
hue-saturation-intensity (HSI), and others. The RGB scheme is most commonly used in display devices.
Hence digital images typically employ this format. The HSI scheme more accurately reflects the human
perception of color.
All perceivable colors can be reproduced by a proper combination of red, green and blue components.
A 24-bit-per-pixel RGB color image may have 2^24, or approximately 16.7 million, distinct colors. In order
to reduce the number of colors for efficiency in image processing, colors are quantized with a suitable
algorithm.
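As one possible illustration of such quantization, the sketch below maps each 8-bit-per-channel RGB color to one of 512 bins by dropping low-order bits; the bin count and the bit-truncation scheme are assumptions chosen for simplicity rather than a recommended algorithm.

# Uniform quantization of a 24-bit RGB color by keeping the top `bits` bits per channel.

def quantize_rgb(r, g, b, bits=3):
    """Map an 8-bit-per-channel color to a bin index in [0, 2**(3*bits))."""
    shift = 8 - bits
    return ((r >> shift) << (2 * bits)) | ((g >> shift) << bits) | (b >> shift)

print(quantize_rgb(255, 128, 0))   # bin index of a saturated orange
print(2 ** (3 * 3))                # 512 bins instead of about 16.7 million colors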
Texture is a visual pattern where there are a large number of visible elements densely and evenly
arranged. A texture element is a uniform intensity region of simple shape which is repeated. Texture
can be analyzed at the pixel window level or texture element level. The former approach is called
statistical analysis and the latter structural analysis. Generally, structural analysis is used when texture
elements can be clearly identified, whereas statistical analysis is used for fine (micro) textures [TT90].
Statistical measures characterize variation of intensity in a texture window. Example measures
include contrast (high contrast zebra skin versus low contrast elephant skin), coarseness (dense pebbles
vs coarse stones), directionality (directed fabric versus undirected lawn). Fourier spectra are also used to
characterize textures. By obtaining the Fourier transform of a texture window, a signature is generated.
Windows with same or similar signatures can be combined to form texture regions.
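The sketch below computes two crude window-level statistics in the spirit of the measures named above (contrast as intensity variance, and directionality as the ratio of horizontal to vertical intensity change); these are simplified proxies for illustration, not the exact definitions used in the literature.

# Simple statistical texture measures over a rectangular grayscale window.

def contrast(window):
    """Variance of intensities: high for zebra-like, low for elephant-skin-like windows."""
    pixels = [p for row in window for p in row]
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

def directionality(window):
    """Ratio of average horizontal to vertical intensity change (a crude proxy)."""
    dx = sum(abs(row[i + 1] - row[i]) for row in window for i in range(len(row) - 1))
    dy = sum(abs(window[j + 1][i] - window[j][i])
             for j in range(len(window) - 1) for i in range(len(window[0])))
    return dx / (dy + 1e-9)

stripes = [[0, 255] * 4 for _ in range(8)]   # a window of vertical stripes
print(contrast(stripes))        # high contrast
print(directionality(stripes))  # very large: the variation is strongly horizontal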
Structural texture analysis extracts texture elements in the image, determines their shapes and estimates
their placement rules. Placement rules describe how the texture elements are placed relative
to each other on the image and include measures such as the number of immediate neighbors (connec-
tivity), the number of elements in unit space (density), and whether they are laid out homogeneously
(regularity). By analyzing the deformations in the shapes of the texture elements and their placement
rules, more information can be obtained about the scene or the objects in it. For instance, a density increase
and size decrease along a certain direction might indicate an increase in distance in that direction.
Shape-based image retrieval is one of the hardest problems in general image retrieval. This is mainly
due to the difficulty of segmenting objects of interest in the images. Consequently, shape retrieval is
typically limited to well distinguished objects in the image [FBF
In order to detect and determine the border of an object, an image may need to be preprocessed.
The preprocessing algorithm or filter depends on the application. Different object types such as skin
lesions, brain tumors, persons, flowers, and airplanes may require different algorithms. If the object of
interest is known to be darker than the background, then a simple intensity thresholding scheme may
isolate the object. For more complex scenes, noise removal and transformations invariant to scale and
rotation may be needed. Once the object is detected and located, its boundary can be found by edge
detection and boundary-following algorithms [SHB93]. The detection and shape characterization of the
objects becomes more difficult for complex scenes where there are many objects with occlusions and
shading.
Once the object border is determined the object shape can be characterized by measures such as
area, eccentricity (e.g. the ratio of major and minor axes), circularity (closeness to a circle of equal
area), shape signature (a sequence of boundary-to-center distance numbers), shape moments 1 , curvature
(a measure of how fast the border contour is turning), fractal dimension (degree of self similarity) and
others. All of these are represented by numerical values and can be used as keys in a multidimensional
index structure to facilitate retrieval.
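For instance, circularity can be derived from the area and perimeter of a segmented object. The formula 4*pi*A/P^2 used in the sketch below (1 for a circle, smaller for elongated shapes) is one common choice, given here only as an illustration.

import math

# Circularity of a shape from its area A and perimeter P: 4*pi*A / P**2.

def circularity(area, perimeter):
    return 4 * math.pi * area / perimeter ** 2

r = 10.0
print(circularity(math.pi * r * r, 2 * math.pi * r))  # circle: 1.0
print(circularity(20 * 5, 2 * (20 + 5)))              # 20x5 rectangle: about 0.5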
2.1.2 Indexing and Retrieval
For indexing visual features a common approach is to obtain numeric values for n features and then
represent the image or object as a point in the n-dimensional space. Multidimensional access methods
such as K-D-B-trees, quad-trees [PF95, Sam89], R-trees [Gut84] or their variants (R*-trees, hB-trees,
X-trees, TV-trees, SS-trees, SR-trees, etc.), are then used to index and retrieve relevant images. Three
problems need to be solved for this scheme to work properly: First, most multidimensional methods
work on the assumption that different dimensions are independent, and hence the Euclidean distance
is applicable. Second, unless specifically encoded, feature layout information is lost. In other words,
the locations of these features can no longer be recovered from the index. The third problem is the
number of dimensions. The index structures become very inefficient as the number of dimensions grow.
In order to solve these problems, several approaches have been developed. We first look at the color
indexing problem. Texture and shape retrieval share some of these problems and similar solutions are
applicable.
A. Color Indexing and Retrieval
A color histogram records the number of pixels in an image for each color. Two color histograms can
be compared by summing the absolute differences or the squared differences of the number of pixels in
each color. Such a scheme is simple and tolerant to small changes in a scene. However, it suffers from
all three of the problems mentioned above.
1 For a description of various shape moments see for instance [SHB93].
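The basic histogram comparison can be written in a few lines; the four-bin histograms below are invented for illustration.

# Comparing two color histograms by the sum of absolute bin differences (L1 distance).

def histogram_l1(h1, h2):
    return sum(abs(a - b) for a, b in zip(h1, h2))

h_query = [120, 30, 0, 50]   # pixel counts per quantized color bin
h_image = [100, 40, 10, 50]
print(histogram_l1(h_query, h_image))   # 40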
Cross Correlation
The similarity of color i and color j may not be 0 even though i ≠ j. Many colors that are very
different in terms of their RGB values may be perceived as similar by humans. Niblack et al.
use a matrix in which each entry a_ij gives the similarity between color i and color j.
But because of the complexity of the computation, the images are preprocessed with a filter which
underestimates the actual histogram distance. The filter involves a transform with orthonormal
basis functions. After the transform, the dimensions are independent. Hence well established
multi-dimensional spatial access methods can now be applied. Since this transform serves as a
lower bound for the actual distance, there are no false dismissals. The drawback is that there are
false positives, which can be eliminated by going through a verification phase.
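A sketch of such a cross-correlated comparison is the quadratic-form distance d = (h1 - h2)^T A (h1 - h2) shown below; the three-color similarity matrix is made up for illustration and is not the matrix used in the cited work.

# Quadratic-form histogram distance with a color-similarity matrix A.

def quadratic_distance(h1, h2, A):
    d = [a - b for a, b in zip(h1, h2)]
    return sum(d[i] * A[i][j] * d[j] for i in range(len(d)) for j in range(len(d)))

A = [[1.0, 0.8, 0.0],    # colors 0 and 1 are perceptually close
     [0.8, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
h1 = [10, 0, 5]
h2 = [0, 10, 5]          # the same mass moved to a perceptually similar color
print(quadratic_distance(h1, h2, A))    # 40.0: shifting mass to a similar color costs little
print(quadratic_distance(h1, h2, I3))   # 200.0: treating colors as independent costs much more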
Layout
In order to use the location information, significant color regions are extracted and their locations
are recorded [HCP95, SC96b]. The regions can be represented by minimum bounding rectangles
and stored in a structure like R-trees. A search on a specific color at a specific location can be
performed in two steps: In the first step, two searches are performed based on color and on location.
The intersection of the results of these searches yields images which satisfy both conditions.
In a slightly simplified version of this scheme [PZM96], pixels belonging to significant color regions
and those that do not form two histograms, which are compared separately.
Dimensionality
Color histograms may include hundreds of colors. In order to efficiently compute histogram dis-
tances, the dimensionality should be reduced. Transform methods such as K-L and Discrete
Fourier Transform (DFT), Discrete Cosine Transform (DCT) or various wavelet transforms can
be used to reduce the number of significant dimensions. Another idea is to find significant colors
by color region extraction and compare images based on presence of significant colors only [SC96b].
Spatial partitioning strategies such as the "Pyramid-Technique" [BBK98] map n-dimensional
spaces into a 1-dimensional space and use a B+ tree to manage the transformed data. The
resulting access structure shows significantly better performance for large number of dimensions
compared to methods such as R-tree variants.
B. Texture and Shape Retrieval
Texture and shape differ from color in that they are defined not for individual pixels but for windows
or regions. Texture segmentation involves determining the regions of image with homogeneous texture.
Once the regions are determined, their bounding boxes may be used in an access structure like an R-
tree. Dimensionality and cross correlation problems also apply to texture and can be solved by similar
methods as in color. Fractal codes capture the self similarity of texture regions [HCA95] and are used
for texture-based indexing and retrieval.
Shapes can be characterized by methods mentioned in section 2.1.1 and represented by n-dimensional
numeric vectors which become points in n-dimensional space. Another approach is to approximate the
shapes by well defined, simpler shapes. For instance, triangulation or a rectangular block cover can be
used to represent an irregular shape. In this latter scheme, the shape is approximated by a collection of
triangles or rectangles, whose dimensions and locations are recorded. The advantage of this approach
is that storage requirements are lower, comparison is simpler and the original shape can be recovered
with small error. The two methods can be combined to have the advantages of both.
Sketch-based retrieval can be considered a special case of shape retrieval. Here, the user may describe
a single object or the whole image by the layout of objects in it. Sketch retrieval may be facilitated by
an edge map which consists of detected and thinned edges in an image [KKOH92]. Thinning provides a
binary (black and white) edge image which greatly reduces the amount of information to be stored and
compared.
2.1.3 Object Detection and Recognition
Object detection involves verifying the presence of an object in an image and possibly locating it precisely
for recognition. In both feature based and template based recognition, standardization of global image
features and registration (alignment) of reference points are important. The images may need to be
transformed to another space for handling changes in illumination, size and orientation. Both global and
local features play important roles in object recognition. In local feature-based object recognition, one
or more local features are extracted and the objects of interest are modeled in terms of these features.
For instance, a human face can be modeled by the sizes of the eyes, the distance between the eye and
the nose, etc. Recognition then can be transformed into a graph matching problem. In holistic or global
feature-based object recognition, characteristics of the object as a whole or a template of the desired
object is compared against target images [JZL96]. For instance, to recognize a person, an unknown
facial image (or its transform) is matched (as a whole) against (transforms of) images of known people.
Psychophysical and neurophysiological studies suggest that both local and global features play a role in
human face recognition [CWS95].
Transform methods such as Fourier, Wavelet or K-L also provide characteristics that can be used to
detect objects of interest [CSW95, PPS94]. Many objects have unique spectra under these transforms,
which serve as signatures for the objects. These signatures can be used to detect
the presence of the object.
2.1.4 Spatial Relationships
Efficient methods for indexing and retrieving images based on the spatial relationships (such as left of,
inside, and above) among objects in the image were developed [Ege93, CSY87, YM98]. Deduction of
spatial relationships, such as "A left of B and B left of C implies A left of C", is employed to retrieve
images which have spatial relationships not explicitly stated in the user query. Chu et al. [HCT96] detect
objects such as bones in X-rays, brain tumors and breast outlines in medical images and employ a
knowledge-based image data model. The model represents selected features and spatial relationships
among them in the form of a type abstraction hierarchy. The SEMCOG system [LCHH97] developed at
NEC implements spatial relationship inference mechanism. Topological relationships within the context
of minimum bounding rectangles are investigated in [PSTE95].
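A minimal sketch of this kind of deduction, using only the transitivity of a single relation, is shown below; the object names and the fixpoint loop are illustrative assumptions.

# Deducing spatial relationships not explicitly stated, via transitivity of "left of".

def transitive_closure(pairs):
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

left_of = {("tree", "house"), ("house", "car")}
print(("tree", "car") in transitive_closure(left_of))   # True, although never stated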
2.2 Non-visual Features
Commercial imaging systems typically use relational database technology with enhancements for image
data types. In these systems, image-specific fields such as the source, the date and the time the image
was taken, the media type, the resolution, the input device, the compression method, etc. as well as
annotations are the primary non-visual features for indexing.
Captions and annotations are free text descriptions of a scene. They are natural for the users and
standard text retrieval methods can be applied. However, they also present major challenges for the
retrieval system. Two users may describe the same scene in very different manners. They may use
different words, emphasize different aspects of the image and describe at different detail. One way to
match different descriptions of the same scene is to expand the query and the database image descriptions
with an electronic thesaurus [SQ96, ATY + 97]. However, the inherent ambiguity in natural language
and typically short descriptions may make word sense disambiguation a difficult task [Voo93].
In order to deal with the challenges of description based retrieval, various methods have been developed
such as restricting the sentence types, using inference rules, relevance feedback [HCK90, JG95,
structured descriptions.
Structured descriptions may be natural language sentences with restrictions, symbolic or iconic descriptions
involving objects, attributes and relationships [LCHH97, ATY
2.3 Querying
Possible queries involving one or more features are listed below. Combination queries may involve any
number of these query primitives as long as the retrieval system supports such queries.
Simple Visual Feature Query: The user specifies certain values possibly with percentages for a
feature. Example: "Retrieve images which contain 25% red, 50% blue, 25% yellow".
Feature Combination Query: The user combines different features and specifies their values and
weights. Example: "Retrieve images with green color and tree texture where color has weight 75% and
texture has weight 25%".
Localized feature query: The user specifies feature values and locations by placing regions on a
canvas. Example: "Retrieve images with sky blue at the upper half and green at the bottom half".
Query By Example: The system generates a random set of images. The user selects one of these
and retrieves similar images. Similarity can be determined based on user selected features. For instance
"retrieve images which contain textures similar to this example". A slightly different version of this type
of query is where the user cuts a region from an example image and pastes it onto the query canvas.
Object versus Image: The user can describe the features of an object in an image as opposed to
describing a complete image. Example: "Retrieve images containing a red car near the center."
User Defined Attribute Query: The user specifies the values of the user defined attributes. Example:
"Retrieve images where location is Washington DC and the date is July 4 and the resolution is at least
300dpi (dots per inch)".
Object Relationship Query: The user specifies objects, their attributes and the relationships among
them. Example: "Retrieve images where an old man is holding a child in his arms".
Concept Queries: Some systems allow the user to define simple concepts based on the features
extracted by the system [OS95, AHKR96]. For instance, the user may define the concept of a beach as
"small yellow circle at top, large blue region in the middle and sand color in the lower half".
3 Techniques for Video Retrieval
Video retrieval involves content analysis and feature extraction, content modeling, indexing and query-
ing. Video naturally has a hierarchy of units with individual frames at the base level and higher level
segments such as shots, scenes and episodes. An important task in analyzing video content is to detect
segment boundaries.
3.1 Video Segmentation
A shot is a sequentially recorded set of frames representing a continuous action in time and space by a
single camera. A sequence of shots focusing on the same point or location of interest is a scene. A series
of related scenes form an episode [WDG95]. An abrupt shot change is called a cut. Camera operations
such as zooming, tilting, panning make it difficult to detect shot changes. Techniques for shot change
detection include the following:
- Direct Pixel or Histogram Comparison: Pixels of consecutive frames can be compared pair-
wise. If a significant percentage of pixels differ, a shot change is detected. This is a costly
operation and is sensitive to minor camera operations like zooming. A more robust method is
histogram comparison. A shot change is detected if the histograms of two consecutive frames
differ significantly [WKSS96]. However, this method cannot handle gradual changes. (A small
sketch of histogram-based cut detection follows this list.)
- Compressed domain features: Compressed video provides additional clues such as DCT transform
coefficients and motion vectors which can be used for cut detection [KDL96, CSM96]. In
MPEG video compression standard [Gal91] the image is compressed in units of 16x16 pixel macro-
blocks. The motion vectors of subsequent frames are coherent unless there is a shot change. By
comparing the DCT coefficients and the motion vectors of these blocks with preceding and succeeding
blocks, shot changes can be detected.
- Text Recognition and Closed Captions for Video Indexing: A newly emerging field is
using textual information whenever available for video indexing. Optical character recognition
(OCR) of the text appearing on the video or closed captions available for certain broadcast TV
programs are used for segmentation and retrieval [Lie96, Moh96]. In the case of OCR on video,
scene models involving likely keywords can be used for model-based segmentation. For instance,
an anchor shot of a particular news agency would involve specific texts in the background and a
human subject in the foreground. In the case of closed captions, text retrieval techniques can be
combined with visual analysis to obtain a better semantic segmentation of the video. Video (TV)
capture cards that can monitor and store closed captions, and alert the user for selected keywords
are readily available (http://www.diamondmm.com/products/current/dtv-2000.cfm).
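The following sketch illustrates the histogram-based cut detection mentioned in the first item of the list above; the frame representation, the bin count, and the threshold are assumptions that would need tuning in a real system.

# Cut detection by comparing gray-level histograms of consecutive frames.

def histogram(frame, bins=8):
    h = [0] * bins
    for p in frame:                      # frame: flat list of 8-bit gray values
        h[p * bins // 256] += 1
    return h

def detect_cuts(frames, threshold=0.5):
    cuts = []
    for i in range(1, len(frames)):
        h1, h2 = histogram(frames[i - 1]), histogram(frames[i])
        diff = sum(abs(a - b) for a, b in zip(h1, h2)) / (2.0 * len(frames[i]))
        if diff > threshold:             # large histogram change suggests a shot boundary
            cuts.append(i)
    return cuts

dark, bright = [10] * 100, [240] * 100
print(detect_cuts([dark, dark, bright, bright]))   # [2]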
In model based segmentation [ZTSG95] models of different scenes are built using a-priori knowledge
about the scenes. First, the video is divided into different shots by using one or more of the above
techniques. Then the shots are classified based on the models. An example classification might involve
anchor person shots, weather forecast shots and commercials.
3.2 Object Detection and Tracking
In video, there are two sources of information that can be used to detect and track objects: Visual
features (such as color and texture) and motion information. A typical strategy is to initially segment
regions based on color and texture information. After the initial segmentation, regions with similar
motion vectors can be merged subject to certain constraints such as adjacency [ZKS93, JMA79].
Human skin color and DCT transform coefficients in MPEG as well as broad shape information can
also be used for detecting human faces in compressed video [CSW95].
Systems for detecting particular movements such as entering, exiting a scene and placing/removing
objects using motion vectors are being developed [Cou96]. It is possible to recognize certain facial
expressions and gestures using models of face or hand movements.
3.3 Content Modeling, Indexing and Retrieval
The temporal nature and comparatively huge size of video data requires special browsing and querying
functions. A common approach for quick browsing is to detect shot changes and associate a small icon
of a key frame for each shot [NT91]. Retrieval using icons, text and image (frame) features is possible.
The hierarchical and compositional model of video [WDG95] consists of a hierarchy of segments such
as shots, scenes and episodes. This model facilitates querying and composition at different levels and
thus enables a very rich set of temporal and spatial operations. Example temporal operations include
"follows", "contains" and "transition". Example spatial operations are "parallel to", and "below".
The Hierarchical Temporal Language (HTL) [SYV97, YM98] also uses a hierarchical model of video
consisting of units such as frames, shots, and sub-plots. The language uses the classical temporal
operators to specify properties of video sequences as well as certain modal operators to specify properties
of sequences at different levels in the hierarchy. The semantics of the language is designed for similarity-based
retrieval.
Object based querying involves detection and tracking of moving objects and queries based on an
example of the object provided/selected by the user [CA96].
An important criterion for the performance of a video retrieval system, or a Multimedia On Demand
(MOD) system in general is the quality of service. Example quality of service parameters are delay jitter
and the skew (synchronization difference) between the mono-media streams that make up the multimedia
data. Since most users of such systems access the system via some sort of a network, continuity and
the synchronization of the media streams have to be ensured under stringent communication subsystem
limitations. Various buffering and disk scheduling techniques have been proposed and implemented for
ensuring quality of service in such systems [Ran93, Mar97, Aur98]. A sample multimedia-on-demand
system is illustrated in figure 1.
Figure 1: A sample multimedia-on-demand system with buffering and disk array replication.
4 Systems
Several research and commercial systems provide automatic indexing and querying based on visual
features such as color and texture. These include Photobook, VisualSEEk, Cypress, QBIC and Virage.
Certain unique features of these systems will be discussed in the following subsections.
4.1 Research Systems
The Photobook system [PPS94] enables users to plug in their own content analysis procedures and to select
among different content models based on user feedback via a learning agent. Sample applications
include a face recognition system, image retrieval by texture similarity, brain map, and semi-automatic
annotation based on user-given labels and visual similarity. Cypress [OS95] lets users define concepts
using visual features like color. For instance, a user may coin the term "beach" for a certain combination
of yellow color (sun), beige (sand), and blue (sea). VisualSEEk [SC96b] allows localized feature queries
and histogram refinement for feedback using a web-based tool.
Systems such as CVEPS [CSM96], and JACOB [CA96] support automatic video segment decompo-
sition, and video indexing based on key frames or objects. The users can employ image analysis and
retrieval components to index and retrieve key frames or video objects based on their visual features or
spatial layout. The CVEPS system also provides these functions as well as video editing in the compressed
domain. JACOB uses artificial neural networks for automatic shot detection.
Among systems using captions or annotations for image retrieval, the caption based image retrieval
system of Dublin City University uses WordNet, an electronic dictionary/thesaurus, for query expansion
[SQ96]. Rohini and Srihari [RS95] describe a system that uses a semantic model for interpreting captions
in order to guide person recognition. The SCORE system [ATY uses an extended Entity-Relationship
model to represent image contents and WordNet to expand queries as well as database
descriptions. The SEMCOG system [LCHH97] performs semi-automatic object recognition.
4.2 Commercial Systems
QBIC supports shape queries for semi-manually segmented
objects and local features as well as global features. The Virage system (http://www.virage.com)
[Gup95] supports feature layout queries and users can give different emphasis to different features. Informix
data blades (http://www.informix.com, formerly Illustra) enable user defined content processing
and analysis routines to be stored in a multimedia database. Data blades for free text, images, sound and
video are becoming available by Informix and third party suppliers. Excalibur (http://www.excalib.com)
Visual RetrievalWare system enables queries on gray shape, color shape, texture, and color using adaptive
pattern recognition techniques. Excalibur also provides data blades for Informix databases. An
example data blade is a scene change detector for video. The data blade detects shots or scenes in
video and produces a summary of the video by example frames from each shot. Oracle offers a video
server which runs on top of the Oracle Universal Server product and provides concurrent delivery of
full-motion video as well as remote control operations. IBM's DB2 system supports video retrieval via
"video extenders"(http://www.software.ibm.com/data/db2/extenders). Video extenders allow for the
import of video clips and querying these clips based on attributes such as the format, name/number or
description of the video as well as last modification time.
4.3 Systems for the World Wide Web
WebSEEk [SC96a] builds several indexes for images and videos based on visual features such as color as
well as non-visual features such as key terms, assigned subjects, and image/video types. In order to classify
images and videos into subject categories, a key term dictionary is built from selected terms appearing in
a URL (Uniform Resource Locator, the address of a page on the world wide web). The terms are selected
based on their frequency of occurrence and whether they are meaningful subject terms. The latter judgment
is made manually. For example, the URL "http://www.chicago.com/people/michael/final98.gif"
would produce the following terms: "people", "michael", "final". After the key term dictionary is built,
directory portion of the image and video URLs are parsed and analyzed. This analysis produces an
initial set of categories of the images and the videos which is then manually verified. Videos are summarized
by picking one frame for every second of video and then packaging them as an animated GIF
image.
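The key-term extraction step can be sketched as follows; the toy key-term dictionary and the exact tokenization rules are invented and only mimic the published example.

import re

# WebSEEk-style key-term extraction from an image URL.
KEY_TERMS = {"people", "michael", "final", "sports", "news"}   # toy key-term dictionary

def url_terms(url):
    path = url.split("//", 1)[-1]                        # drop the protocol
    tokens = re.split(r"[/._\-]+", path.lower())         # split directories and file name
    words = [re.sub(r"\d+$", "", t) for t in tokens]     # final98 -> final
    return [w for w in words if w in KEY_TERMS]

print(url_terms("http://www.chicago.com/people/michael/final98.gif"))
# ['people', 'michael', 'final']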
The WebSeer project [SFA96] aims at classifying images based on their visual characteristics. Novel
features of WebSeer include (1) image classification such as photographs, graphics, etc., (2) integration
of CMU face detector [RBK95], and (3) multiple keyword search on associated text such as a HTTP
reference, alternate text field of HTML reference or page title.
Yahoo Image Surfer (http://isurf.yahoo.com) employs Excalibur Visual RetrievalWare for searching
images and video on the WWW.
5 Looking into Future
Information overload has become almost synonymous with the information age. Advanced filtering and
retrieval methods for all types of multimedia data are greatly needed. Current image and video retrieval
systems are the results of combining research from various fields. Better collaboration of computer
vision, database and user interface techniques will provide more effective and efficient image retrieval.
Improved compression and object tracking techniques will increase the accessibility of digital video.
One issue that needs to be addressed is the design of generic, customizable user interfaces that can
be used for a variety of domains. The ability to customize the schema for image retrieval or effective
visualization of video are among the objectives of such interfaces. Incorporation of voice may bring
another little-explored dimension to image and video retrieval. Systems that combine visual features,
sound, and text as well as structured descriptions will enable better user interaction.
The performance and effectiveness of multimedia database systems continue to be open issues. In
fields such as Geographical Information Systems, there is a dire need for high performance multimedia
databases which can support concurrent access for thousands of users while providing powerful query
tools and languages. Query and transaction models of multimedia database systems differ from those
of traditional database systems. Research in these areas is likely to gain momentum as more
commercial activities rely on highly available multimedia data.
While multimedia-on-demand has been on the agenda of researchers for a while, availability of such
systems remains limited. As faster connection methods such as cable modems and digital subscriber
lines increasingly become available for the household, a web-based killer application may be the key to
increased demand for such systems. The cost issue, however, is likely to remain an obstacle for the near
future.
The decreasing cost of multimedia storage and retrieval systems is encouraging medical institutions to
transform their existing data into digital form. These institutions also prefer digital capture technologies
for future applications. Multimedia systems are certain to play a leading role in tomorrow's clinic.
Among the open problems in this domain are the use of multi-modal images for improved diagnosis,
automatic feature registration for standardized object recognition, and the integration of heterogeneous
databases.
--R
Video Retrieval with IRIS.
Using Semantic Contents and WordNet(TM) in Image Retrieval.
Control of perceived quality of service in multimedia retrieval services: Prediction-based mechanisms vs
The Pyramid-Technique: Towards Breaking the Curse of Dimensionality
JACOB: Just a content-based query system for video databases
Efficient Techniques for Feature-Based Image/Video Access and Manipulation
Automatic Feature Extraction and Indexing for Content-Based Visual Query
Iconic indexing by 2-d string
Human and Machine Recognition of Faces: A Survey.
What's Special About Spatial?
Efficient and Effective Querying by Image Content.
MPEG: A Video Compression Standard for Multimedia Applications.
Visual Information Retrieval Technology
A dynamic index structure for spatial searching.
Machine Learning and Vectorial Matching for an Image Retrieval Model: EXPRIM and the system RIVAGE.
An Integrated Color-Spatial Approach to Content-Based Image Retrieval
A Knowledge-Based Approach for Retrieving Images by Content
Adaptive Query reformulation in Attribute based Image Retrieval.
Segmentation Through the Detection of Changes Due to Motion.
Object Matching Using Deformable Templates.
Indexing and Retrieval of Video in the Compressed Domain.
A Sketch Retrieval Method for Full Color Image Database
SEMCOG: an object-based image retrieval system and its visual query interface
Automatic Text Recognition for Video Indexing.
Impact of video scheduling on bandwidth allocation for multiplexed mpeg streams.
Automatic Video Indexing and Full Video Search for Object Appearances.
Retrieval from a Relational database of Images.
Similarity Searching in Large Image Databases.
Tools for content-based manipulation of image databases
Topological Relations in the World of Minimum Bounding Rectangles: A Study with R-trees
Comparing Images Using Color Coherence Vectors.
Efficient storage techniques for digital continuous media.
Human Face Detection in Visual Scenes.
Rohini and Srihari.
The Design and Analysis of Spatial Data Structures.
Searching for Images and Videos on the World-Wide Web
VisualSEEk: a fully automated content-based image query system
An Image Search Engine for the World Wide Web.
Image Processing
Experiments on Using Semantic Distances Between Words in Image Caption Retrieval.
Similarity Based Retrieval of Videos.
Computer Analysis of Visual Textures.
Using WordNet to Disambiguate Word Senses for Text Retrieval.
Composition and Search with a Video Algebra.
Intelligent Access to Digital Video: Informedia Project.
Principles of Database Query Processing for Advanced Applications.
Automatic Partitioning of Full Motion Video.
Automatic Parsing and Indexing of News Video.
--TR
--CTR
Gisele Busichia Baioco , Agma J. M. Traina , Caetano Traina, Jr., An effective cost model for similarity queries in metric spaces, Proceedings of the 2007 ACM symposium on Applied computing, March 11-15, 2007, Seoul, Korea
Tatiana Almeida Souza Coelho , Pvel Pereira Calado , Lamarque Vieira Souza , Berthier Ribeiro-Neto , Richard Muntz, Image Retrieval Using Multiple Evidence Ranking, IEEE Transactions on Knowledge and Data Engineering, v.16 n.4, p.408-417, April 2004
Timothy C. Hoad , Justin Zobel, Video similarity detection for digital rights management, Proceedings of the twenty-sixth Australasian conference on Computer science: research and practice in information technology, p.237-245, February 01, 2003, Adelaide, Australia
Ruofei Zhang , Zhongfei Zhang, A robust color object analysis approach to efficient image retrieval, EURASIP Journal on Applied Signal Processing, v.2004 n.1, p.871-885, 1 January 2004
M. L. Kherfi , D. Ziou , A. Bernardi, Image Retrieval from the World Wide Web: Issues, Techniques, and Systems, ACM Computing Surveys (CSUR), v.36 n.1, p.35-67, March 2004
Ankush Mittal , Loong-Fah Cheong, Addressing the Problems of Bayesian Network Classification of Video Using High-Dimensional Features, IEEE Transactions on Knowledge and Data Engineering, v.16 n.2, p.230-244, February 2004
Yuksel Alp Aslandogan , Clement T. Yu, Evaluating strategies and systems for content based indexing of person images on the Web, Proceedings of the eighth ACM international conference on Multimedia, p.313-321, October 2000, Marina del Rey, California, United States
Iraklis Varlamis , Michalis Vazirgiannis, Bridging XML-schema and relational databases: a system for generating and manipulating relational databases using valid XML documents, Proceedings of the 2001 ACM Symposium on Document engineering, November 09-10, 2001, Atlanta, Georgia, USA | image and video retrieval;content-based retrieval;information retrieval;multimedia databases |
627982 | Generalization and Generalizability Measures. | AbstractIn this paper, we define the generalization problem, summarize various approaches in generalization, identify the credit assignment problem, and present the problem and some solutions in measuring generalizability. We discuss anomalies in the ordering of hypotheses in a subdomain when performance is normalized and averaged, and show conditions under which anomalies can be eliminated. To generalize performance across subdomains, we present a measure called probability of win that measures the probability whether one hypothesis is better than another. Finally, we discuss some limitations in using probabilities of win and illustrate their application in finding new parameter values for TimberWolf, a package for VLSI cell placement and routing. | Introduction
Generalization in psychology is the tendency to respond in the same way to
different but similar stimuli [6]. Such transfer of tendency may be based on
temporal stimuli, spatial cues, or other physical characteristics. Learning, on
the other hand, may be considered as a balance between generalization and discrimination
(the ability to respond to differences among stimuli). An imbalance
between them may lead to negative results.
Machine learning is an area of artificial intelligence that extends knowledge,
concepts, and understanding through one or more observations of instances of
the concept [1]. The number of instances involved and the amount of information
they carry will determine the learning method to be used.
Research supported by National Science Foundation Grant MIP 96-32316 and National
Aeronautics and Space Administration Grant NAG 1-613.
To appear, IEEE Transactions on Knowledge and Data Engineering, vol. 10, no. 1, Feb.
Figure 1: The relationship between concept learning, generalization, and generalizability
Learning methods can be classified as data-intensive and knowledge-intensive
(see
Figure
1). In data-intensive methods, symbolic concepts are learned using
data-intensive similarity-based methods. The learner is shown a large number of
related examples and is required to identify their similarities and generalize the
concept embedded. Using this approach, Mitchell [25] defines generalization as a
process that takes into account a large number of specific observations (inductive
bias), and that extracts and retains the important features that characterize
classes of these observations. He then casts generalization as a search problem,
and alternative generalization methods as different search strategies.
An example of a data-intensive learning method is the learning of heuristics
represented as a collection of production rules [42]. In this approach, learning
modifies each of the rules based on decisions made by these rules and on positive
and negative examples found. The process of apportioning a feedback signal to
individual decisions carried out in the past, as well as to decision elements
applied in each decision, in order to refine the heuristic method is called credit
assignment. The former credit assignment is called temporal, and the latter,
structural. Credit assignment is usually difficult when learning incrementally
single concepts from examples, especially when learning multiple disjunctive
concepts and when the learning data is noisy. In this case, a teacher may be
needed to tell the learner the proper amount of credit to assign to a decision.
A second class of data-intensive learning methods are decision-theoretic methods
that use statistical decision theory to discriminate probabilistic patterns exhibited
in learning examples [28]. The major component in a decision-theoretic
approach is the loss function that measures the loss when the learner categorizes
a learning example incorrectly. It represents a statistical approach to credit as-
signment. By minimizing the total loss using statistical methods, it is sometimes
possible to show asymptotic convergence of the concept to be learned. Examples
of decision-theoretic methods include evolutionary programming [20], genetic algorithms
[13], classifier systems [5], and artificial neural networks (ANNs) [22].
In contrast to using extensive training examples in data-intensive methods,
knowledge-intensive methods rely on domain-specific knowledge to learn and to
generalize. In explanation-based learning, the learner analyzes a single training
example using domain knowledge and the concept under study to produce a
generalization of the example and a deductive justification of the generalization
[8, 26]. Knowledge-intensive methods work well when the concept to be
generalized can be deduced from the domain knowledge.
To evaluate the quality of a learning and generalization method and to measure
the degree to which learning and generalization has been achieved, generalizability
measures have been developed. In the simplest case, they measure the
number of positive and negative examples in learning. In more general cases,
the degree to which an example satisfies a learned concept must be considered,
and statistical techniques are employed to determine whether a learned concept
can be generalized.
For example, in learning in feedforward ANNs, the effectiveness of an ANN
that computes discrete {0,1}-valued mappings can be evaluated by the network's
ability to solve dichotomization problems using measures such as discrimination
capacity, VC-dimension (named after Vapnik and Chervonenkis [36]), and
efficiency of decision functions [23]. For an ANN that performs function approximation
computing either discrete multiple-valued or continuous mappings,
we can measure its quality using concepts such as combinatorial dimension,
approximation error, and estimation error. Finally, the concept of PAC (prob-
ably approximately correct) learning [18] is useful for characterizing the time
complexity of algorithms for learning both discrete and continuous mappings.
A related problem in generalizability is the normalization of learned results
relative to a baseline. When the quality of a learned concept is measured numerically
and depends on some attributes of the example, it may be necessary
to normalize the measure with respect to that of a baseline before any statistical
evaluations can be made. For instance, the quality measure of a learned concept
may depend on the size of the learning example and needs to be normalized before
results from multiple learning examples can be aggregated statistically. In
this case, the generalizability of the learned concept may depend on the baseline
and the statistical method used to aggregate performance measures. Anomalies
in the ordering of hypotheses may happen when different normalization and
aggregation methods are used. This is discussed in detail in Section 3.3.
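A small numerical illustration of such an anomaly is given below: two hypotheses are compared by averaging costs normalized against two different baselines, and the apparent winner changes with the baseline. The numbers are invented.

# Raw costs of two hypotheses on three test cases, normalized against two baselines.

h1 = [10.0, 20.0, 30.0]
h2 = [12.0, 18.0, 29.0]
baseline_a = [10.0, 20.0, 30.0]
baseline_b = [30.0, 20.0, 10.0]

def mean_normalized(costs, baseline):
    return sum(c / b for c, b in zip(costs, baseline)) / len(costs)

print(mean_normalized(h1, baseline_a), mean_normalized(h2, baseline_a))  # 1.000 vs 1.022: h1 looks better
print(mean_normalized(h1, baseline_b), mean_normalized(h2, baseline_b))  # 1.444 vs 1.400: h2 looks better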
In the next section we summarize previous approaches in generalization and
credit assignment. We then present in Section 3 the general concept of gen-
eralizability, generalizability measures, and anomalies in generalization when
performance measures are normalized and aggregated. We illustrate in Section
4 the application of the method in Section 3 to find new parameter values
in TimberWolf.
Figure 2: The process of inductive learning and generalization (experiment planning, instance selection, and result generalization map the instance space to the rule space).
2 Generalization using Induction
In this section we summarize various strategies for generalization. Early work
on inductive learning and generalization was done by Simon and Lea [31] who
used training instances selected from some space of possible instances to guide
the search for general rules. The process of inductive learning entails a mapping
from the instance space to the rule space and involves experiment planning,
instance selection, and result interpretation (or generalization). (See Figure 2.)
2.1 The Generalization Problem
Generalization involves the extraction of information useful to guide the search
of a rule space [1]. To simplify the search process, a good representation of the
rule space must be chosen so that generalization can be carried out by inexpensive
syntactic operations, such as turning constants to variables, dropping
conditions, adding options, curve fitting and zeroing a coefficient.
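For illustration, the sketch below applies two of these syntactic operators to a conjunctive rule; the attribute-value representation and the example rule are invented here, not taken from the systems discussed in this section.

# Minimal sketch of two syntactic generalization operators on a
# conjunctive rule represented as (attribute, value) premises; '?'
# plays the role of a variable.

def turn_constant_to_variable(rule, attribute):
    # Generalize by replacing the constant bound to `attribute` with '?'.
    return [(a, '?' if a == attribute else v) for (a, v) in rule]

def drop_condition(rule, attribute):
    # Generalize by removing the premise on `attribute` altogether.
    return [(a, v) for (a, v) in rule if a != attribute]

rule = [('shape', 'round'), ('color', 'red'), ('size', 'small')]
print(turn_constant_to_variable(rule, 'color'))   # color becomes a variable
print(drop_condition(rule, 'size'))               # the size premise is dropped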
The specific operators used may depend on the representation of the rule
space. For instance, a production rule Z → Z' can be used to represent either
the backward form (Z' is the value of a state vector plus an associated predicate) or
the forward form (Z' is a computational rule). The evaluation of the execution
of a rule constitutes credit assignment, whereas the creation of new rules involves
generalization. The latter entails the identification of a subvector of variables
relevant to the creation, the proper decision for the situation, and the reason for
making the decision. Waterman [42] proposed a set of generalization operators
that modify the defined symbolic values in a rule, eliminate one or more variables
in a rule, and change action rules and error-causing rules.
Mitchell [25] defines generalization in the context of a language that describes
instances and generalizations. Based on a set of positive and negative examples,
predicates are matched from generalizations to instances. Hence, generalizations
are defined within the provided language that are consistent with the presented
training examples.
In general, generalization also requires a function to evaluate the positive and
negative examples obtained in order to provide feedback (credit assignment). In
the simplest case, the function counts the number of positive and negative ex-
amples. In decision-theoretic approaches, a loss function is used to measure
the loss when the learner categorizes a learning example incorrectly. This is
the approach taken in classifier-system and genetics-based learning that uses a
fitness function. In reinforcement learning, the evaluation function may have to
be learned independently (by some form of supervised learning) in order to provide proper
temporal credit assignment [33, 27, 34].
Figure 3: A classification of generalization strategies (data-intensive: depth-first, breadth-first, version space, and decision-theoretic; knowledge-intensive: explanation-based) and of credit assignment (structural and temporal).
The reinforcement function
is particularly difficult to design when examples drawn from the problem space
are not statistically related. This happens when the evaluation data depends
on the size of the examples, or when the examples drawn belong to different
problem subdomains. Some possible solutions to these issues are discussed in
Section 3.3.
2.2 Generalization Strategies
As defined by Mitchell [25, 26], generalization strategies can broadly be classified
as data-driven and knowledge-driven. (See Figure 3.) Both paradigms
use generate-and-test that generates alternative concepts, tests them on test
cases, and constructs feedbacks (credit assignment) to aid the refinement of the
concepts generated. The difference lies in the amount of tests performed: data-driven
methods do not rely on domain knowledge and often require extensive
tests on the concepts under consideration before reliable feedbacks can be gen-
erated. In contrast, knowledge-driven methods rely on domain knowledge and
one or a few tests to deduce new concepts.
Data-driven generalization strategies can be classified into depth-first search,
breadth-first search, version-space, and decision-theoretic techniques [25].
A depth-first strategy starts from a single generalization as the current best
hypothesis, tests it against each training example, and modifies the hypothesis
in order to make it consistent with the training example. Its advantage is that
it keeps a global picture in mind when modifying the hypothesis. However, it
is usually expensive to backtrack when a negative training example is found.
In this case, the new hypothesis generated must be tested against all previous
training examples to make sure that they are consistent. Any inconsistencies
will incur further backtracking.
A breadth-first strategy, on the other hand, generalizes from more specific hypotheses
to more general ones. Initially, it starts from a set of the most specific
hypotheses. Positive training examples allow the search to progress down the
breadth-first tree, generating more general hypotheses, whereas negative training
examples will prune the corresponding hypothesis from the search tree. The
boundary of the search tree, therefore, represents the most general hypotheses
generated so far that are consistent with the (positive) training examples. As a
result, when a new (more general) hypothesis is generated, it only needs to be
tested against all positive training examples to make sure that they are consistent
with the current hypothesis. This is the main advantage of a breadth-first
search over a depth-first search.
A hybrid of depth-first and breadth-first strategies is a version-space strat-
egy. The version space represents the set of hypotheses that are consistent with
all the training examples. It defines two boundaries. The first boundary is
obtained by depth-first search and bounds the acceptable level of specialization
of hypotheses (those that are consistent with all the positive examples). The
second boundary is obtained by breadth-first search and bounds the acceptable
level of generality of hypotheses (those that are inconsistent with all the negative
examples).
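A brute-force sketch of this idea on a toy attribute space follows (the attributes, values, and examples are invented for illustration): enumerate every conjunctive hypothesis with a '?' wildcard and keep those consistent with all training examples; the most specific survivors correspond to the boundary found by depth-first search, the most general ones to the boundary found by breadth-first search.

# Brute-force version-space sketch: hypotheses are conjunctions over
# discrete attributes, '?' is a wildcard, and the version space is the
# set of hypotheses consistent with every labeled example.
from itertools import product

domains = {'shape': ['round', 'square'], 'color': ['red', 'blue']}
attrs = list(domains)

def all_hypotheses():
    choices = [domains[a] + ['?'] for a in attrs]
    return [dict(zip(attrs, c)) for c in product(*choices)]

def covers(h, x):
    return all(h[a] == '?' or h[a] == x[a] for a in attrs)

def version_space(examples):
    # keep hypotheses that cover every positive and no negative example
    return [h for h in all_hypotheses()
            if all(covers(h, x) == label for x, label in examples)]

examples = [({'shape': 'round', 'color': 'red'}, True),
            ({'shape': 'square', 'color': 'red'}, False)]
for h in version_space(examples):
    print(h)   # the specific and general boundaries of the version space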
The fourth class of data-driven generalization strategies are the decision-theoretic
techniques. These do not always use a single type of search method
but may use a hybrid of search methods, such as depth-first and breadth-first
searches, depending on the evaluation results. They rely on a loss function that
measures the expected loss when the learner categorizes a learning example
incorrectly. Although the loss function may be designed either based on formal
statistical methods or heuristically, the generalization strategy can generally
be shown to converge asymptotically to the desired concept. For instance, in
genetic algorithms, Holland's Schema Theorem [13] shows that the number of
structures in a knowledge base that share a given subset of components can be
expected to increase or decrease over time at a rate proportional to the observed
performance of the subset, eventually converging asymptotically to the optimal
configuration.
In contrast to data-driven techniques, an explanation-based generalization
strategy uses domain knowledge to generalize from an example, defining a concept
that contains the example [26]. It analyzes a single example in terms of the
domain knowledge and the goal concept and produces a proof (or explanation)
that shows that the example is an instance of the goal concept. Here, the goal
concept found satisfies the operationality criterion, which is a predicate over concept
definitions that specifies the form in which the concept must be learned.
The proof tree in the process of generalization is constructed by replacing each
instantiated rule by the associated general rule.
An explanation-based strategy can start from general concepts to derive
specific ones, or vice versa. It consists of two phases: explanation and general-
ization. In the explanation phase, the relevant features of the training example
are isolated in order to create an explanation structure that terminates in an
expression satisfying the operationality criterion. In the generalization phase, a
set of sufficient conditions are found to satisfy the explanation. This is done by
regressing the goal concept through the explanation structure and by composing
terms from different parts of the explanation to form a valid generalization.
One of the problems in explanation-based learning is that learning through
multiple examples may result in multiple rules that cannot be combined into a
single rule. This leads to gradual degradation in efficiency in the generalized
rules. Another problem is that explanation-based generalization does not create
new parameters; hence, parameters not explicitly expressed in the proof cannot
be generalized. In this context, studies have been made to generalize the structure
of an example proof such that a fixed number of rule applications in the
proof is generalized into an unbounded number of applications [7].
2.3 Credit Assignment
Credit assignment entails the apportioning of feedback signals to individual decisions
made in the past as well as rules/entities leading to a decision. The
application of credit assignment requires a world model that captures the relationship
among states, decisions, and feedback signals generated by the learning
system or measured in the environment. This world model is explicitly defined
in knowledge-rich applications, but may have to be inferred during learning and
generalization when domain knowledge is not available. Credit assignment is
further complicated when there may be delays in getting the feedback signals
due to a decision. In this case, multiple subsequent decisions may have been
made between the times a decision was made and its feedback signal received.
There are two types of credit assignment: structural and temporal [33].
Structural credit assignment entails ways of using feedback signals to refine the
individual components or rules of a hypothesis. This process is systematic in
explanation-based learning as it involves rewriting one rule into another in the
proof structure. In other learning approaches, credit assignment may not be
possible when domain knowledge is missing. In this case, one may employ
population-based learning [37] that maintains a population of competing hypotheses
and delays choosing the best hypothesis to evaluate or new ones to
create until more tests are performed on each of the alternatives.
Temporal credit assignment, on the other hand, entails the apportioning of
temporal global feedback signals by the learning system to the past decisions
that affect these signals. When a decision is applied, its temporal scope is the
interval of time during which its direct effect can be observed in the application
environment. If the temporal scope is infinite and state changes are Markovian,
then the effects due to a feedback signal will be attributed only to the most recent
decision made in the past, and the effects of other decisions will be felt indirectly
through intervening decisions and states. When the temporal scope is finite and
state changes are dependent and non-Markovian, then an approximate temporal
model is needed for temporal credit assignment. Temporal credit assignment is
used extensively in reinforcement learning [33].
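The sketch below shows one simple temporal credit-assignment scheme (a toy choice of ours, not one prescribed by the approaches above): a delayed feedback signal is split over the decisions that fall inside its temporal scope, with geometrically less credit assigned to older decisions.

# Toy temporal credit assignment: apportion a delayed feedback signal
# to past decisions, weighting recent decisions more heavily
# (exponential discount with factor gamma).

def temporal_credit(decisions, feedback, gamma=0.9):
    # decisions are ordered oldest -> newest; returns {decision: credit}
    weights = [gamma ** (len(decisions) - 1 - i) for i in range(len(decisions))]
    total = sum(weights)
    return {d: feedback * w / total for d, w in zip(decisions, weights)}

print(temporal_credit(['d1', 'd2', 'd3'], feedback=1.0))
# the most recent decision, d3, receives the largest share of credit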
Credit assignment can be either implicit or explicit. An example of implicit
credit assignment is done in Smith's LS-1 system [32] in which rules that are
physically close together on the list representing the knowledge structure stand
a good chance of being inherited as a group. On the other hand, in explicit
credit assignment, explicit rules are defined for credit assignment. Examples of
explicit temporal credit-assignment mechanisms are the profit sharing plan and
the bucket brigade algorithm in classifier systems [14]. A hybrid of implicit and
explicit credit assignment can also be defined [11].
3 Generalizability Measures
To evaluate whether the goal of learning and generalization is achieved, generalizability
measures are used to evaluate the quality of generalization. These
measures are not limited to the field of machine learning but are used in performance
evaluation of many other areas. For instance, in evaluating the speed of a
computer, one generally defines a reference computer, such as the VAX 11/780,
and computes the speedup of the computer with respect to the reference for a
collection of benchmarks. Based on the evaluation results, one generalizes the
speedup to benchmarks not tested in the evaluation.
Since different regions of the problem space of an application domain may
have different characteristics, it may not be possible to evaluate generalization
across all examples of a problem space. To this end, the problem space is decomposed
into smaller partitions before generalization is evaluated. For instance,
in evaluating a computer, one defines its speedups for different collections of
benchmarks in order to reflect its performance under different applications.
In the partitioning of a problem space, we define a problem subspace as a
user-defined partition of a problem space so that concepts/hypotheses for one
subspace are evaluated independent of concepts/hypotheses in other subspaces.
Such partitioning is generally guided by common-sense knowledge or by user
experience in solving similar application problems. To identify a problem sub-
space, we need to know one or more attributes to classify test cases and a set of
decision rules to identify the subspace to which a test case belongs. For instance,
in evaluating the speedup of a computer, the partitioning of the class of all applications
is guided by user experience into the class of scientific applications
and the class of business applications.
Given a subspace of test cases, we define a problem subdomain as a partitioning
of the subspace into smaller partitions so that the evaluation of a con-
cept/hypothesis can be done quantitatively for all the test cases in a subdomain.
Such partitioning is necessary because the statistical performance metrics computed
(such as average or maximum) are not meaningful when the performance
values are of different ranges and distributions. To continue from the previous
example, the class of scientific benchmarks are further partitioned into subdomains
according to their computational behavior, such as whether a program is
CPU-bound or I/O-bound.
In the same way that test cases are partitioned into subspaces, we need to
know the attributes to classify test cases and a set of decision rules to identify
the subdomain to which a test case belongs. This may be difficult in some
applications because the available attributes may not be well defined or may
be too large to be useful. For instance, the attribute to classify whether a
benchmark program is CPU-bound or I/O-bound is imprecise and may depend
on many underlying characteristics of the program.
After evaluating the performance of a hypothesis in each subdomain, we
need to compare its performance across subdomains. For instance, one would
be interested to know whether a computer has high speedups across both CPU-bound
and I/O-bound applications. This comparison may be difficult because
test cases in different subdomains of a subspace may have different performance
distributions and cannot be compared statistically. We address this issue in
Section 3.3. In the next subsection, we examine some formal results in generalizability
for classification problems in one subdomain.
3.1 Formal Results on Learnability and Generalizability
Formal methods to deal with generalizability in learning with one performance
measure have been studied extensively in computational learning theory. They
center on the notion of PAC-learnability [35] of a concept class C by a learning
algorithm L, where a concept is defined as a subset of some instance space
X. A learner tries to learn a target concept by finding out, for points of X (drawn
randomly), whether they belong to the target concept. The goal of the learner is
to produce, with high probability (at least 1 − δ), a hypothesis that is close (within ε)
to the target concept, assuming that the learner does not know the underlying
distribution of the sample points. (The following definitions are from a survey
paper by Kearns et al. [17].) A concept C produced by a learning algorithm
L on input vector T is approximately correct if the error rate P(C ⊕ T), the probability
of the symmetric difference between C and T, is at
most ε. If, for any concept class C, with probability distribution P, accuracy
parameter ε, and confidence parameter δ, the probability that the output C is
approximately correct is at least (1 − δ), then the learning algorithm is probably
approximately correct, and L is said to PAC-learn C. A learning algorithm L is
a polynomial PAC-learning algorithm for class C if L PAC-learns C with both
time complexity and sample complexity polynomial in 1/ε and 1/δ.
To understand bounds on estimation by a learning algorithm, we need to estimate
the largest number of input-space points for which almost every possible
dichotomy is achieved by some concept from a class C. VC-dimension (named
after Vapnik and Chervonenkis [36]) addresses this issue. VC dimension, V , of
a concept class C is the size of the largest set S of input-space points such that
for every subset U ⊆ S, there exists some concept C ∈ C that labels exactly the points
of U positively; that is, the function f realized by C satisfies f(x) = 1 for x ∈ U and
f(x) = 0 for x ∈ S \ U, where C denotes the set of all such functions realizable by the concept class.
Sauer [29] notes that whenever the VC dimension of a function class is finite,
the number of dichotomies grows subexponentially (actually, polynomially) in
the number of points. The probability of a concept learned with a large estimation
error producing correct outputs for a given set of points goes rapidly to zero
as the size of the set increases. A learning algorithm whose outputs are always
consistent with the examples seen so far is called a consistent PAC-learning
algorithm. If the VC dimension of a concept class is finite, then a consistent
learning algorithm trained on a sufficiently large set of examples is likely to
learn the correct concept.
Blumer et al. [4] have derived bounds on the number m(ε, δ) of examples
needed by a consistent algorithm to PAC-learn a concept class C having VC
dimension d; this bound was later improved by Ehrenfeucht et al. [10].
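For concreteness, one commonly cited form of such a sufficient sample size is m(ε, δ) ≥ max((4/ε) log2(2/δ), (8d/ε) log2(13/ε)); the constants differ across presentations, so the sketch below should be read only as showing how the bound scales with d, ε, and δ, not as the exact bound of either paper.

# Order-of-magnitude sketch of a sufficient sample size for a consistent
# PAC learner over a class of VC dimension d (one common constant choice).
from math import log2, ceil

def pac_sample_bound(vc_dim, eps, delta):
    m1 = (4.0 / eps) * log2(2.0 / delta)
    m2 = (8.0 * vc_dim / eps) * log2(13.0 / eps)
    return ceil(max(m1, m2))

print(pac_sample_bound(vc_dim=10, eps=0.1, delta=0.05))   # several thousand examples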
Baum and Haussler [3] have used these results to relate the size of a neural
network, the accuracy of the learned concept, and the number of examples
needed in order to guarantee a particular degree of accuracy. Their analysis suggests
that generalization can be improved by pruning unnecessary hidden units
during learning. The reduced architecture has VC dimension not significantly
larger than the VC dimension for an optimal number of hidden units. Baum
and Haussler establish the following necessary and sufficient conditions for valid
generalization for learning in neural networks of thresholded binary units.
• A network of N nodes and W weights which, after being trained on at
least O((W/ε) log(N/ε)) examples, classifies at least a fraction (1 − ε/2) of them correctly,
will almost certainly classify a fraction (1 − ε) of future examples correctly.
• A fully connected feedforward network with one hidden layer, trained on
fewer than Ω(W/ε) examples, will, for a dichotomy realizable by the network,
fail to find the requisite set of weights for more than a fraction (1 − ε) of
future examples.
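Ignoring all constant factors, the first condition suggests a training set on the order of (W/ε) log(N/ε) examples; the rough calculation below is only meant to convey the scale.

# Rough training-set size suggested by the first Baum-Haussler condition,
# with every constant factor dropped: m ~ (W / eps) * log(N / eps).
from math import log, ceil

def baum_haussler_examples(num_weights, num_nodes, eps):
    return ceil((num_weights / eps) * log(num_nodes / eps))

# e.g., a small network with 1,000 weights and 50 nodes, eps = 0.1
print(baum_haussler_examples(num_weights=1000, num_nodes=50, eps=0.1))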
Haussler [12] shows that, for it to be likely that feedforward networks with
sigmoidal units obtain a low estimation error, the number of examples must
grow linearly with both the number of modifiable weights and the number of
hidden layers. That is, either of the following desiderata demands a larger
training sample: a) lowering the estimation error; b) increasing the confidence;
and, c) learning with sigmoids having a higher slope.
Barron [2] shows that, for a feedforward network having n sigmoidal units
and d input units and trained on N examples, the total mean squared error
(approximation plus estimation) between the true function and the estimated
function is bounded from above by O(1/n) + O((nd/N) log N).
In summary, the theory in learnability provides conditions and bounds on
generalization that are useful when certain restricted assumptions are met. Such
assumptions may be difficult to ascertain in practice because it is difficult to
characterize the set of test cases and hypotheses precisely. Under such condi-
tions, heuristic methods to measure generalizability need to be developed. In
the next two subsections, we present some results in this area.
3.2 Anomalies in Performance Normalization
In general learning problems, the raw performance results obtained in evaluating
hypotheses on examples may depend on the size and characteristics of the
examples and may not be directly comparable. For instance, Table 1 shows the
CPU times of four computers in evaluating three benchmarks. Obviously, these
performance values cannot be aggregated directly because they belong to different
ranges and are of different distributions. To aggregate them statistically, we
must normalize them first. In the following, we show five different normalization
methods.
a) Average improvement ratio. Using the performance values of one hypothesis
as the baseline, we normalize each performance value of another hypothesis
by computing its ratio with respect to that of the baseline when tested on the
same example.
Table 1: Summary of raw CPU times of four computers in evaluating three benchmarks.
Table 2: Anomalous orderings of computers in decreasing average normalized speedups using three different normalization methods (columns: baseline, average improvement ratio, average symmetric improvement ratio, harmonic mean).
The average of the improvement ratios is then used as the aggregate
performance measure. The drawback of this approach is that different
ordering of the hypotheses can be obtained, depending on the baseline hypothesis
used. To illustrate this point, consider the performance data presented in
Table
1. The second column of Table 2 shows the three anomalous orderings of
the four computers based on their average normalized speedups using each computer
as the baseline for normalization. This shows that generalization based on
the average improvement ratios does not always lead to consistent conclusions.
b) Average symmetric improvement ratio. This is a normalization method
we have developed before to avoid anomalies in inconsistent orderings of two
hypotheses due to the choice of the baseline hypothesis [39]. The idea is to
avoid the problem in improvement ratios that put different weights in different
ranges of normalized performance values. Note that degradations in the original
improvement ratio are between zero and one, whereas improvements are
between one and infinity. Consequently, when improvement ratios are averaged,
degradations carry less weight than improvements.
The symmetric improvement ratio is defined as follows:
    S_sym,i = S_{+,i} − 1  if S_{+,i} ≥ 1,    S_sym,i = 1 − 1/S_{+,i}  if S_{+,i} < 1,
where S_{+,i} is the original improvement ratio on the i'th test case. The symmetric
improvement ratio has the property that improvements are in the range
between 0 and infinity, and degradations are in the range between 0 and negative
infinity. For two hypotheses, when we reverse the role of the baseline
hypotheses, their symmetric improvement ratios only change in sign. Hence,
symmetric improvement ratios avoid anomalies in performance orderings with
two hypotheses.
However, anomalies in performance ordering are still present when more than
two hypotheses are concerned. This is illustrated in Table 2 that shows three
different orderings when different computers are used as the baseline. Hence,
generalization based on the average symmetric improvement ratios may not lead
to consistent conclusions.
c) Harmonic mean performance. This is defined as the harmonic mean of the improvement ratios over the n test cases, S_HM = n / (Σ_{i=1..n} 1/S_{+,i}).
Again, as illustrated in Table 2, anomalies in orderings are still present.
d) Geometric mean performance. This is defined as follows:
    S_GM = ( Π_{k=1..n} t_{b,k} / t_{h,k} )^{1/n}.
Taking the logarithm of both sides, we have:
    log S_GM = (1/n) Σ_{k=1..n} log t_{b,k} − (1/n) Σ_{k=1..n} log t_{h,k},    (5)
where t b;k and t h;k are, respectively, the k'th performance values of the base-line
and the hypothesis being normalized. Based on (5), an alternative way to
view a geometric mean is that it is an arithmetic mean of the logarithms of raw
performance values. The effect of the baseline hypothesis on the average normalized
performance is reflected in the first constant term in (5). Hence, when
the baseline is changed, only a constant term will be changed, and performance
ordering is not affected. This is illustrated in the example in Table 1 in which
the ordering C 86 C 75 C 76 C 99 is unchanged when the baseline is changed.
e) Average normalized performance with respect to the median performance.
This belongs to a general class of methods that normalizes the performance
values of hypotheses on each test case with respect to a test case-specific constant
that is invariant as more hypotheses are evaluated. The specific method here
uses the median performance value of all the hypotheses on each test case as the
baseline for normalization. Unlike using a baseline hypothesis that may induce
a different ordering when the baseline is changed, the median performance is
invariant with respect to the hypotheses and test cases in a subdomain. Using
this normalization method, the performance distributions of all the test cases
will center around zero. This method is illustrated in the example in Table 1
in which the ordering is C 76 C 75 C 99 C 86 . In computing this ordering, we made a
simplifying assumption that the median performance of each computer on the
three benchmarks is the same as the median performance of the computer across
all possible benchmarks.
A potential problem with this approach is the unavailability of the true
median performance value of hypotheses for each test case. Hence, the sample
median may have to be used instead. Unfortunately, estimated sample medians
are inaccurate during learning because hypotheses may not be tested adequately,
and sample medians are sensitive to the hypotheses tested. Solutions to this
issue are still open at this time.
In summary, anomalies in performance normalization do not exist when
either the baseline is fixed (as in the case of the median performance) or the
effect of changing the baseline only results in changing a constant term in the
(transformed) normalized performance (as in the case of the geometric mean
performance). In other cases, it is possible for the order of the hypotheses to
change when the baseline is changed. The necessary and sufficient conditions
for anomalies to happen are still open at this time.
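To make the contrast concrete, the following sketch ranks three computers by the average improvement ratio and by the geometric mean, using invented CPU times (not the data of Table 1). With these particular numbers the improvement-ratio ordering changes when the baseline computer changes, while the geometric-mean ordering stays the same, as argued above.

# Baseline-dependence sketch with invented CPU times (seconds); rows are
# computers, columns are benchmarks. Smaller time is better, so the
# improvement ratio of h over baseline b on case k is t[b][k] / t[h][k].
from statistics import geometric_mean

times = {'C1': [10.0, 200.0, 3.0],
         'C2': [12.0, 150.0, 4.0],
         'C3': [ 8.0, 260.0, 2.5]}

def ratios(h, base):
    return [tb / th for tb, th in zip(times[base], times[h])]

def avg_improvement(h, base):       # method (a): arithmetic mean of ratios
    return sum(ratios(h, base)) / len(times[h])

def geo_mean(h, base):              # method (d): baseline only adds a constant
    return geometric_mean(ratios(h, base))

for base in times:                  # ordering under (a) depends on the baseline
    by_avg = sorted(times, key=lambda h: avg_improvement(h, base), reverse=True)
    by_geo = sorted(times, key=lambda h: geo_mean(h, base), reverse=True)
    print(base, 'avg ratio:', by_avg, ' geometric mean:', by_geo)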
3.3 Generalizability Measures Across Subdomains
When hypotheses are tested across different subdomains of an application, their
performance values, even after normalization, may have different ranges and different
distributions. As a result, these performance values cannot be aggregated
statistically, and the hypotheses cannot be compared directly and generalized
across subdomains. In this section, we present a heuristic method to evaluate
performance across subdomains in a range-independent way. We assume that
the performance values of testing a hypothesis in a subdomain are independent
and identically distributed. This assumption allows the values in a subdomain
to be aggregated by statistical methods, such as averaging.
In the following, we present a method that uses the sample mean as a statistical
estimate of the population mean. To address uncertainties in using sample
means, we have studied a concept called probability of win [39], Pwin , that compares
two sample means and computes the probability that one sample mean
is larger than another. This is similar to hypothesis testing in which we take
random samples to test whether a property of a population is likely to be true
or false [15]. Obviously, it may be difficult to test a hypothesis fully by testing
the entire population of test cases or by testing only a single random sample.
There are four steps in general hypothesis testing. a) Specify a significance
level ff. b) Specify the testing hypotheses that include both null hypothesis H 0
and alternative hypothesis H 1 . c) Find the corresponding acceptance region
using lookup tables. d) Make decision on the sample value. If the sample falls
in the acceptance region, then accept H 0 and reject H 1 ; otherwise, reject H 0
and accept H 1 .
The probability of win measures statistically how much better (or worse) the
sample mean of one hypothesis is as compared to that of another. It resembles
the significance level in general hypothesis testing, but there are two major
differences. First, only one hypothesis, {H: μ > 0} (i.e., the hypothesis outperforms the baseline), is specified, without the
alternative hypothesis. Further, in contrast to hypothesis testing, acceptance
confidence is not given in advance but is evaluated based on sample values.
One advantage of Pwin is that it is between zero and one and is independent
of the actual performance difference across subdomains. Hence, it can be used
to compare hypotheses in a uniform way across subdomains.
Consider the performance of H i in subdomain j. (For convenience of for-
mulation, subscript j is ignored in the following discussion.) Let μ_i and σ_i be
the true mean and true standard deviation of the mean normalized performance
with respect to the baseline hypothesis H_0.¹ When n_i samples are taken, we
can calculate the sample mean μ̂_i and sample standard deviation σ̂_i. By the Central
Limit Theorem,
    μ̂_i ~ N(μ_i, σ_i / √n_i),
where N is the normal distribution function with mean μ_i and standard deviation
σ_i / √n_i. Let t be
    t = (μ̂_i − μ_i) / (σ̂_i / √n_i),
where t has Student's t-distribution with (n_i − 1) degrees of freedom when the
number of samples is less than 30 and the variance is unknown. The probability
that this hypothesis is better than H_0 with mean value zero² is
    Pr(H_i is better than H_0) = ∫_{−∞}^{√n_i μ̂_i / σ̂_i} p(t) dt,    (7)
where p(t) is the t-distributed density and the acceptance region of this hypothesis is (−∞, √n_i μ̂_i / σ̂_i]. Note that the
right bound of the acceptance region is a random variable that depends on both
the sample mean and the sample variance.
Example 1. Table 3 illustrates the Pwin for three hypotheses. We see that
Pwin of H 1 increases towards one when the number of samples increases. (H 1
is better than H 0 .) In contrast, Pwin of H 2 reduces to zero when the number
of samples is increased. (H 2 is worse than H 0 .) Last, Pwin of H 3 reaches the
maximum value 1.0, which means H 3 is definitely better than H 0 .
Note that Pwin considers both the mean and variance. Hence, when Pwin of
a hypothesis is close to 0.5, it is not clear whether the hypothesis is better than
or worse than the baseline.
¹ Any one of the normalization methods presented in Section 3.2 can be used.
² If the average normalized performance of H_0 is not zero, then appropriate shifting in the
mean value to zero can be performed.
Table 3: Examples illustrating how Pwin changes with increasing number of samples. (Performance is normalized with respect to H_0.)
Given baseline hypothesis H_0, we now show Pwin of H_i in subdomain j with
respect to the average performance of H_0. Assuming sample mean μ̂_{i,j}, sample
variance σ̂²_{i,j}, and n_{i,j} test cases, Pwin is defined as follows:
    Pwin(i, j) = F( √n_{i,j} μ̂_{i,j} / σ̂_{i,j} ),
where F is the cumulative distribution function of Student's t-distribution
with (n_{i,j} − 1) degrees of freedom, and Pwin(i, j) is the probability that the true performance
(population mean) of H_i in subdomain j is better than that of H_0.
When n_{i,j} → ∞, we have
    Pwin(i, j) = Φ( √n_{i,j} μ̂_{i,j} / σ̂_{i,j} ),
where Φ is the standard cumulative normal distribution function [9].
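As a concrete reading of this definition, the sketch below (our own illustration, using SciPy) computes Pwin for one hypothesis in one subdomain from the sample mean, sample standard deviation, and sample count of its normalized performance, with the baseline mean shifted to zero.

# Pwin for one hypothesis in one subdomain: Student's t CDF evaluated at
# sqrt(n) * sample_mean / sample_std (baseline mean assumed shifted to zero).
from scipy.stats import t

def p_win(sample_mean, sample_std, n):
    stat = sample_mean * (n ** 0.5) / sample_std
    return t.cdf(stat, df=n - 1)   # tends to the standard normal CDF as n grows

print(round(p_win(0.10, 0.25, 10), 3))    # modest evidence of improvement
print(round(p_win(0.10, 0.25, 100), 3))   # same mean, more samples: Pwin near 1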
It is important to point out that probabilities of win are used to evaluate
whether a hypothesis is better than the baseline and are not meant to rank-order
all the hypotheses. Hence, when hypotheses are ordered using their probabilities
of win and performance is normalized by any method in which the baseline can
be changed, anomalies in performance ordering may still happen. As illustrated
in the reference [41], this phenomenon happens because not only the mean
but the variance of the baseline are important in determining the ordering of
the hypotheses. The variance of the performance values places another degree
of freedom in the performance ordering, which can change the ordering when
the variance changes. Consequently, the ordering may change when a baseline
with a small variance is changed to one with a large variance (or vice versa).
This is true even for normalization methods that do not have anomalies when
hypotheses are ordered by their mean values, such as the geometric mean. In
short, anomalies in performance ordering will not happen when hypotheses are
ranked by their probabilities of win and when the baseline hypothesis is fixed,
such as the case when the median performance of the hypotheses is used as the
baseline.
We are now ready to define a generalizability measure across multiple sub-
domains. Because different subdomains have different statistical behavior, performance
from different subdomains must be treated independently and cannot
be combined.
There are two assumptions on the strategies presented here.
ffl We assume that the set of subdomains used in the design process are representatives
of all the subdomains in the application. These subdomains
behave in a statistically similar fashion to subdomains used in learning
and in generalization.
ffl We assume that the relative importance of one subdomain as compared
to another is unknown, and that the performance of hypotheses in subdomains
may be dependent. Under these assumptions, we cannot aggregate
performance values of hypotheses across subdomains. Our strategy is to
select hypotheses so that their worst-case performance across all subdomains
is better than a minimum level.
The objective of generalization here is to select a hypothesis that is better
than the incumbent hypothesis over a problem domain. When there are multiple
such hypotheses, our procedure should attempt to maximize the likelihood of
selecting the best hypothesis among the given set. Define PWIN(i) = min_j Pwin(i, j),
the worst-case probability of win of H_i over all the subdomains tested.
When there is a baseline hypothesis H_0, we apply one of the strategies in
Section 3.2 to normalize the performance of a hypothesis in a subdomain with
respect to the baseline hypothesis. We consider H_i to be better than H_0 in
subdomain j if Pwin(i, j) ≥ 0.5 + Δ. Note that PWIN(i) is independent
of subdomain j and can be used in generalization if it were true across all
subdomains, even those subdomains that were not tested in learning.
The following are three possible outcomes when comparing PW IN (i) of H i
to H 0 .
a) H i is the only hypothesis that is better than H 0 in all subdomains. H i can
then be chosen as the hypothesis for generalization.
Multiple hypotheses are better than H 0 in all subdomains. Here, we should
select one hypothesis that maximizes the likelihood of being better than H 0 over
the entire domain. This likelihood (or degree of confidence) can be adjusted
by increasing \Delta, which is equivalent to placing a tighter constraint in each
subdomain, hence eliminating some potential hypotheses that are found to be
better than H 0 under a looser constraint.
c) No hypothesis is better than H 0 in all subdomains. Since no hypothesis is
superior to H 0 , H 0 is the most generalizable.
Alternatively, it is possible to find hypotheses whose PWIN(i) exceeds 0.5 but not the 0.5 + Δ threshold.
Such hypotheses have less certainty of performing better than H_0
across all the subdomains. However, since PWIN is based on the worst-case
Pwin across all the subdomains, hypotheses selected this way may still perform
better than the baseline in some subdomains. Such hypotheses should be considered
as alternatives to H 0 .
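A minimal sketch of this selection rule follows, assuming (as the discussion above suggests) that PWIN(i) is the worst-case Pwin of H_i over the subdomains and that a hypothesis is retained only when it clears 0.5 + Δ in every subdomain; the hypothesis names and Pwin values are invented.

# Cross-subdomain selection: PWIN(i) is the minimum Pwin over subdomains,
# and a hypothesis is kept only if its worst case clears 0.5 + delta.

def pwin_across(pwin_by_subdomain):            # {subdomain: Pwin(i, j)}
    return min(pwin_by_subdomain.values())

def select(candidates, delta=0.05):            # {hypothesis: {subdomain: Pwin}}
    return [h for h, p in candidates.items()
            if pwin_across(p) >= 0.5 + delta]

candidates = {'H1': {'d1': 0.91, 'd2': 0.76, 'd3': 0.83},
              'H2': {'d1': 0.97, 'd2': 0.48, 'd3': 0.88}}
print(select(candidates))                      # ['H1']; H2 fails in subdomain d2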
We have considered so far generalization based on one performance measure.
In general, there may be multiple performance measures in an application, and
generalization determines whether a hypothesis behaves consistently across all
subdomains with respect to all the performance measures. The problem belongs
to a general class of multi-objective optimization problems that can be solved in
some special forms. In our approach, we propose to constrain all but one of the measures
and to optimize the unconstrained measure subject to the constraints [39].
The constraints in this approach are defined with respect to the performance
of an existing baseline hypothesis. This is similar to first normalizing the performance
with respect to that of the baseline and formulating a constraint such
that the normalized performance is larger than one. Again, care must be taken
in normalization because anomalies in performance ordering may happen when
certain normalization methods are used and the baseline is changed.
Probabilities of win have been used to evaluate generalizability in various
genetics-based learning and generalization experiments [39, 15, 34, 24, 41, 40, 16,
38]. These include the learning of load balancing strategies in distributed systems
and multicomputers, the tuning of parameters in VLSI cell placement and
routing, the tuning of fitness functions in genetics-based VLSI circuit testing,
the automated design of feedforward neural networks, the design of heuristics
in branch-and-bound search, range estimation in stereo vision, and the learning
of parameters for blind equalization in signal processing.
4 Example: VLSI Placement and Routing
In this section we illustrate the use of generalizability measures in the design of
heuristics for TimberWolf [30], a software package based on simulated annealing
(SA) [19] to place and route various circuit components on a piece of silicon.
The goal of the package is to minimize the chip area needed while satisfying
constraints such as the number of layers of poly-silicon for routing and the
maximum signal delay through any path. Its operations can be divided into
three steps: placement, global routing, and detailed routing.
The placement and routing problem is NP-hard; hence, heuristics are generally
used. SA used in TimberWolf is an efficient method to randomly search the
space of possible placements. Although in theory SA converges asymptotically
to the global optimum with probability one, the results generated in finite time
are usually suboptimal. Consequently, there is a trade-off between the quality
of a result and the cost (or computational time) of obtaining it.
In TimberWolf version 6.0, the version we have experimented, there are two
parameters to control the running time (which indirectly control the quality of
the result): fast-n and slow-n. The larger the fast-n is, the shorter time SA
will run. In contrast, the larger the slow-n is, the longer time SA will run. Of
course, only one of these parameters can be used at any time.
TimberWolf has six major components: cost function, generate function,
Table 4: The parameter set in TimberWolf (Version 6.0) used in learning and
generalization.
Parameter Meaning Default Generalized
vertical path weight for estimating cost function 1.0 0.958
vertical wire weight for estimating cost function 1.0 0.232
P4 range limiter window change ratio 1.0 1.30
P5 high temperature finishing point 23.0 10.04
P6 intermediate temperature finishing point 81.0 63.70
P7 low temperature finishing point 125.0 125.55
final iteration temperature 155.0 147.99
critical ratio that determines acceptance prob. 0.44 0.333
P10 temperature for controller turn off 0.06 0.112
initial temperature, temperature decrement, equilibrium condition, and stopping
criterion. Many parameters in these components have been well tuned manually
in the last ten years. However, their settings are generally heuristic because we
lack domain knowledge to set them optimally. Moreover, the search of a single
parameter set that works well across multiple circuits has been done in an
ad hoc fashion. Our goal here is to show that, with a good generalization
procedure, it is possible to find a single parameter set that improves both the
cost and quality across multiple circuits.
Table
4 lists the parameters we have focused on in our experiments and their
corresponding default values. In addition, the package also uses a random seed
that results in different performance when different seeds are used.
We have used seven benchmark circuits that were mostly from ftp.mcnc.org
in /pub/benchmark [21] (s298, s420, fract, primary1, struct, primary2, indus-
trial1). To show that generalization works, we divided the circuits into two
sets: The first set is a learning set consisting of three smaller circuits (s298,
s420, and primary1) that we used to experiment and find a generalized parameter
set; whereas the second (the remaining four circuits) is a testing set that
we used to test the new parameter set found.
In our experiments, we have studied only the standard-cell placement prob-
lem, noting that other kinds of placement can be studied in a similar fashion.
We have also used fast-n values of 1, 5, and 10, respectively.
The domain in our study is the set of all performance values of possible
circuits, large and small, for all combinations of parameters of TimberWolf. In
this domain, we have found that different parameter values of the parameter set
defined in Table 4 lead to different costs and qualities across different circuits as
well as across different temperature schedules (fast-n). Hence, we cannot group
the performance values of mappings of multiple circuits as well as multiple temperature
schedules into one subdomain. In addition, we cannot subdivide the
performance values of a circuit for a given temperature schedule into multiple
subdomains because the only variable left in TimberWolf causing performance
changes is the initial random seed. In short, we define a subdomain in this application
as the set of performance values (quality and cost) of all the mappings
for one circuit and one temperature schedule.
Since the quality and cost of a mapping generated by TimberWolf depend on
the random seed used, we need to normalize the quality and cost of the mapping
found using a new parameter set and a given random seed with respect to those
of the mapping found using the default parameter set and the same random
seed. In our experiments, we used the symmetric improvement ratio (2) as our
normalization method and averaged the performance values over mappings of a
circuit due to different random seeds. As the baseline for normalization is the
default parameter set and is fixed, there are no anomalies in the ordering of
parameter sets due to changing baselines.
To find parameter sets that can improve over the default parameter set, we
need to have a method to systematically generate new parameter sets. Any
method that explores the space of parameter sets in a systematic fashion is ade-
quate. In our experiments, we applied TEACHER [39], a learning package based
on genetic algorithms we have developed, to explore the space. TEACHER
found thirty sets of parameter values, ten for each of the following three sub-
domains: s298 with fast-n of 1, s420 with fast-n of 5, and primary1 with fast-n
of 10. We used a fixed sequence of ten random seeds in each subdomain to find
its statistical performance of the mappings. Each learning experiment involved
1000 applications of TimberWolf divided into ten generations. Based on the
best sets of parameter values, we applied our generalization procedure to
obtain one generalized parameter set. This generalized parameter set as well as
the default parameter set are shown in Table 4.
Figure
4 plots the quality (higher quality in the y-axis means reduced chip
area averaged over 10 runs using the defined random seeds) and cost (average
execution time of TimberWolf) between the generalized parameter set and the
default parameter set on all seven circuits with fast-n of 1, 5, and 10, respec-
tively. Note that all performance values in Figure 4 are normalized using (1)
with respect to those of fast-n of 10, and that the positive (resp., negative)
portion of the x-axes shows the fractional improvement (resp., degradation) in
computational cost with respect to the baseline parameter set using fast-n of 10
for the same circuit. Each arrow in this figure points from the average performance
of the default parameter set to the average performance of the generalized
parameter set.
Among the 21 subdomains (7 circuits and 3 temperature schedules), the
generalized parameter set has worse quality than that of the default in only two
subdomains, and has worse cost in 4 out of 21 subdomains. We see in Figure 4
that most of the arrows point in a left-upward direction, implying improved
quality and reduced cost.
Note that these experiments are meant to illustrate the power of our generalization
procedure. We expect to see more improvement as we learn other
functions and parameters in TimberWolf. Further, improvements in TimberWolf
are important as the system is actually used in industry.
Figure 4: Comparison of normalized average performance between the default
and the generalized HMs (x-axis: normalized symmetric cost; y-axis: normalized symmetric quality). The plots are normalized with respect to the performance
of applying the baseline HM on each circuit using fast-n of 10.
5 Final Remarks
In this paper, we have defined the generalization problem, summarized various
approaches in generalization, identified the credit assignment problem, and
presented some solutions in measuring generalizability.
Existing formal results in measuring generalizability only address some restricted
cases in which performance measures are from one common (possibly
unknown) distribution. In general, the performance of applications may be measured
by multiple metrics, and test cases may be grouped into subsets (or sub-
domains) such that each subset has a different performance distribution. Con-
sequently, existing methods cannot be used to measure generalizability across
subdomains of test cases.
We have presented some systematic methods to evaluate generalizability
within a subdomain. To eliminate dependence on the size of a test case in a
subdomain, we have shown various normalization methods to normalize performance
with respect to a baseline hypothesis. Some of these methods can
lead to anomalies in orderings when hypotheses are rank-ordered by the average
normalized measure and the baseline is changed. Only when the baseline
hypothesis is fixed (like using the median performance as the baseline) or when
the effect of the baseline only exists as a constant in the average normalized
measure (like using the geometric mean) can anomalies be eliminated.
Finally, we have presented some methods to evaluate generalizability across
subdomains. We have introduced a concept called probability of win that measures
the probability that the sample mean of a hypothesis is better than the
population mean of the baseline, given the number of samples tested and the
variance of the samples. As probabilities of win are in the range between zero
and one, they can be used to evaluate generalizability across subdomains. Un-
fortunately, probabilities of win cannot be used to rank-order hypotheses,
even when performance is normalized and averaged using methods like the geometric
mean. This happens because probabilities of win are used to evaluate
whether a hypothesis is better than the baseline and are not meant to rank-order
hypotheses. The variance of the performance values places another degree of
freedom in the ordering, leading to a different ordering when a baseline with a
different variance is used.
--R
The Handbook of Artificial Intelligence
Approximation and estimation bounds for artificial neural networks.
What size net gives valid generalization
Classifying learnable geometric concepts with the Vapnik-Chervonenkis dimension
Classifier systems and genetic algorithms.
Encyclopaedia Britannica
Generalizing number and learning from multiple examples in explanation based learning.
Probability and Statistics for Engineering and the Sciences.
A general lower bound on the number of examples needed for learning.
Credit assignment in rule discovery systems based on genetic algorithms.
Generalizing the PAC model: Sample size bounds from metric dimension-based uniform convergence results
Adaptation in Natural and Artificial Systems.
Properties of the bucket brigade algorithm.
Statistical generalization of performance-related heuristics for knowledge-lean applications
Statistical generalization of performance-related heuristics for knowledge-lean applications
Recent results on Boolean concept learning.
Recent results on Boolean concept learning.
Optimization by simulated annealing.
Genetic Programming.
International Workshop on Layout Synthesis
Parallel Distributed Processing: Explorations in the Microstructure of Cognition
Artificial Neural Networks: Concepts and Theory.
A systematic method for automated learning of load-balancing strategies in distributed systems
Generalization as search.
The truck backer-upper: An example of self-learning in neural networks
The Mathematical Foundations of Learning Machines.
On the density of families of sets.
VLSI Placement and Global Routing Using Simulated Annealing.
Problem solving and rule induction: A unified view.
Flexible learning of problem solving heuristics through adaptive search.
Temporal Credit Assignment in Reinforcement Learning.
Automated learning of the minimal configuration of a feed forward neural network.
A theory of the learnable.
On the uniform convergence of relative frequencies of events to their probabilities.
Teacher: A genetics-based system for learning and for generalizing heuristics
Generalization of heuristics learned in genetics based learning.
Generalization learning techniques for automating the learning of heuristics.
--TR | VLSI cell placement and routing;subdomains;probability of win;anomalies in generalization;machine learning;credit assignment problem generalization |
628000 | Symbolic Interpretation of Artificial Neural Networks | Abstract Hybrid Intelligent Systems that combine knowledge-based and artificial neural network systems typically have four phases involving domain knowledge representation, mapping of this knowledge into an initial connectionist architecture, network training, and rule extraction, respectively. The final phase is important because it can provide a trained connectionist architecture with explanation power and validate its output decisions. Moreover, it can be used to refine and maintain the initial knowledge acquired from domain experts. In this paper, we present three rule-extraction techniques. The first technique extracts a set of binary rules from any type of neural network. The other two techniques are specific to feedforward networks, with a single hidden layer of sigmoidal units. Technique 2 extracts partial rules that represent the most important embedded knowledge with an adjustable level of detail, while the third technique provides a more comprehensive and universal approach. A rule-evaluation technique, which orders extracted rules based on three performance measures, is then proposed. The three techniques are applied to the iris and breast cancer data sets. The extracted rules are evaluated qualitatively and quantitatively, and are compared with those obtained by other approaches. | Introduction
Several researchers have investigated the design of hybrid systems that combine expert and connectionist
subsystems [44, 45, 54, 10, 16, 15, 27]. The typical result is a Knowledge Based Neural
Network (KBNN) system with four phases: (i) the rule base representation phase, where initial
domain knowledge is extracted and represented in a symbolic format (e.g., a rule-based system)
(ii) the mapping phase, where initial domain knowledge is mapped into an initial connectionist
architecture (iii) the learning phase, where this connectionist architecture is trained by a set of
domain examples (iv) the rule extraction phase, where the trained and thus modified connectionist
architecture is mapped back into an updated rule-based system to provide explanation power.
KBNNs attempt to exploit the complementary properties of knowledge based and neural
network paradigms to obtain more powerful and robust systems. HIA [44], KBANN [53, 34],
RAPTURE [27] and KBCNN [10, 11] are examples of KBNN hybrid systems. Figure 1 sketches
typical components of a KBNN system that combines rule-based and connectionist paradigms.
Researchers have also combined connectionist systems with fuzzy logic systems to obtain
Fuzzy Logic Neural Networks (FLNN or NeuroFuzzy) hybrid systems. In FLNNs, the neural
network subsystem is typically used to adapt membership functions of fuzzy variables [6], or to
refine and extract fuzzy rules [48, 47, 24].
Figure 1: Typical components of a KBNN system that integrates knowledge based and connectionist paradigms (a rule-based system and a connectionist architecture linked by initial and revised mappings, a rule extraction module, and an integrated decision maker combining rule-based and connectionist decisions).
Extracting symbolic rules from trained ANNs is an important feature of comprehensive hybrid
systems, as it helps in:
1. Alleviating the knowledge acquisition problem and refining initial domain knowledge.
2. Providing reasoning and explanation capabilities.
3. Supporting cross-referencing and verification capabilities.
4. Alleviating the catastrophic interference problem of ANNs since a different set of rules can
be extracted after a network is retrained using the new environment examples. One can
then examine the resulting rule bases to find out which situation each set is suitable for.
Due to these capabilities, extracting rules from trained ANNs may be essential for obtaining
more powerful, more robust, self-explanatory, and self-maintained hybrid systems.
This paper proposes three rule extraction techniques for KBNN hybrid systems. Also, it
presents a simple rule evaluation procedure that orders rules obtained by any extraction approach,
according to some performance criteria. A qualitative evaluation of the three new techniques and a
comparison with some other approaches is also provided. The next section illustrates key issues in
extracting rules from trained neural networks and summarizes some of the existing rule extraction
techniques. Section 3 describes the proposed techniques and the rule evaluation procedure. In
section 4, we present implementation results of these three techniques using an artificial problem,
as well as the iris and breast cancer data sets. Section 5 compares the performance of rule
sets extracted by our techniques with the rule sets extracted by some other approaches. In
the concluding section we comment on the different rule extraction techniques, summarize the
significance of the proposed techniques and point to future directions.
Rule Extraction
2.1 Issues
Designing an efficient rule extraction module is in fact a difficult task. Several factors should be
carefully considered while designing a rule extraction technique:
1. Transparency of the extracted rules: a transparent system is a self explanatory system
that is capable of attaching sufficient hypotheses and evidence with each of its output
decisions to explain how it reaches them.
2. Granularity of the explanation feature: is the level of detailed hypotheses and evidence
that the system can provide with each of its output decisions.
3. Comprehensiveness of the extracted rules: in terms of the amount of embedded
knowledge captured by them.
4. Comprehensibility: indicated by the number of rules and number of premises in each
extracted rule from a trained network.
5. Fidelity: is a measure of the capability of the extracted rules to mimic the embedded
knowledge in a trained network.
6. Accuracy: an accurate rule-based module is one that can generalize well for unseen examples
7. Portability: is the capability of the rule extraction algorithm to extract rules from different
network architectures.
8. Modifiability: is the ability of extracted rules to be updated when the corresponding
trained network architecture is updated or retrained with different data sets. This issue
depends on how the trained network and the rule extraction modules interact.
9. Refinement Capability: is the capability of the extracted rules to help resolving the
knowledge acquisition bottleneck (i.e, the incompleteness, inconsistency, and/or inaccuracy
of initially acquired domain knowledge).
The quality of extracted rules is improved by increasing their comprehensibility, fidelity, and
accuracy. However, to extract a comprehensive rule base from a trained ANN most, if not all,
embedded knowledge in this ANN should be extracted. In this case, the comprehensibility of
extracted rules will be degraded because the resulting rule base may have too many rules, with
some of them having many premises.
2.2 Existing Rule Extraction Techniques
Research work in the area of extracting symbolic knowledge from trained ANNs has witnessed
much activity recently. This subsection summarizes some of the existing approaches with emphasis
on extracting rules from feedforward (specifically, MLP) ANN architectures. A very rich source of
literature review of different rule extraction approaches is a technical report written by Andrews
et al. [1].
2.2.1 Link Rule Extraction Techniques
The methodology behind most of the techniques for rule extraction from MLPs can be summarized
in two main steps: (i) For each hidden or output node in the network, search for different
combinations of input links whose weighted sum exceeds the bias of the current node. (ii) For
each of these combinations, generate a rule whose premises are the input nodes of this combination
of links. All premises of a rule are conjuncted. Either [35], KT [9], and Subset [52] are three
notable rule extraction algorithms in this category. Some of the main problems of the KT and
the Subset algorithms are: (i) the size of the search space is O(2^l) for a hidden/output node
with l incoming links, assuming that network inputs are binary; (ii) the algorithms extract a large set
of rules, up to β_p(1 + β_n), where β_p and β_n are the numbers of the subsets of positively-weighted
and negatively-weighted links respectively; (iii) some of the generated rules may be repetitive;
(iv) the extracted rules tend to hide significant structures in the trained network. However, the
rules extracted from both algorithms are simple to understand. The size of the extracted rules
can be limited by specifying the number of premises of the rules. Generally, the rules extracted
by both the KT and Subset algorithms are tractable, especially in small application domains.
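To make the link-search step concrete, the sketch below enumerates subsets of positively-weighted links of a single node and emits a rule whenever the subset's weighted sum still exceeds the node's bias in the worst case where every negatively-weighted input is on. This is an illustrative sketch in the spirit of the Subset/KT search, not the published algorithms; the node, its weights, and the bias value are made up.

```python
from itertools import combinations

def subset_style_rules(weights, bias, max_premises=3):
    """Enumerate conjunctive rules for one hidden/output node.

    weights : dict mapping input name -> link weight into this node
    bias    : activation threshold of the node
    Returns premise tuples whose positive links guarantee activation even
    when all negatively-weighted inputs are on (a conservative check).
    """
    pos = {name: w for name, w in weights.items() if w > 0}
    neg_total = sum(w for w in weights.values() if w < 0)   # worst-case negative contribution
    rules = []
    for k in range(1, max_premises + 1):
        for subset in combinations(pos, k):
            if sum(pos[name] for name in subset) + neg_total > bias:
                rules.append(subset)                        # premises are conjuncted
    return rules

# Hypothetical node with four binary inputs
print(subset_style_rules({"A": 2.0, "B": 1.5, "C": -1.0, "D": 0.5}, bias=1.8))
```

The exponential cost discussed above shows up directly in the nested enumeration over subsets.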
Based on the shortcomings of the Subset algorithm, Towell and Shavlik [52] developed another
rule extraction algorithm called MofN. The name of the algorithm reflects the rule format that
the algorithm uses to represent the extracted rules:
If ("at least" M of the following N premises are true) then (the concept designated by the unit is
true).
The rationale behind the MofN is to find a group of links that form an equivalence class in
that all class members have the same effect (i.e., they have similar weight values) and can be used
interchangeably with one another. MofN extracts rules from the KBANN trained network through
six main procedures. Rules extracted by MofN are significantly superior to rules extracted by
other symbolic approaches such as C4.5 [37], Either [35] and LINUS [8] at least for problems like
"promoter recognition in DNA nucleotides" for which it is a natural fit [52].
NeuroRule is another rule extraction approach that uses different combinations of weighted
links to extract rules [43]. The main difference between NeuroRule and MofN is that the former
extracts rules from networks after pruning their architectures and then discretizing their hidden
units activation values.
Recently, Howes and Crook introduced another algorithm that extracts rules from feedforward
neural networks [18]. The network architecture used by this algorithm is restricted to one
hidden layer network trained with a binary sigmoid activation function. The rationale behind
this algorithm is to extract the maximally general rules from the trained network using a linear
Activation Constraint Function, which puts limits on hidden node activation values to satisfy
an activation on output nodes of at least 0.9. After this step the algorithm searches for input combinations
that satisfy the predetermined (constrained) hidden nodes activation values. If found,
the algorithm extracts a corresponding rule to each combination. This algorithm, as well as the
previously mentioned approaches, works for binary networks. Howes and Crook have proposed
an extended version of this algorithm for continuous valued inputs, which is currently far from
efficient, as reported by the algorithm's authors.
We categorize all the approaches mentioned in this subsection as Link Rule Extraction (LRE)
techniques because they all first search for weighted links that cause a node (hidden or output)
to be "active". Then these combinations of weighted links are used to generate symbolic rules.
Heuristic methods are commonly used in the LRE category to bound the search space for rules
and to increase the comprehensibility of the extracted rules. Some researchers use the term
"decompositional methods" to refer to the LRE type techniques [1, 11].
Several other rule extraction approaches that extract rules from feedforward ANNs have
been reported. The main difference between them and the approaches mentioned above is that
they extract rules from specialized ANNs. RuleNet [30] and RULEX [3, 2] are two examples
of this class of approaches. RULEX extracts rules from a Constrained Error Back-Propagation
(CEBP) MLP network, similar to Radial Basis Function (RBF) networks. Each hidden node in
this CEBP network is localized in a disjoint region of the training examples. A distinctive feature
of RULEX is that it controls the search space through its network while other approaches use
heuristic measures to do the same. RuleNet, on the other hand, uses the idea of adaptive mixtures
of local experts [19] to train a localized ANN and then extracts binary rules in an LRE manner. Both
RULEX and RuleNet can be classified as "localized LRE" techniques.
2.2.2 Black-box Rule Extraction Techniques
Another class of rule extraction approaches extracts rules from feedforward networks only by
examining their input-output mapping behavior. An example of such a rule extraction approach
is the algorithm developed by Saito and Nakano to extract medical diagnostic rules from a trained
network [39]. BRAINNE [40], Rule-extraction-as-learning [7], and DEDEC [50] are other examples
of extracting rules by investigating the input-output mapping of a trained network. In this paper
we refer to this class as the Black-box Rule Extraction (BRE) category because rules are extracted
regardless of the type or structure of the neural network. Another name given to this class of rule
extraction techniques is "pedagogical" approaches [3]. For example, DEDEC extracts rules by
ranking the inputs of an ANN according to their importance (contribution) to the ANN outputs
[51]. This ranking process is done by examining the weight vectors of the ANN, which puts
DEDEC on the border between LRE and BRE techniques. The next step in DEDEC is to cluster
these ranked inputs and use each cluster to generate a set of optimal binary rules that describes
the functional dependencies between the attributes of this cluster and the outputs of the ANN.
DEDEC has been implemented using a standard feedforward MLP and a Cascaded Correlation
(CasCor) ANN. In spite of the LRE nature of its ranking procedure, DEDEC is classified as a
BRE since its main theme is to extract rules based on the input-output mapping.
2.2.3 Extracting fuzzy rules from ANNs
Research in the area of Fuzzy Logic Neural Networks (FLNN) or "NeuroFuzzy systems" is concerned
with combining neural networks and fuzzy logic. Some FLNN systems include a fuzzy rule
extraction module for refining fuzzy sets membership functions and explaining the trained neural
network [17, 48, 47, 24].
2.2.4 Extracting rules from recurrent networks
Recurrent networks have shown great success in representing finite state languages [14, 55] and
deterministic finite state automata [13]. Omlin and Giles [33] have developed a heuristic algorithm
to extract grammar rules in the form of Deterministic Finite-state Automata (DFA) from
discrete-time neural networks and specifically from second-order networks. Starting from a defined
initial network state that represents the root of the search space, the DFA rule extraction
algorithm searches the equally partitioned output space of N state neurons in a breadth-first fash-
ion. The authors claim that the DFA rules extraction algorithm improves network generalization
performance based on the stability of the internal DFA representation.
3 Proposed Rule Extraction Approaches
In this section we introduce three different approaches to extracting rule bases from trained neural
networks. The suitability of each approach depends on the network type, inputs, complexity, nature
of application, the required quality of the extracted rules and some other factors as explained
later. The first approach is a Black-box Rule Extraction technique. The second and the third
approaches belong to the Link Rule Extraction category. Also, an evaluation procedure and a rule
ordering algorithm that measure the firing and the false alarm rates of each extracted rule and
order them are introduced and applied to existing rule extractors as well as to the three proposed
methods.
3.1 First Approach (BIO-RE)
The first approach is a simple black box rule extraction technique that is surprisingly effective
within its (relatively) narrow domain of applicability. It is named Binarized Input-Output
Rule Extraction (BIO-RE) because it extracts binary rules from any neural network trained
with "binary" inputs, based on its input-output mapping. If original inputs are not binary, they
have to be binarized using Equation 1:
y i = 1 if x i ≥ μ i , and y i = 0 otherwise,   (1)
where x i is the value of original input X i , μ i is the mean value of X i , and y i is the corresponding
binarized input value of x i . Some of the unique features of BIO-RE are:
1. It does not require any information about the internal structure of the network.
2. It can be used to extract rules from any kind of neural network (e.g. RBFs, MLPs, or
Recurrent Networks).
3. It does not require any specific training regime (supervised or unsupervised).
The outline of the BIO-RE algorithm is as follows:
For a well trained neural network do:
1. Obtain the network output O(Y ) corresponding to each binary input
pattern Y. If the number of input nodes is n, then this conceptually requires 2^n input patterns.
However, the problem specification may remove some combinations that will not occur.
2. Generate a truth table by concatenating each input Y of step 1 and its corresponding output
decision O(Y ) from the trained network. An output is set to 1 if its corresponding
output node is active (above a threshold); otherwise it is 0.
3. Generate the corresponding boolean function (represented in the previously described binary
rule format) from the truth table of step 2.
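The outline above can be sketched in a few lines of Python. The code below assumes a black-box callable `net` that returns output activations for a binary input tuple, and a `means` vector for the Equation-1 binarization; both names are placeholders, and the boolean minimization of the resulting truth table (step 3) is left to a tool such as Espresso.

```python
from itertools import product

def bio_re_truth_table(net, n_inputs, threshold=0.5):
    """Query the trained network on every binary input pattern and build a truth table.

    net : callable mapping a tuple of n_inputs binary values to a sequence of output activations.
    Returns a list of (input_pattern, binary_output_vector) rows; a boolean minimizer
    can then be run on this table to obtain the rules.
    """
    table = []
    for pattern in product((0, 1), repeat=n_inputs):        # all 2**n_inputs patterns
        outputs = net(pattern)
        table.append((pattern, tuple(int(o > threshold) for o in outputs)))
    return table

def binarize(x, means):
    """Equation-1 style binarization: 1 when the raw feature is at least its mean."""
    return tuple(int(xi >= mi) for xi, mi in zip(x, means))
```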
Any available boolean simplification method can be used to perform step 3 of the BIO-RE algorithm
(e.g., Karnaugh map [22], algebraic manipulation, or a tabulation method [29]). We used
Espresso 1 to generate the extracted rules [4]. Rules extracted by BIO-RE are represented in the format:
If [Not] Input-Variable [And [Not] Input-Variable]* --> Consequent j
where [·] is an optional term and [·]* means that the term [·] can be repeated 0 or n times. In
terms of the original inputs, an extracted rule "If Y 1 And Not Y 2 Then O 1 " is rewritten as "If
X 1 ≥ μ 1 And X 2 < μ 2 Then O 1 "; that is, a binary input (e.g., Y 1 ) is represented as X 1 ≥ μ 1 and
a "negated" binary input variable (e.g., Y 2 ) is represented as X 2 < μ 2 . See
Section 4.3 for examples. The BIO-RE approach is suitable when the in-
put/output variables are naturally binary or when binarization does not significantly degrade
the performance. Also the input size (n) should be small. Given that the above conditions are
satisfied, BIO-RE has some advantages:
1. It allows the use of available logic minimization tools.
2. Extracted rules are optimal and cannot be simplified any further. Hence, no rewriting
procedure is required.
1 Espresso is a software package for logic design [38].
3. The extracted rules do not depend on the number of layers of the trained network.
4. The set of rules extracted by BIO-RE is comprehensive and understandable.
5. All premises of the extracted rules are conjuncted.
6. There is no limitation on the number of premises in any of the extracted rules by BIO-RE.
However, the maximum number of premises in any rule is equal to the number of the input
nodes of the network.
The BIO-RE algorithm was tested on three problems. The first is a representative binary
problem used to study BIO-RE soundness and correctness. The other two are public domain examples
which were used to compare its efficiency and performance with other existing algorithms.
Section 4 presents these experimental results.
3.2 Second Approach (Partial-RE)
The idea underlying the Partial-RE algorithm is that it first sorts both positive and negative incoming
links for each hidden and output node in descending order into two different sets based on their
weight values. Starting from the highest positive weight (say i), it searches for individual incoming
links that can cause a node j (hidden/output) to be active regardless of other input links to this
node. If such a link exists, it generates a rule: "If Node i --cf--> Node j ", where cf represents the
measure of belief in the extracted rule and is equal to the activation value of node j with this
current combination of inputs. Values of certainty factors are computed by Equation 3. If a
node i was found strong enough to activate a node j, then this node is marked and cannot be
used in any further combinations when checking the same node j. Partial-RE continues checking
subsequent weights in the positive set until it finds one that cannot activate the current node j
by itself.
It is important to mention that Partial-RE assumes that all inputs have the same range, so
that their effect on the hidden layer is simply determined by the weights. Therefore, the original
input features may need to be scaled using Equation 2:
z i = (x i − μ i ) / (2.0 σ i ),   (2)
where z i is the corresponding scaled input value of the original input value x i and σ i is
the standard deviation of input feature X i . In Equation 2, σ i is multiplied by "2" to provide a
wider distribution of input X i (a range of μ i ± 2σ i will contain approximately 95 percent of the
normally distributed values).
If more detailed rules are required (i.e., the comprehensibility measure p > 1), then Partial-RE
starts looking for combinations of two unmarked links, starting from the first (maximum)
element of the positive set. This process continues until Partial-RE reaches its terminating criteria
(maximum number of premises in rule = p). Also, it looks for negative weights such that if their
inputs are not active then a node in the higher layer of the network is going to be active, and
extracts rules in the format: If Not Node g --cf--> Node j , where the link between node g and node
j has a negative value. Moreover, it looks for small combinations of positive and negative links
that can cause any hidden/output node to be active. In this case extracted rules are represented
as: If Node i And Not Node g --cf--> Node j , where the link between node i and j is positive and
between g and j is negative. After extracting all rules, a rewriting procedure takes place. Within
this rewriting procedure any premise that represents an intermediate concept (i.e a hidden unit)
is replaced by the corresponding set of conjuncted input features that causes it to be active. Final
rules are written in the format: "If [Not] Input-Variable [And [Not] Input-Variable]* --cf--> Consequent j ". See Tables 2 and
6 for examples.
Partial-RE can be used efficiently in applications where the main objective of extracting rules
from trained neural networks is to study the main parameters that cause specific output decisions
to be taken. Moreover, Partial-RE can be used to analyze the correlations between input-output
parameters of many applications that use neural networks for either function approximation or
classification tasks. In such cases, the cost of implementing the Partial-RE is low compared to
the MofN algorithm if a small number of premises per rule is enough. By extracting only certain
rules with small number of premises per rule we are reducing the combinatorial nature of the rule
extraction process into one that is polynomial in n. Partial-RE examines small subsets S j of
incoming links to a hidden or output node j, and extracts a rule if
Σ i∈S j (w ji x i ) ≥ θ j + Δ,
where w ji is the weight value of the link between input x i and hidden/output node j, θ j is
the threshold value of node j, and Δ is a small positive value (between 0.1 and 0.3) called
the certainty parameter. The value of the certainty parameter Δ has been added to the previous
equation to make sure that the incoming links to node j are high enough to cause node j to be
active. Therefore, the extracted rules are "certain" rules. The value of Δ should be chosen based
on how certain the extracted rules should be. In fact, having Δ and p (which determines the
number of premises in a rule) as adjustable parameters increases the efficiency of the Partial-RE
algorithm. Partial-RE is easily parallelizable. Experimental results show that the Partial-RE
algorithm is suitable for large size problems, since extracting all possible rules is NP-hard and
extracting only the most effective rules is a practical alternative.
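A simplified sketch of the per-node Partial-RE search follows. It assumes normalized inputs so that an active input contributes its full weight, handles only positive links (the negative-link and rewriting steps are omitted), and uses hypothetical weight and threshold values; it illustrates the test Σ w_ji x_i ≥ θ_j + Δ, not the complete algorithm.

```python
from itertools import combinations

def partial_re_node_rules(weights, theta, delta=0.2, p=2):
    """Simplified sketch of the Partial-RE search for one hidden/output node.

    weights: dict input-name -> incoming weight of node j; theta: node threshold;
    delta: certainty parameter (0.1-0.3). Inputs are assumed normalized so that
    an active input contributes its full weight (an assumption of this sketch).
    """
    pos = sorted(((w, n) for n, w in weights.items() if w > 0), reverse=True)
    rules, marked = [], set()
    # Pass 1: single positive links strong enough on their own (these get marked).
    for w, n in pos:
        if w >= theta + delta:
            rules.append((n,))
            marked.add(n)
        else:
            break                       # sorted descending: later links are weaker
    # Pass 2 (p > 1): small combinations of the remaining unmarked positive links.
    rest = [n for _, n in pos if n not in marked]
    for size in range(2, p + 1):
        for subset in combinations(rest, size):
            if sum(weights[n] for n in subset) >= theta + delta:
                rules.append(subset)
    return rules

# Hypothetical node
print(partial_re_node_rules({"x1": 1.4, "x2": 0.9, "x3": 0.6}, theta=1.0, delta=0.2, p=2))
```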
3.3 Third Approach (Full-RE)
Like the Partial-RE approach, Full-RE falls in the LRE category. It is notable because:
1. It extracts rules with certainty factors from trained feedforward ANNs.
2. It extracts all possible rules that represent the semantic interpretation of the internal structure
of the trained neural network that they were extracted from.
3. It can extract rules from networks trained with continuous, normal, and binary inputs.
Therefore, there is no restriction on the values that any input feature can take. This
capability makes Full-RE a universal extractor.
4. It is applicable to any neural network node (unit) with a monotonically increasing activation
function.
After examining different possible combinations of incoming links and feed-forwarding their effect
to the output nodes, Full-RE first generates intermediate rules in the format:
If (c 1 X 1 + c 2 X 2 + · · · + c n X n ≥ μ j ) --> Consequent j
where c i is a constant representing the effect of the i-th input (X i ) on Consequent j and μ j is a
constant determined based on the activation value of node j to make it active. If node j is in the
layer above node i (e.g., node i is an input node and node j is a hidden node) then c i represents
the weight value w ji of the link between these two nodes.
Note that a range of X i values may satisfy an intermediate rule, and one would want to determine
a suitable extremum value in such a range. To make this tractable, each input range has to
be discretized into a small number of values that can be subsequently examined. Thus, each input
feature X i is discretized using k intervals with boundaries D i = (d i,0 , d i,1 , . . . , d i,k ),
where d i,l−1 and d i,l are the lower and upper boundary values of interval l of input X i respec-
tively. Different discretization approaches can be exploited to compute discretization boundaries
of input features X i s [46, 5, 23, 26, 56]. Full-RE uses the Chi2 [25] algorithm 2 , a powerful discretization
tool, to compute discretization boundaries of input features. When Full-RE finds more
2 We are thankful to Liu and Setiono for making their Chi2 source code available to us.
than one discretization value of an input X i that can satisfy the intermediate rule (i.e., the rule
has more than one feasible solution) then it chooses the minimum or the maximum of these values
based on the sign of the corresponding effect parameter c i . If c i is negative then Full-RE chooses
the minimum discretization value of X i , otherwise it chooses the maximum value. However, all
selected discretization values should satisfy the left hand side (the inequality) of the intermediate
rule and the boundary constraints of all input features of this inequality.
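Full-RE obtains the boundaries d_{i,l} from the Chi2 algorithm. As a stand-in only (not Chi2), the following quantile-based discretizer shows the kind of boundary vector D_i that the subsequent steps consume.

```python
import numpy as np

def equal_frequency_boundaries(values, k=4):
    """Return k-interval boundaries (d_0 < d_1 < ... < d_k) for one input feature.

    A quantile-based stand-in for the Chi2 discretizer used by Full-RE.
    """
    qs = np.linspace(0.0, 1.0, k + 1)
    return np.quantile(np.asarray(values, dtype=float), qs)

# Example: boundaries for a hypothetical petal-length column
print(equal_frequency_boundaries([1.4, 1.5, 4.5, 4.7, 5.1, 5.9, 6.3], k=3))
```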
The Full-RE method can be summarized in the following steps:
For each hidden node j in a well trained MLP do:
1. Consider the equation w j1 X 1 + w j2 X 2 + · · · + w jn X n ≥ μ j , where μ j is chosen
high enough to make node j active. If the
activation function of node j is sigmoid, then μ j is determined by inverting the sigmoid at the
required activation level of node j.
2. Given the discretization boundaries of each input feature, consider the Linear Programming
(LP) problem of minimizing w j1 X 1 + w j2 X 2 + · · · + w jn X n
such that w j1 X 1 + · · · + w jn X n ≥ μ j and a i ≤ X i ≤ b i for every input feature X i .
Full-RE solves this LP problem by selecting the discretization boundaries of X i s (D i s) that
determine the feasible solution(s) of the intermediate rule and satisfy all given constraints. Values
of single input features that can satisfy this LP problem regardless of other input features can be
found easily by substituting X i s of positive weights to node j by their minimum values (a i s) and
X i s of negative weights by their maximum values (b i s). Higher-order combinations are found similarly
by examining different combinations of discretized inputs and finding the edges of the feasible
solution surface of the LP problem. Note that any linear programming tool can also be used
to solve this standard LP problem 3 . As an example, assume that a feasible solution is at x 1 = d 1
and x 2 = d 2 , and that the effect of inputs X 1 and X 2 on node j is positive and negative respectively,
based on the extracted intermediate rule for node j (i.e., c 1 > 0 and c 2 < 0). Then Full-RE extracts the
following rule: "If X 1 ≥ d 1 And X 2 ≤ d 2 --cf--> Consequent j ",
where d 1 and d 2 are determined by the discretization process or the LP tool. Full-RE
3 We also used Mathematica to find out feasible solutions.
computes certainty factors of extracted rules based on Equation 3 4 :
cf = act(j)   if act(j) is a sigmoid,
cf = act(j)   if act(j) is a linear threshold function,
cf = 1        if act(j) is a hard limiting function and Σ i (w ji x i ) ≥ θ j ,
cf = 0        if act(j) is a hard limiting function and Σ i (w ji x i ) < θ j .   (3)
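The LP of step 2 and one reading of Equation 3 can be sketched with SciPy as follows. The bounds, the required activation level, and the use of the node's sigmoid activation as the certainty factor are assumptions of this sketch rather than a definitive implementation of Full-RE.

```python
import numpy as np
from math import exp
from scipy.optimize import linprog

def full_re_lp(w, mu_j, bounds):
    """Solve the step-2 LP for one hidden node j.

    Minimize sum_i w_i * X_i  subject to  sum_i w_i * X_i >= mu_j  and  a_i <= X_i <= b_i.
    w: incoming weights; bounds: list of (a_i, b_i) feature limits.
    Returns the boundary point X* that just activates the node, or None if infeasible.
    """
    w = np.asarray(w, dtype=float)
    res = linprog(c=w, A_ub=-w.reshape(1, -1), b_ub=[-mu_j], bounds=bounds, method="highs")
    return res.x if res.success else None

def certainty_factor(w, theta, x):
    """Sigmoid reading of Equation 3: cf is the node's activation at the point x."""
    return 1.0 / (1.0 + exp(-(float(np.dot(w, x)) - theta)))

# Hypothetical two-input node
point = full_re_lp([1.2, -0.8], mu_j=0.5, bounds=[(0.0, 1.0), (0.0, 1.0)])
if point is not None:
    print(point, certainty_factor([1.2, -0.8], 0.3, point))
```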
Since activation values of hidden nodes are bounded between (0,1), Full-RE uses a simplified
version of the above procedure to extract rules between hidden and output nodes where discretizing
the outputs from hidden nodes is no longer required. However, extracted rules between hidden
and output nodes are represented in the same format as Partial-RE (e.g., If h 1 And h 2 --cf--> O k ).
Full-RE replaces each hidden node (h j ) in the previous rule by the left hand side of the rule(s)
whose right hand side is h j . The general format of final rules extracted by the Full-RE is:
If Simple-Boolean-Expression [AND Simple-Boolean-Expression]* --cf--> Consequent j
where:
Simple-Boolean-Expression ::= Variable Operator Constant,
Operator ::= ≥ | > | ≤ | < .
The * means that the term [AND Simple-Boolean-Expression] can be repeated 0 or n times, the |
stands for an alternation (i.e., Operator can take any of the four relational operators), and
cf represents the certainty factor computed by Equation 3 for each extracted rule. The certainty
factor (cf) of a rule represents the measure of confidence/belief in this rule consequent when all
of its premises are true. Final rules extracted by Full-RE are represented in the same format
as Partial-RE except that each μ i is replaced by one of the discretization boundaries (say d i,l )
selected by Full-RE as described earlier. See Tables 3 and 7 for examples. Note that there is
no restriction on the number of premises in the final rules extracted by the Full-RE. The only
limitation applied is on the number of premises of rules between nodes of adjacent layers (e.g.,
number of premises in intermediate rules between input and hidden nodes or between hidden and
output nodes). Note that when input features are binary, the discretization step is no longer
required.
4 Full-RE only generates a rule if its cf computed by Eqn. 3 is ≥ 0.5.
3.4 Rule Evaluation
To evaluate the performance of rules extracted from trained networks by any of the three presented
techniques (or by any other rule extraction approach), we developed a simple rule evaluation
procedure which attaches three performance measures to each extracted rule. For Partial-RE and
Full-RE approaches, the certainty factor attached to each extracted rule can be used along with
these three performance measures to evaluate the extracted set of rules. The main motivations
for developing this rule evaluation procedure are to:
1. find the best order of the extracted rules that maximizes their performance on the available
data set.
2. test the fidelity of the extracted rule-based system (i.e., its capability to mimic the embedded
knowledge in the trained network). This objective can be achieved by comparing the
performance of the extracted rules with the corresponding trained neural network performance.
3. measure how much knowledge is left unextracted from the internal structure of the trained
network.
4. identify cases where the extracted rule-based system surpasses the trained neural network
and vice versa. This analysis helps in the process of integrating and combining the output
decisions of the two subsystems.
The values of the performance measures depend on the inference engine used to fire the extracted
rules. A simple inference engine is one that examines the rules in a predetermined sequential
order. The decision is thus determined by the first fireable rule in the predetermined order.
Alternatively, an inference engine can check out all possible rules that can fire and provide more
than one output decision at a time. The latter inference engine is considered more powerful than the
former because it provides the system user with all possible output decisions and hence more
choices can be examined.
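A minimal first-match engine of the simpler kind described above might look as follows; representing a rule as a (premise-predicate, consequent) pair, and the example rules themselves, are choices made for this sketch only.

```python
def infer_first_match(ordered_rules, example, default=None):
    """Fire the first rule (in the given order) whose premises hold for the example.

    ordered_rules: list of (premise, consequent) pairs, where premise is a callable
    taking the example and returning True/False. Returns the consequent, or `default`
    when no rule fires (the role of a default rule).
    """
    for premise, consequent in ordered_rules:
        if premise(example):
            return consequent
    return default

# Hypothetical iris-style rules keyed on petal length/width
rules = [(lambda e: e["petal_length"] <= 1.9, "Setosa"),
         (lambda e: e["petal_width"] >= 1.7, "Virginica")]
print(infer_first_match(rules, {"petal_length": 4.5, "petal_width": 1.4}, default="Versicolor"))
```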
In practice, for both types of inference engines, a predetermined order of the extracted rules
plays an important role in determining which rule is going to be fired or the order in which all
fireable rules are considered. Since the embedded knowledge in the internal structure of a trained
neural network does not directly help in resolving this problem (i.e., ordering the extracted rules),
a rule evaluation procedure that can help to order the extracted rules is crucial.
The three performance measures are:
1. The soundness measure: This measures how many times each rule is correctly fired. A
rule is correctly fired if all its premises are satisfied and its consequent matches the target
decision. The soundness measure of an extracted rule represents the ability of this rule to
correctly interpret the output decisions of the trained network. Note that the soundness
measure does not depend on the rule order.
2. The completeness measure: A completeness measure attached to a rule represents how
many distinct times this rule is correctly fired (i.e., how many unique patterns are correctly
identified/classified by this rule and not by any other extracted rule that is inspected by the
inference engine before this rule). Certainly, the resulting number in this case depends on
both the order in which the extracted rules are applied and the mechanism of the inference
engine. For each extracted set of rules with the same consequent, if the sum of the completeness
measures of all rules in this set equals the total number of input patterns having the
corresponding output then this set of extracted rules is 100% complete with respect to that
consequent. An extracted rule with a zero completeness measure but a soundness measure
that is > 0 means that there is a preceding rule (or rules), in the order of rule application, that covers
the same input patterns that this rule covers. Such a rule may be removed.
3. The false-alarm measure: This measures how many times a rule is misfired over the
available data set. When considered for application, a rule is misfired if all its premises are
satisfied but its consequent does not match the target output. The value of this measure
also depends on the order of rule application and the mechanism of the inference engine.
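Assuming the first-match engine and rule representation from the earlier sketch, the three measures can be computed as below; the counting follows the definitions above, with soundness independent of the rule order and the other two measures order-dependent.

```python
def rule_measures(ordered_rules, dataset):
    """Compute soundness, completeness, and false-alarm counts for each rule.

    ordered_rules: list of (premise, consequent) pairs as in the inference-engine sketch;
    dataset: list of (example, target) pairs. A first-match engine is assumed.
    """
    stats = [{"soundness": 0, "completeness": 0, "false_alarm": 0} for _ in ordered_rules]
    for example, target in dataset:
        # Order-dependent measures: only the first fireable rule counts.
        for idx, (premise, consequent) in enumerate(ordered_rules):
            if premise(example):
                if consequent == target:
                    stats[idx]["completeness"] += 1
                else:
                    stats[idx]["false_alarm"] += 1
                break
        # Order-independent soundness: every rule that would fire correctly.
        for idx, (premise, consequent) in enumerate(ordered_rules):
            if premise(example) and consequent == target:
                stats[idx]["soundness"] += 1
    return stats
```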
3.4.1 Rule ordering algorithm
Finding the optimal ordering of extracted rules is a combinatorial problem. So we have developed
the following "greedy" algorithm to order any set of extracted rules, based on the three performance
measures. The rule ordering algorithm first creates a list L that contains all extracted rules.
Assume that the list L is divided into two lists, a head list (L h ) and a tail list (L t ), where L h is
the list of all ordered rules and L t is the list of all remaining (unordered) rules 5 . Initially, L h is
empty and L t includes all the extracted rules. A performance criterion is used to select one rule
5 i.e., the ordering of rules in L t has no effect.
from L t to be moved to the end of L h , and the process continues until L t is empty.
The steps of the rule ordering algorithm are as follows:
1. Initialize L h = { } and L t = {all extracted rules}.
2. WHILE L t ≠ { }, DO
(a) Fire all rules in L h in order.
(b) Compute the completeness and false-alarm measures for each rule in L t using the available
data set.
(c) IF ∃ a rule with zero false-alarm measure
THEN this rule is moved from L t to the end of L h 6 .
ELSE Among all rules in L t select the one with the highest
(Completeness − False-alarm) measure; add this rule to
the end of L h , and delete it from L t .
(d) IF ∃ any rule in L t with a zero completeness measure THEN remove this rule from L t .
This means that the rules in L h cover this rule.
3. END DO.
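A possible rendering of this greedy loop in Python, reusing `rule_measures` from the previous sketch, is shown below; tie-breaking details beyond those stated in the algorithm are arbitrary choices of this sketch.

```python
def order_rules(rules, dataset):
    """Greedy rule-ordering sketch following the algorithm above.

    Repeatedly moves from the unordered tail L_t to the ordered head L_h either a
    zero-false-alarm rule (highest completeness among such rules) or the rule with
    the largest completeness - false_alarm; tail rules whose completeness drops to
    zero are discarded because the head already covers them.
    """
    head, tail = [], list(rules)
    while tail:
        # Measures of each tail rule when placed right after the already-ordered head.
        scored = []
        for rule in tail:
            s = rule_measures(head + [rule], dataset)[-1]
            scored.append((rule, s["completeness"], s["false_alarm"]))
        zero_fa = [t for t in scored if t[2] == 0]
        pick = max(zero_fa, key=lambda t: t[1]) if zero_fa else \
               max(scored, key=lambda t: t[1] - t[2])
        head.append(pick[0])
        tail.remove(pick[0])
        tail = [r for r in tail
                if rule_measures(head + [r], dataset)[-1]["completeness"] > 0]
    return head
```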
In this paper, all rules extracted by our approaches are ordered using the above rule ordering
algorithm. Also, the measures attached to all extracted rules assume that an inference engine
that fires only one rule per input (namely, the first fireable rule) is used.
An important issue that needs to be addressed here is: "Should one discard rules with low
soundness, completeness and/or high false-alarm measure(s)?". An example of such a rule is R 10
in Table 5. For small data sets we might still retain such rules at the bottom of the application
ladder in the hope for better generalization, as they are part of the overall characteristics of the
corresponding trained network. In cases where available data sets are representative, the answer
depends on the application nature. For example, in medical applications where one is interested
6 If ∃ more than one rule with zero false-alarm THEN select the one with the highest completeness measure out
of these rules to be moved from L t to the end of L h .
in high detection rates, rules with low soundness, completeness and/or high false-alarm measures
may still be kept. In other applications, like Automatic Target Recognition (ATR), one may only
retain rules with a low false-alarm rate to reduce the chances of "friendly fire".
The three performance measures along with a rule's certainty factor (if applicable) can be
used to form a composite measure (in an application dependent manner) which specifies the
importance of the extracted rules.
4 Implementation and Performance Evaluation
4.1 Data Sets
We applied all three rule extraction techniques to three problems:
1. an artificial rule-based system which has six rules relating four binary inputs and four binary
outputs.
2. Iris database, a simple classification problem which contains 50 examples each of classes Iris
Setosa, Iris Versicolor, and Iris Virginica [32]. These 150 instances were divided into two
subsets: the first subset, used for training, is of size 89, and the second is of size 61 and used
for testing. Each input pattern has four continuous input features: I 1 = Sepal-length, I 2 =
Sepal-width, I 3 = Petal-length, and I 4 = Petal-width.
3. Breast-Cancer data set which has nine inputs and two output classes [28, 32]. The input
features are: X 1 = Clump Thickness, X 2 = Uniformity of Cell Size, X 3 = Uniformity of Cell
Shape, X 4 = Marginal Adhesion, X 5 = Single Epithelial Cell Size, X 6 = Bare Nuclei, X 7 =
Bland Chromatin, X 8 = Normal Nucleoli, and X 9 = Mitoses. All 9 inputs are continuous
and range from 1 to 10. Each of the 683 available instances is labeled as Benign (444
instances) or Malignant (239 instances). These instances are divided into a training set of size 341 and a
test set of size 342.
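For readers reproducing the iris split, the scikit-learn copy of the data set can be divided into the 89/61 sizes quoted above; the library, the random seed, and the exact partition are assumptions of this sketch, not part of the original experiments (the 683-instance Wisconsin breast cancer set is not bundled with scikit-learn and must be obtained from the UCI repository).

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, train_size=89, test_size=61, random_state=0)
print(X_train.shape, X_test.shape)   # (89, 4) (61, 4)
```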
Other popular data sets that have been used as benchmarks for rule extraction approaches are
the Monk [49], Mushroom [21], and the DNA promoter [54] data sets. The inputs of all three of these data
sets are symbolic/discrete by nature. Since we want to test more general problems that
may include continuous valued variables, Iris and Breast-Cancer were preferred for our initial
experiments.
4.2 Methodology
The following points illustrate some of the important procedures followed to perform the experimental
work presented in this paper:
1. Training procedure: in all experiments, an MLP network is trained using the backpropagation
algorithm with momentum as well as a regularization (weight decay) term, which adds −2λw jk
to the weight update term in the backpropagation equation [12]. Cross validation is used
for the stopping criteria.
2. Network architectures and data reduction: for the iris problem, an MLP with 4 input,
6 hidden, and 3 output nodes is used for the three experiments but trained with different
data sets each time, as described later. For the breast-cancer classification problem, we
reduced the dimensionality of the input space from 9 to 6 inputs. This has been done by
removing the inputs corresponding to the lowest three eigenvalues of the
covariance matrix of the original input space (a sketch of this step is given after this list). The remaining 6 input features are then used
for training and testing an MLP with 9 hidden and 2 output nodes.
3. Network initialization: for the artificial problem, the six initial rules are used by the Node
Links Algorithm [44] to initialize a network of 4 input, 6 hidden, and 4 output nodes. For
both the iris and breast-cancer data sets, there is no prior knowledge so the corresponding
networks are initialized randomly.
4. Input representation: inputs of the artificial problem are naturally binary, so there was no
required mapping. Since the input features of both iris and breast-cancer problems are con-
tinuous, while BIO-RE and Partial-RE extract rules from networks with binary/binarized
and normalized inputs respectively, a binarized and a normalized version of these two data
sets were computed and then used for training and testing the corresponding network architectures
(a) Binarizing input features: for an input feature value x i , the corresponding binarized
value y i is computed by Equation 1.
(b) Normalizing input features: a normalized value z i of an input feature value
x i is computed by Equation 2.
5. Extraction techniques and networks labeling: BIO-RE is used to extract rules from
networks trained with binary/binarized input patterns. For iris and breast-cancer problems,
these networks are labeled Iris-Bin and Cancer-Bin respectively. Partial-RE is used to
extract rules from the networks trained with normalized input patterns (labeled Iris-Norm
and Cancer-Norm). Full-RE uses the original data sets of both problems to train the
corresponding networks. These two networks are labeled Iris-Cont and Cancer-Cont.
6. Default class rule: A comprehensive rule extraction approach is one that extracts rules
to cover all input-output mapping cases. In some cases achieving such a goal is hard and it
may be convenient to cover the input-output mapping cases that cannot be covered by the
extracted rule-base using default rules. Such default rules make the set of extracted rules
complete but they do not provide any interpretation of why the action was done other than
"none of the extracted rules could be fired". If a default rule is used, its output (consequent)
can be chosen to minimize the false alarm rate and to maximize the correct classification
rate. In some applications, these two goals may conflict with each other. In such cases, the
criteria of choosing the default output decision depends on the application nature. Note
that the default rule may only fire when none of the extracted rules can be fired.
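Item 2 of this list mentions dropping the inputs associated with the three smallest eigenvalues of the covariance matrix. The exact mapping from eigenvalues back to original features is not spelled out, so the sketch below implements one plausible reading (drop, for each small eigenvector, the feature with the largest absolute loading); treat it as an assumption, not the authors' procedure.

```python
import numpy as np

def reduce_inputs(X, n_drop=3):
    """Drop n_drop original features linked to the smallest covariance eigenvalues.

    One hedged interpretation: for each of the n_drop smallest eigenvectors,
    remove the not-yet-dropped feature with the largest absolute loading.
    """
    cov = np.cov(X, rowvar=False)                # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    drop = set()
    for k in range(eigvals.shape[0]):
        if len(drop) == n_drop:
            break
        order = np.argsort(-np.abs(eigvecs[:, k]))   # strongest loadings first
        for idx in order:
            if idx not in drop:
                drop.add(idx)
                break
    keep = [i for i in range(X.shape[1]) if i not in drop]
    return X[:, keep], keep
```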
4.3 Experimental Results
4.3.1 An Artificial Binary Problem
This experiment is designed to test the soundness and completeness of the three rule extraction
techniques. The original rules are as follows:
Rule #1: If A And B --0.8--> O 1
Rule #2: If B And C And D --0.7--> O 2
Rule #3: If Not C --0.6--> O 3
Rule #4: If Not A And D --0.7--> O 4
Rule #5: If B And D --0.7--> O 1
Rule #6: If D --0.8--> O 1
where A, B, C, and D are binary inputs, and O i s are binary consequents. After
using the Node Links Algorithm to map these six rules into an initial network with 4 input, 6
hidden, and 4 output nodes, the following two experiments are performed.
Table 1: Rules extracted from network "Iris-Bin" by the BIO-RE technique.
Rule No.  Rule Body                                   Iris Class   Soundness  Completeness  False-Alarm
1         and I 4 - 1.2                               Versicolor   10/50      10/50         0/150
2         and I 4 - 1.2                               Setosa       50/50      50/50         5/150
3         If I 1 - 5.8 and I 3 - 3.7 and I 4 - 1.2    Virginica    47/50      47/50         24/150
4         If I 1 - 5.8 and I 2 - 3.0 and I 4 - 1.2    Versicolor   15/50      11/50         3/150
Performance                                                                   118/150       32/150
1. The first experiment: The objective of this experiment is to check whether the three
approaches are able to extract the original rules from the mapped network. Therefore, the
network was not trained before the extraction procedures were applied.
The results of applying the three rule extraction techniques to the generated (but not
trained) network are as follows:
• BIO-RE extracts the same set of binary rules but without certainty factors.
• Partial-RE (with p = 2, i.e., at most two conditions per rule) extracts all 5 original rules
with two conditions or less. On increasing p to 3, rule #2 was also extracted. The
certainty factors attached to each output decision were approximately the same as in the
original rules.
• Full-RE extracts the same six original rules from the untrained network.
2. The second experiment: Based on the original rules, all 2^4 binary patterns were generated.
After training the previous network, we applied the three approaches to the final network
architecture (i.e., the adapted one):
• Both BIO-RE and Full-RE extract the same six original rules.
• Partial-RE extracts all the six rules plus an extra one: Rule #7: If B And D --0.74--> O 2 .
This rule was extracted when
Table 2: Rules extracted from network "Iris-Norm" by the Partial-RE technique.
Rule No.  Rule Body                                   Iris Class   Certainty Factor  Soundness  Completeness  False-Alarm
          and I 4 - 1.2                               Virginica    0.99              47/50      47/50         27/150
3         If I 1 - 5.8 and I 2 - 3.0 and I 3 - 3.7    Versicolor   0.74              18/50      16/50         3/150
4         If I 1 - 5.8 and I 2 - 3.0 and I 4 - 1.2    Versicolor   0.72              15/50      1/50          0/150
Performance                                                                          118/150                  32/150
Table 3: Rules extracted from network "Iris-Cont" by the Full-RE technique.
Rule No.  Rule Body       Iris Class   Certainty Factor (cf)  Soundness  Completeness  False-Alarm
3         If I 3 - 4.8    Virginica    0.98                   47/50      47/50         1/150
Performance                                                              146/150       4/150
4.3.2 Iris Classification
Tables 1, 2, and 3 present the ordered rules extracted by the BIO-RE, Partial-RE, and Full-RE
techniques respectively from their corresponding networks trained on the Iris data set. They also
present the corresponding measures for each extracted rule as generated by the rule evaluation
procedure. Table 4 provides a summary of the performance of each rule extraction technique and
compares it with the performance of the corresponding trained network. It shows that binarizing
or scaling input patterns of the iris problem degrades the performance of the trained networks
("Iris-Bin" and "Iris-Norm") as well as the corresponding rules extracted from these two networks.
Also, it shows the remarkable performance of the rules extracted from network "Iris-Cont" by
Full-RE.
Note that:
(i) The numeric values compared with input features I i s in the rules extracted by both BIO-RE
and Partial-RE represent the means (μ i s) of these input features (see the rule bodies in Tables 1 and
2). This coarse thresholding is largely responsible for the (relatively) poor performance of
Table 4: Performance comparison between the sets of extracted rules and their corresponding
trained networks for the iris problem.
                               Neural Network          Extracted Rules
                               ratio     % match       ratio     % match
Binarized Network   Training
                    Testing    43/61     70.49         51/61     83.61
Normalized Network  Training
                    Testing    56/61     91.80         49/61     80.33
Continuous Network  Training
                    Testing    59/61     96.72         60/61     98.36
the two networks and subsequently of the extracted rules.
(ii) In Table 3, a numeric value that is compared to an input feature I i in the rule body represents
one of the critical discretization boundaries of that feature which was selected by Full-RE.
(iii) For rules examined later (e.g., rule 4 in Table 2), completeness may be much less than
soundness, because some instances where these rules would fire correctly have already been covered
by other preceding rules.
(iv) Full-RE leads to three simple rules that classify the iris data set very well.
4.3.3 Breast-Cancer Classification
For the breast-cancer classification problem, Tables 5, 6, and 7 present three sets of ordered rules
extracted by the three rule extraction techniques, along with the corresponding performance measures.
Table 8 provides an overall comparison between the extracted rules and their corresponding
trained networks. It shows that the three techniques were successfully used with approximately
the same performance regardless of the nature of the training and testing data sets used for each
network. Also, it shows that binarizing and scaling breast cancer data set did not degrade the
performance of the trained networks as well as of the rules extracted by BIO-RE and Partial-RE
from these networks ("Cancer-Bin" and "Cancer-Norm" respectively). Since the original input
features of the breast cancer problem have the same range (1-10), by binarizing and/or scaling
them we did not change their nature much.
Table 5: Rules extracted from network "Cancer-Bin" by the BIO-RE technique.
Rule No.  Rule Body  B-Cancer Class  Soundness  Completeness  False-Alarm
Table 6: Rules extracted from network "Cancer-Norm" by the Partial-RE technique.
Rule No.  Rule Body  B-Cancer Class  Certainty Factor (cf)  Soundness  Completeness  False-Alarm
          and
9         If
Total For Benign Rules       427/444    7/683
Total For Malignant Rules    232/239    17/683
Performance                  659/683    24/683
Table 7: Rules extracted from network "Cancer-Cont" by the Full-RE technique.
Rule No.  Rule Body  B-Cancer Class  Certainty Factor  Soundness  Completeness  False-Alarm
Table 8: Performance comparison between the sets of extracted rules and their corresponding
trained networks for the breast-cancer problem.
                               Neural Network          Extracted Rules
                               ratio      % match      ratio      % match
Binarized Network   Training
                    Testing    317/342    92.69        329/342    96.20
Normalized Network  Training
                    Testing    325/342    95.03        328/342    95.91
Continuous Network  Training
                    Testing    331/342    96.78        327/342    95.61
4.4 Discussion
The implementation results of Section 4.3 indicate that:
1. All rules extracted by the three techniques are sound.
2. Partial-RE is sound but not complete. Its completeness depends on the chosen degree of
comprehensibility (p).
3. Rules extracted by Full-RE are much more comprehensible than those extracted by
BIO-RE and Partial-RE. This is likely due to two factors:
• Full-RE is used to extract rules from neural networks trained with the original input features,
without any binarization or normalization.
• Rules extracted by Full-RE compare input features with values of discretization boundaries
of these features, while the other two techniques compare them with the means (μ i ).
4. Binarizing or normalizing continuous features may degrade the accuracy of the extracted
rules as well as the generalization capability of the corresponding trained neural network.
See the first six rows of Table 4.
5. Full-RE was tested several times on different networks initialized randomly each time and
trained with different sets of training patterns. Each time, the set of extracted rules was
similar except for the values of the certainty factors. This indicates that Full-RE is more
accurate and can extract rules based on more combinations of input features, not just the
most effective features, see Table 3 and Table 7.
6. Although BIO-RE and Partial-RE were used to extract rules from networks trained with
binarized and normalized input features, they were still able to extract "certain" rules that
may be adequate in some application examples. See Table 5 and Table 6.
7. Although some of the extracted rules have a low firing rate on the available data set, they
were extracted to represent the generalization capability of the trained network on unseen
data. Also, they were extracted to cover all training and testing data sets and hence increase
the completeness of the extracted set of rules. Examples of such rules are: R 1 and R 4 of
Table 1, R 4 of Table 2, and R 2 –R 5 of Table 5.
5 Performance Evaluation
Since both iris and breast cancer problems have continuous input features, Full-RE is the best
technique to be used to extract rules from both "Iris-Cont" and "Cancer-Cont" networks which
were trained with the original continuous input features. There is no need to prune the trained
network since Full-RE is capable of extracting rules from MLPs of any size. In this section, we
compare the performance of the extracted rules from the iris and the breast-cancer databases
with the rules extracted by both the NeuroRule and C4.5rules algorithms [43]. The main reason for
choosing NeuroRule and C4.5rules is that they have previously been used to extract rules for the
same two databases used by Full-RE [42]. Moreover, they both extract comprehensive rules with
relatively high correct classification rate, as reported by the authors of NeuroRule [42]. For the iris
problem, we also compare the set of rules extracted by Full-RE with the corresponding set of
rules extracted by the KT algorithm [11].
Before analyzing the extracted rules, we summarize the computational complexity of Neu-
roRule and C4.5rules:
• NeuroRule: as a starting point of the NeuroRule algorithm, 100 fully connected MLPs
are generated. Before training any of these 100 networks, input features have to be binary
discretized (i.e., divided into n intervals, each with a lower and an upper boundary; a thermometer
coding is then used to convert the discretized values into binary ones; a small sketch of this
coding is given after this comparison). Although
this discretization step helps simplify the last step of the rule extraction process, it has three
major drawbacks:
– It increases the number of input nodes and hence the complexity of the required network
architecture. The number of input nodes of a generated network after the binary
discretization step is equal to the resulting number of binary discretized intervals of the
original input features. The network architecture generated by NeuroRule has 39 input
nodes for the iris problem and 91 input nodes for the breast cancer problem. Note
that the corresponding networks used by Full-RE have 4 and 6 input nodes respectively
("Iris-Cont" and "Cancer-Cont"). This increase in the complexity of network
architectures may degrade the performance of the trained network.
– It increases the training time and complexity of the training algorithm.
– It increases the complexity of the rule extraction procedure if the network is not pruned.
Due to the complexity of the generated network architectures, NeuroRule employs a pruning
procedure after the training phase. The pruning process continues until network performance
drops to 95% of original performance. This process is applied to the 100 MLPs. The
rule extraction procedure starts by choosing the best one out of the 100 pruned networks
(the one with the highest performance). NeuroRule extracts rules by clustering the remaining
hidden nodes activation values and then checking which input combination can make
each hidden (and later output) node active.
The power of NeuroRule lies in its pruning and clustering techniques. In its pruning phase,
NeuroRule removes input nodes. For example, the best pruned architecture for the iris
problem is a network of 4 input, 2 hidden, and 3 output nodes. For the breast-cancer problem,
the best pruned network has 6 input, 1 hidden and 2 output nodes. Since the resulting
network architectures from the pruning step are very small, the rule extraction process is
easy and could be done visually for iris and breast cancer networks. However, the pruning
and clustering processes lead to substantial overheads.
• C4.5rules: C4.5rules was used by the authors of NeuroRule to extract rules from the iris
and breast-cancer databases for comparison reasons. Like ID3 [36], C4.5rules [20] generates
decision tree rules based on the available input samples. Therefore, the complexity is
moderate, but the performance of the rules generated by C4.5rules is highly affected by the
noise level in the available data samples [11].
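The thermometer coding mentioned in the NeuroRule discussion can be illustrated with a few lines; the boundary values below are hypothetical, and the point is only that each discretized interval consumes one extra input bit.

```python
def thermometer_encode(value, boundaries):
    """Thermometer-code one feature value against its discretization boundaries.

    The bit for boundary l is 1 when the value exceeds that boundary, so larger
    values switch on longer prefixes of 1s; each interval therefore adds one
    input node to the network.
    """
    return [int(value > b) for b in boundaries]

# Hypothetical petal-length boundaries
print(thermometer_encode(4.4, [1.9, 3.0, 4.9]))   # [1, 1, 0]
```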
5.1 Comparison using iris data set
The rules extracted by the Full-RE technique for the iris problem were given in Table 3. The
rules extracted by NeuroRule for the same problem are:
Rule 1: If I 3 ≤ 1.9, then Iris Setosa
Rule 2: If I 3 ≤ 4.9 and I 4 ≤ 1.6, then Iris Versicolor
Rule 3: Default Rule (Iris Virginica)
The corresponding rules extracted by C4.5rules are:
Rule 1: If I 3 ≤ 1.9, then Iris Setosa
Rule 2: If I 3 ≥ 1.9 and I 4 ≤ 1.6, then Iris Versicolor
Rule 3: If I 4 ≥ 1.6, then Iris Virginica
Rule 4: Default Rule (Iris Setosa)
The corresponding rules extracted by KT approach are:
Rule 1: If I 3 ≤ 2.7, then Iris Setosa
Rule 2: If I 3 ≤ 5.0 and I 3 > 2.7 and I 4 ≤ 1.6 and I 4 > 0.7, then Iris Versicolor
Rule 3: If I 3 > 5.0, then Iris Virginica
Rule 4: If I 4 > 1.6, then Iris Virginica
Rule 5: If I 2 > 3.1 and I 3 > 2.7 and I 3 ≤ 5.0, then Iris Versicolor
From the methodologies and results, we note that:
1. Completeness: Both Full-RE and KT extracted complete sets of rules that cover all cases,
and so no default rule was required. However, a default rule is essential for both NeuroRule
(due to its pruning step) and for C4.5.
2. Comprehensibility:
(a) Number of rules: Both Full-RE and NeuroRule extract 3 rules while KT extracts 5
rules and C4.5 extracts 4 rules.
(b) Number of premises per rule: Except for KT, the maximum number of conditions per
rule for all other techniques is 2. KT extracted rules with a maximum of 4 conditions
per rule.
3. Performance: Since iris is a simple classification problem, all techniques performed well.
In fact, all of them were able to show that the Setosa class is linearly separable from the
Table 9: Correct classification rate (%) of the rule sets extracted by different techniques.
                                       Full-RE   NeuroRule   C4.5rules   KT
Iris           with default rule       97.33     98.00       96.00       97.33
               without default rule    97.33     64.67       96.00       97.00
Breast Cancer  with default rule       96.19     97.21       97.21       N/A
               without default rule    96.19     63.10       94.72       N/A
other two classes. Moreover, the rules extracted by all of them showed that Petal-length is the
most dominant input feature (see rows 1 and 2 in Table 9).
4. Certainty factors: Rules extracted by the Full-RE provide a certainty factor attached
with each extracted rule, unlike the other approaches.
5.2 Comparison using breast cancer data set
For the breast cancer database, the rules extracted by Full-RE from a simple MLP architecture
(6 input, 9 hidden, and 2 output nodes) are presented in Table 7. The rules extracted by
NeuroRule from the best among the pruned 100 MLP network architectures (6 inputs, 1 hidden,
and 2 output nodes) are [43, 41]:
Rule 1: If
Rule 2: If
Rule 3: If
Rule 4: Default Rule (Malignant)
The corresponding rules extracted by C4.5rules are [43]:
Rule 1: If
Rule 2: If
Rule 3: If X 2 ≥ 5.0, then Malignant
Rule 4: If X 6 ≥ 9.0, then Malignant
Rule 5: If X 1 ≥ 7.0, then Malignant
Rule
Rule 7: Default Rule (Benign)
Comparing these three sets of extracted rules, we observed:
1. Completeness: NeuroRule did not extract any rule for class Malignant. Both NeuroRule
and DT (C4.5) have a default rule, while the rules extracted by Full-RE have a 100% completeness
measure and hence there was no need for a default rule. Default rules are undesirable because
they cannot provide a symbolic interpretation of the decision other than "because none of
the above occurred".
2. Comprehensibility:
(a) Number of rules: The number of rules extracted by Full-RE is 5 and by DT (C4.5) is
7. NeuroRule extracted only 4 rules, as it used a default rule to cover all cases of class
Malignant and was applied to a highly pruned network with only one hidden node.
(b) Number of premises per rule: For Full-RE, the maximum number of conditions
per extracted rule is 2. All rules extracted by NeuroRule have 4 conditions, while those
extracted by DT have a maximum of 4 conditions per rule. Thus the rules extracted
by Full-RE are more comprehensible than those extracted by the other two techniques.
3. Performance: The performance of the rules extracted by all three techniques is very
high, and they all achieve very low misclassification rates (see row 3 of Table 9). When default
rules are removed, the performance of NeuroRule drops dramatically (see row 4 of Table 9).
The authors of NeuroRule reported that by choosing different trained network architectures
they extracted different rules. In one case only two rules were extracted, one of which is a
default rule. In another experiment, NeuroRule extracted only 3 rules (one of which is also
a default rule). In both experiments, the achieved completeness measure is approximately
95%. However, we did not observe any effect on the rules extracted by Full-RE due to
changing the initialization of the "Cancer-Cont" network or when we used different input
samples for training and testing. This indicates that Full-RE rules are quite stable as long
as the network is trained reasonably well.
4. Certainty factors: Rules extracted by the Full-RE provide a certainty factor attached
with each extracted rule while NeuroRule and C4.5rules do not. Note that KT was not used
to extract rules from the breast cancer problem.
Table 9 compares the classification rates obtained using the rules extracted by the four
techniques (Full-RE, NeuroRule, C4.5rules, and KT) for the iris and the breast-cancer databases,
while Table 10 presents a qualitative comparison between our three techniques and some other
notable rule extraction techniques from trained neural networks (NeuroRule, KT, Subset, MofN).
Note that C4.5rules was not included in the comparative study presented in Table 10 because it
Table 10: A qualitative comparison of different rule extraction techniques.
                          BIO-RE    Partial-RE   Full-RE   NeuroRule   KT or Subset   MofN
Provides CF               No
May need a default rule
Works for
  1. Binary inputs        Yes       Yes          Yes       Yes         Yes            Yes
  2. Normalized inputs    No
  3. Continuous inputs    No
Complexity                Very Low  Low          Med       Very High   Med            High
Additional overheads
extracts rules, based on input samples, from decision trees and not from trained networks like the
other approaches.
6 Conclusions
In this paper we introduced three new rule extraction techniques. The suitability of each approach
depends on the network type and architecture, complexity, the application nature, inputs, and
the required transparency level. All three methods are able to extract meaningful rules for the
well-known Iris database and the Wisconsin breast cancer diagnosis database, where no pre-existing
rules are available. The extracted rules compare favorably with other reported implementation
results. The proposed techniques are less complex, and the rules extracted by them are efficient,
comprehensible and powerful.
The ordering of extracted rules has to be determined while designing the inference engine.
The network does not provide any (direct) information on this issue, and KBNN researchers have
not reported on this aspect. We developed a simple greedy rule evaluation procedure and an
algorithm that can order rules extracted by any rule extraction algorithm, with a goal of maximizing
performance and minimizing error rates of the extracted rules over available data. We
also presented a qualitative comparison of some key issues involved in the process of extracting
rules from trained networks by different approaches.
It is important to mention that obtaining all possible combinations of rules is NP-hard and a
feasible alternative is often to extract key rules that cover most of the concepts of the application
domain. More progress is needed in determining when an adequate set of rules has been extracted.
Another important issue that needs to be investigated is how the outputs of both the rule extraction
and the trained ANN modules can be integrated to provide more robust decisions, and
how the extracted rules can be used for knowledge refinement and truth maintenance of domain
knowledge.
References
A survey and critique of techniques for extracting rules from trained artificial neural networks.
Rule extraction from a constrained error backpropagation MLP.
Inserting and extracting knowledge from constrained error back-propagation networks
Logic Minimization Algorithm for VLSI Synthesis.
On changing continuous attributes into ordered discrete attributes.
A fuzzy neural hybrid system.
Using sampling and queries to extract rules from trained neural networks.
Learning relations from noisy examples: An empirical comparison of linus and foil.
Rule learning by searching on adapted nets.
Neural Networks in Computer Intelligence.
Structural adaptation and generalization in supervised feed-forward networks
Learning a class of large finite state machines with a recurrent neural network.
Learning and extracting finite state automata with second-order recurrent neural networks
Hybrid neural network and rule-based pattern recognition system capable of self-modification
On fuzzy modeling using fuzzy neural networks with back-propagation algorithm
Rule extraction from neural networks.
Adaptive mixtures of local experts.
5 Programs for Machine Learning.
Concept acquisition through representational adjustment.
A map method for synthesis of combinational logic circuits.
Discretization of numeric attributes.
Chi2: Feature selection and discretization of numeric attributes.
Discretization of ordinal attributes and feature selection
Combining connectionist and symbolic learning to refine certainty factor rule bases.
Cancer diagnosis via linear programming.
Digital Logic and Computer Design.
The connectionist scientist game: Rule extraction and refinement in a neural network.
Introduction to Probability and Statistics
UCI repository of machine learning database.
Extraction of rules from discrete-time recurrent neural networks
Heuristically expanding knowledge-based neural network
Changing the rules: A comprehensive approach to theory refinement.
Induction of decision trees.
Simplifying decision trees.
Espresso-MV: Algorithms for multiple-Valued logic minimization
Medical diagnostic expert system based on DPD model.
Automated knowledge acquisition of rules with continuously valued attributes.
Extracting rules from pruned neural networks for breast cancer diagnosis.
Understanding neural networks via rule extraction.
Symbolic representation of neural networks.
Controlling water reservoirs using a hybrid intelligent architecture.
A hybrid intelligent architecture and its application to water reservoir control.
A hybrid intelligent architecture for refining input characterization and domain knowledge.
A generation methods for fuzzy rules using neural networks with planar lattice architecture.
DEDEC: Decision Detection by Rule Extraction from Neural Networks.
DEDEC: A methodology for extracting rules from trained artificial neural networks.
The extraction of refined rules from knowledge-based neural networks
Refinement of approximate domain theories by knowledge-based artificial neural network
Induction of finite-state languages using second-order recurrent networks
A System for Doing Mathematics by Computer.
--TR
--CTR
Marco Muselli , Diego Liberati, Binary Rule Generation via Hamming Clustering, IEEE Transactions on Knowledge and Data Engineering, v.14 n.6, p.1258-1268, November 2002
W. Wettayaprasit_affanb , C. Lursinsap , C. H. Chu, Extracting linguistic quantitative rules from supervised neural networks, International Journal of Knowledge-based and Intelligent Engineering Systems, v.8 n.3, p.161-170, August 2004
Sankar K. Pal , Sushmita Mitra , Pabitra Mitra, Rough-Fuzzy MLP: Modular Evolution, Rule Generation, and Evaluation, IEEE Transactions on Knowledge and Data Engineering, v.15 n.1, p.14-25, January
Zhi-Hua Zhou, Rule extraction: using neural networks or for neural networks?, Journal of Computer Science and Technology, v.19 n.2, p.249-253, March 2004
J. L. Castro , L. D. Flores-Hidalgo , C. J. Mantas , J. M. Puche, Extraction of fuzzy rules from support vector machines, Fuzzy Sets and Systems, v.158 n.18, p.2057-2077, September, 2007
Zan Huang , Hsinchun Chen , Chia-Jung Hsu , Wun-Hwa Chen , Soushan Wu, Credit rating analysis with support vector machines and neural networks: a market comparative study, Decision Support Systems, v.37 n.4, p.543-558, September 2004
Alex A. Freitas, Understanding the Crucial Role of AttributeInteraction in Data Mining, Artificial Intelligence Review, v.16 n.3, p.177-199, November, 2001 | rule evaluation;neural networks;hybrid systems;knowledge refinement;rule extraction |
628007 | Volume Leases for Consistency in Large-Scale Systems. | Abstract: This article introduces volume leases as a mechanism for providing server-driven cache consistency for large-scale, geographically distributed networks. Volume leases retain the good performance, fault tolerance, and server scalability of the semantically weaker client-driven protocols that are now used on the web. Volume leases are a variation of object leases, which were originally designed for distributed file systems. However, whereas traditional object leases amortize overheads over long lease periods, volume leases exploit spatial locality to amortize overheads across multiple objects in a volume. This approach allows systems to maintain good write performance even in the presence of failures. Using trace-driven simulation, we compare three volume lease algorithms against four existing cache consistency algorithms and show that our new algorithms provide strong consistency while maintaining scalability and fault-tolerance. For a trace-based workload of web accesses, we find that volumes can reduce message traffic at servers by 40 percent compared to a standard lease algorithm, and that volumes can considerably reduce the peak load at servers when popular objects are modified. | 1 Introduction
To fulfill the promise of an environment in which essentially all human knowledge is available from
a set of servers distributed across wide area networks, the data infrastructure must evolve from
protocols optimized for one application (browsers) to protocols that support a range of more
demanding applications. In the future, we expect data-intensive applications to extend beyond
human-driven browsers to include program-driven agents, robots, distributed databases, and data
miners that will place new demands on the data-distribution infrastructure. These new applications
will require aggressive caching for acceptable performance, and they will not be as tolerant of
cache inconsistencies as a browser. Unfortunately, current cache consistency protocols do not
scale to large systems such as the web because of poor performance, weak consistency guarantees,
or poor fault tolerance.
Cache consistency can be achieved through either client-driven protocols, in which clients send
messages to servers to determine if cached objects are current, or server-driven protocols, in which
servers notify clients when data change. In either case, the challenge is to guarantee that a client
read always returns the result of the latest completed write. Protocols that achieve this are said to
be strongly consistent.
Client-driven protocols force caches to make a difficult choice. They must either poll the server
on each access to cached data or risk supplying incorrect data. The first option, polling on each
read, increases both the load on the server and the latency of each cache request; both effects can be
significant in large scale systems because servers support many clients and polling latencies can be
high. The other option, periodic polling, relaxes consistency semantics and allows caches to supply
incorrect data. For example, web browsers account for weak consistency through a human-based
error-correction protocol in which users manually press a "reload" button when they detect stale
data. Weak consistency semantics may be merely annoying to a human, but they can cause parallel
and distributed programs to compute incorrect results, and they complicate the use of aggressive
caching or replication hierarchies because replication is not transparent to the application.
Server-driven protocols introduce three challenges of their own. First, strong consistency is
difficult to maintain in the face of network or process failures because before modifying an object,
a server using these protocols must contact all clients that cache that object. If there are many
cached copies, it is likely that at least one client will be unreachable, in which case the server
cannot complete the write without violating its consistency guarantees. Second, a server may
require a significant amount of memory to track which clients cache which objects. Third, sending
cache invalidation messages may entail large bursts of server activity when popular objects are
modified.
In distributed file systems, the problems of server driven protocols were addressed by using
leases [8], which specify a length of time during which servers notify clients of modifications to
cached data. After a lease's timeout expires, a client must renew the lease by sending a message
to the server before the client may access the cached object. Leases maintain strong consistency
while allowing servers to make progress even if failures occur. If a server cannot contact a client,
the server delays writes until the unreachable client's lease expires, at which time it becomes the
client's responsibility to contact the server. Furthermore, leases free servers from notifying idle
clients before modifying an object; this reduces both the size of the server state and the load
sustained by the server when reads and writes are bursty.
Although leases provide significant benefits for file system workloads, they may be less effective
in a wide area network (WAN). To amortize the cost of renewing a lease across multiple
reads, a lease should be long enough that in the common case the cache can be accessed without a
renewal request. Unfortunately, at least for browser workloads, repeated accesses to an object are
often spread over minutes or more. When lease lengths are shorter than the time between reads,
leases reduce to client polling. On the other hand, longer lease lengths reduce the three original
advantages of leases.
In this article, we show how volume leases [22] restore the benefits of leases for WAN work-
loads. Volume leases combine short leases on groups of files (volumes) with long leases on individual
files. Under the volume leases algorithm, a client may access a cached object if it holds valid
leases on both the object and the object's volume. This combination provides the fault-tolerance
of short leases because when clients become unreachable, a server may modify an object once the
short volume lease expires. At the same time, the cost of maintaining the leases is modest because
volume leases amortize the cost of lease renewal over a large number of objects.
We examine three variations of volume leases: volume leases, volume leases with delayed in-
validations, and best effort volume leases. In the delayed invalidations algorithm, servers defer
sending object invalidation messages to clients whose volume leases have expired. This optimization
reduces peaks in server load, and it can reduce overall load by batching invalidation messages
and eliminating messages entirely in cases when clients never renew a volume lease. The third
variation is motivated by the observation that some workloads do not require strict consistency but
do prefer that clients observe fresh data. For example, when an important event occurs, a news
service would like to invalidate stale cached copies of their front page quickly, but they may want
to begin distributing the new front page immediately rather than wait until they have notified all
customers that the old page is invalid. The best effort variation of volume leases uses relaxed
consistency to satisfy such applications. We find that this approach can improve performance by
allowing servers to utilize longer volume lease timeouts.
This article evaluates the performance of volume leases using trace-based simulation. We
compare the volume algorithms with three traditional consistency algorithms: client polling, server
invalidations, and server invalidations with leases. Our simulations demonstrate the benefits of
volume leases. For example, volume leases with delayed invalidations can ensure that clients
never see stale data and that servers never wait more than 100 seconds to perform a write, all while
using about the same number of messages as a standard invalidation protocol that can stall server
writes indefinitely. Compared to a standard object lease algorithm that also bounds server write
delays at 100 seconds, this volume algorithm reduces message traffic by 40%.
The rest of this article is organized as follows. Section 2 describes traditional algorithms for
providing consistency to cached data, and Section 3 describes our new volume lease algorithms.
Section 4 discusses our experimental methodology, and Section 5 presents our experimental results.
After discussing related work in Section 6, Section 7 summarizes our conclusions.
2 Traditional consistency algorithms
This section reviews four traditional cache consistency algorithms. The first two-Poll Each Read
and Poll-rely on client polling. The remaining algorithms-Callback and Lease-are based on
server invalidation. In describing each algorithm we refer to Table 1, which summarizes key characteristics
of each algorithm discussed in this paper, including our three new algorithms. We also
refer to Figure 1, which defines several parameters of the algorithms.
In Table 1, we summarize the cost of maintaining consistency for an object o using each of the algorithms. Columns correspond to key figures of merit: the expected stale time indicates how long a client expects to read stale data after o is modified, assuming random reads, random updates, and failures. The worst stale time indicates how long o can be cached and stale assuming that (1) o was loaded immediately before it was modified and (2) a network failure prevented the server from
contacting the client caching o. The read cost shows the expected fraction of cache reads requiring
a message to the server. The write cost indicates how many messages the server expects to send to
notify clients of a write. The acknowledgment wait delay indicates how long the server will wait to
write if it cannot invalidate a cache. The server state column indicates how many clients the server
Table 1: Summary of algorithm performance. For each algorithm (rows include Poll Each Read, Volume Leases(t, t_v), Vol. Delay Inval(t, t_v, d), and Delay Inval(t, t_v, d)), the table reports the expected stale time (seconds), the worst stale time (seconds), the read cost (messages), the write cost (messages), the acknowledgment wait delay (seconds), and the server state (bytes), expressed in terms of the parameters defined in Figure 1.
Variable  Meaning
t         timeout for an object
t_v       timeout for a volume
d         how long servers store state for inactive clients
R         frequency with which object o is read
V         number of active objects per volume
C_tot     number of clients with a copy of object o
C_o       number of clients with a lease on object o
C_v       number of clients with a lease on volume v
C_d       number of clients whose volume leases expired less than d seconds ago
state(x)  bytes of server state needed to support x clients
Figure 1: Definition of parameters in Table 1.
expects to track for each object.
2.1 Poll each read
Poll Each Read is the simplest consistency algorithm. Before accessing a cached object, a client
asks the object's server if the object is valid. If so, the server responds affirmatively; if not, the
server sends the current version.
This algorithm is equivalent to always having clients read data from the server with the optimization
that unchanged data is not resent. Thus, clients never see stale data, and writes by the
server always proceed immediately. If a network failure occurs, clients unable to contact a server
have no guarantees about the validity of cached objects. To cope with network failures, clients take
application-dependent actions, such as signaling an error or returning the cached data along with a
warning that it may be stale.
The primary disadvantage of this algorithm is poor read performance, as all reads are delayed
by a roundtrip message between the client and the server. In addition, these messages may impose
significant load on the servers [11].
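A minimal client-side sketch of this behavior follows, assuming a server stub check_and_fetch(oid, version) that returns a new (version, data) pair when the client's copy is missing or stale and None otherwise; this interface is an assumption for illustration, not part of any particular system.

    class PollEachReadCache:
        """Validate every read with the server; cached data is never trusted blindly."""
        def __init__(self, server):
            self.server = server       # assumed stub: check_and_fetch(oid, version)
            self.store = {}            # oid -> (version, data)

        def read(self, oid):
            cached = self.store.get(oid)
            version = cached[0] if cached else None
            reply = self.server.check_and_fetch(oid, version)   # one round trip per read
            if reply is not None:      # missing or stale: server sent (version, data)
                self.store[oid] = reply
            return self.store[oid][1]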
2.2 Poll
Poll is based on Poll Each Read, but it assumes that cached objects remain valid for at least a
timeout period of t seconds after a client validates the data. Hence, when t = 0, Poll is equivalent to Poll Each Read. Choosing the appropriate value of t presents a trade-off: On the one hand,
long timeouts improve performance by reducing the number of reads that wait for validation. In
particular, if a client accesses data at a rate of R reads per second and the timeout is long enough to
span several reads, then only 1/(R·t) of the client's reads will require network messages (see Table 1).
On the other hand, long timeouts increase the likelihood that caches will supply stale data to
applications. Gwertzman and Seltzer [10] show that for web browser workloads, even for a timeout
of ten days, server load is significantly higher than under the Callback algorithm described below.
The same study finds that an adaptive timeout scheme works better than static timeouts, but that
when the algorithm's parameters are set to make the adaptive timeout algorithm impose the same
server load as Callback, about 4% of client reads receive stale data.
If servers can predict with certainty when objects will be modified, then Poll is ideal. In this
case, servers can tell clients to use cached copies of objects until the time of the next modification.
For this study, we do not assume that servers have such information about the future.
2.3 Callback
In a Callback algorithm [11, 17], servers keep track of which clients are caching which objects.
Before modifying an object, a server notifies the clients with copies of the object and does not
proceed with the modification until it has received an acknowledgment from each client. As shown
in Table 1, Callback's read cost is low because a client is guaranteed that a cached object is valid
until told otherwise. However, the write cost is high because when an object is modified the server
invalidates the cached objects, which may require up to C tot messages. Furthermore, if a client
has crashed or if a network partition separates a server from a client, then a write may be delayed
indefinitely.
2.4 Lease
To address the limitations of Callback, Gray and Cheriton proposed Lease [8]. To read an object, a
client first acquires a lease for it with an associated timeout t. The client may then read the cached
copy until the lease expires. When an object is modified, the object's server invalidates the cached
objects of all clients whose leases have not expired. To read the object after the lease expires, a
client first contacts the server to renew the lease.
Lease allows servers to make progress while maintaining strong consistency despite failures.
If a client or network failure prevents a server from invalidating a client's cache, the server need
only wait until the lease expires before performing the write. By contrast, Callback may force the
write to wait indefinitely.
Leases also improve scalability of writes. Rather than contacting all clients that have ever read
an object, a server need only contact recently active clients that hold leases on that object. Leases
can thus reduce the amount of state that the server maintains to track clients, as well as the cost
of sending invalidation messages [14]. Servers may also choose to invalidate caches by simply
waiting for all outstanding leases to expire rather than by sending messages to a large number of
clients; we do not explore this option in this study. Lease presents a tradeoff similar to the one
offered by Poll. Long leases reduce the cost of reads by amortizing each lease renewal over R·t
reads. On the other hand, short leases reduce the delay on writes when failures occur.
As with polling, a client that is unable to contact a server to renew a lease knows that it holds
potentially stale data. The client may then take application-specific actions, such as signaling an
error or returning the suspect data along with a warning. However, unlike Poll, Lease never lets
clients believe that stale objects are valid.
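A server-side sketch of per-object leases in the spirit of this description is given below; send_invalidate and wait_for_ack stand in for the real messaging layer and are assumptions of this sketch rather than Gray and Cheriton's implementation.

    import time

    class LeaseServer:
        def __init__(self, lease_s):
            self.lease_s = lease_s
            self.leases = {}           # oid -> {client: lease expiration time}

        def grant(self, client, oid):
            expire = time.time() + self.lease_s
            self.leases.setdefault(oid, {})[client] = expire
            return expire              # the client may read its copy until this time

        def write(self, oid, send_invalidate, wait_for_ack):
            """Proceed once every lease holder has acknowledged or its lease has expired."""
            now = time.time()
            holders = {c: e for c, e in self.leases.get(oid, {}).items() if e > now}
            for c in holders:
                send_invalidate(c, oid)
            for c, expire in holders.items():
                wait_for_ack(c, deadline=expire)   # returns on ack or at lease expiration
            self.leases[oid] = {}      # every outstanding lease is now acked or expired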
3 Volume leases
Traditional leases provide good performance when the cost of renewing leases is amortized over
many reads. Unfortunately, for many WAN workloads, reads of an object may be spread over
seconds or minutes, requiring long leases in order to amortize the cost of renewals [10]. To make
leases practical for these workloads, our algorithms use a combination of object leases, which are
associated with individual data objects, and volume leases, which are associated with a collection
of related objects on the same server. In our scheme a client reads data from its cache only if both
its object and volume leases for that data are valid, and a server can modify data as soon as either
lease has expired. By making object leases long and volume leases short, we overcome the limitations of
traditional leases: long object leases have low overhead, while short volume leases allow servers
to modify data without long delays. Furthermore, if there is spatial locality within a volume, the
overhead of renewing short leases on volumes is amortized across many objects. This section first
describes the Volume Leases algorithm and then examines a variation called Volume Leases with
Delayed Invalidations. At the end of this section, we examine Best Effort Volume Leases to support
applications where timely updates are desired, but not required.
3.1 The basic algorithm
Figures 2, 3, and 4 show the data structures used by the Volume Leases algorithm, the server side
of the algorithm, and the client side of the algorithm, respectively. The basic algorithm is simple:
- Reading Data. Clients read cached data only if they hold valid object and volume leases on
the corresponding objects. Expired leases are renewed by contacting the appropriate servers.
When granting a lease for an object o to a client c, if o has been modified since the last time
c held a valid lease on o then the server piggybacks the current data on the lease renewal.
- Writing Data. Before modifying an object, a server sends invalidation messages to all clients that hold valid leases on the object. The server delays the write until it receives acknowledgments from all clients, or until the volume or object leases expire. After modifying the object, the server increments the object's version number. (An illustrative sketch of these rules follows this list.)
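As a concrete illustration of the read rule, the following Python sketch shows a client read path under volume leases, simplified to a single server and ignoring failures; the server's renew() interface and its reply fields are illustrative assumptions, not the protocol's actual message format.

    import time

    class Lease:
        def __init__(self, duration_s):
            self.expire = time.time() + duration_s
        def valid(self):
            return time.time() < self.expire

    class VolumeLeaseClient:
        def __init__(self, server, volume_id):
            self.server = server          # assumed RPC stub with a renew() method
            self.volume_id = volume_id
            self.volume_lease = None
            self.objects = {}             # object id -> {"data", "version", "lease"}

        def read(self, oid):
            obj = self.objects.get(oid)
            if (self.volume_lease is not None and self.volume_lease.valid()
                    and obj is not None and obj["lease"].valid()):
                return obj["data"]        # both leases valid: serve from the cache
            # Renew the volume and object leases together; the server is assumed to
            # piggyback the data whenever the client's copy is missing or stale.
            reply = self.server.renew(self.volume_id, oid,
                                      obj["version"] if obj else None)
            data = reply["data"] if "data" in reply else obj["data"]
            self.objects[oid] = {"data": data,
                                 "version": reply["version"],
                                 "lease": Lease(reply["object_timeout"])}
            self.volume_lease = Lease(reply["volume_timeout"])
            return data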
3.1.1 Handling unreachable clients
Client crashes or network partitions can make some clients temporarily unreachable, which may
cause problems. Consider the case of an unreachable client whose volume lease has expired but
that still holds a valid lease on an object. When the client becomes reachable and attempts to
renew its volume lease, the server must invalidate any modified objects for which the client holds a
valid object lease. Our algorithm thus maintains at each server an Unreachable set that records the
clients that have not acknowledged-within some timeout period-one of the server's invalidation
messages.
After receiving a read request or a lease renewal request from a client in its Unreachable set, a
server removes the client from its Unreachable set, renews the client's volume lease, and notifies
the client to renew its leases on any currently cached objects belonging to that volume. The client
then responds by sending a list of objects along with their version numbers, and the server replies
with a message that contains a vector of object identifiers. This message (1) renews the leases
of any objects not modified while the client was unreachable and (2) invalidates the leases of any
objects whose version number changed while the client was unreachable.
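A minimal sketch of the version comparison the server performs when building this reply is shown below; the function and variable names are illustrative.

    # Build the renew/invalidate lists for a reconnecting (previously unreachable) client.
    def reconcile(client_lease_list, current_versions):
        """client_lease_list: iterable of (object_id, client_version) sent by the client.
        current_versions: dict mapping object_id -> version currently held by the server.
        Returns (renew_list, invalidate_list)."""
        renew, invalidate = [], []
        for oid, client_version in client_lease_list:
            if current_versions.get(oid, -1) == client_version:
                renew.append(oid)        # unchanged while the client was unreachable
            else:
                invalidate.append(oid)   # modified (or removed); the client must discard it
        return renew, invalidate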
Data Structures
Volume: a volume v has the following attributes
    objects     = set of objects in v
    epoch       = epoch number (incremented on server reboot)
    expire      = time by which all current leases on v will have expired
    at          = set of <client, expire> pairs recording valid leases on v
    unreachable = set of clients whose volume leases have expired
                  and who may have missed object invalidation messages
Object: an object o has the following attributes
    data   = the object's data
    expire = time by which all current leases on o will have expired
    at     = set of <client, expire> pairs recording valid leases on o
Figure 2: Data Structures for the Volume Leases algorithm.
3.1.2 Handling server failures
When a server fails we assume that the state used to maintain cache consistency is lost. In LAN
systems, servers often reconstruct this state by polling their clients [17]. This approach is impractical
in a WAN, so our protocol allows a server to incrementally construct a valid view of the object
lease state, while relying on volume lease expiration to prevent clients from using leases that were
granted by a failed server. To recover from a crash, a server first invalidates all volume leases by
waiting for them to expire. This invalidation can be done in two ways. A server can save on stable
storage the latest expiration time of any volume lease. Then, upon recovery, it reads this timestamp
and delays all writes until after this expiration time. Alternatively, the server can save on stable
storage the duration of the longest possible volume lease. Upon recovery, the server then delays
any writes until this duration has passed.
Since object lease information is lost when a server crashes, the server effectively invalidates
all object leases by treating all clients as if they were in the Unreachable set. It does this by
maintaining a volume epoch number that is incremented with each reboot. Thus, all client requests
Server writes object o
    To_contact <- {}
    for all <client, expire> in o.at do
        if (expire > currentTime) and (client not in o.volume.unreachable) then
            To_contact <- To_contact + {client}
    send(INVALIDATE, o.id) to all clients in To_contact
    while (To_contact != {}) and (currentTime < min(o.expire, o.volume.expire)) do
        receive(ACK_INVALIDATE, o.id) from c in To_contact
        To_contact <- To_contact - {c}
    o.volume.unreachable <- o.volume.unreachable + To_contact   // clients that never acknowledged
    o.version <- o.version + 1

Server renews client lease
    receive(RENEW_LEASE_REQ, volId, volEpoch, objId, clientVersion) from c
    let v be the volume such that v.id = volId
    let o be the object such that o.id = objId
    if (v.epoch > volEpoch) then
        v.unreachable <- v.unreachable + {c}
        recoverUnreachableClient(c, v)   // see below
    if c not in v.unreachable then
        v.expire <- max(v.expire, currentTime + t_v)
        v.at <- v.at - {old leases for client c}
        v.at <- v.at + {<c, currentTime + t_v>}
        o.expire <- max(o.expire, currentTime + t)
        o.at <- o.at - {old leases for client c}
        o.at <- o.at + {<c, currentTime + t>}
        send(RENEW_LEASE_RESP, v.id, v.expire, v.epoch, o.version, o.expire
             [, o.data if o.version > clientVersion]) to c
    else
        // c is still unreachable; the lease is not renewed

recoverUnreachableClient(client c, volume v)
    send(MUST_RENEW_ALL, v.id) to c
    renewRecvd <- FALSE
    while (currentTime < T_f) and (not renewRecvd) do    // T_f bounds how long the server waits
        receive(RENEW_OBJ_LEASES, volId, leaseSet) from c
        renewRecvd <- TRUE
    if (not renewRecvd) then
        return   // client still unreachable
    invalList <- {}; renewList <- {}
    for all <objId, objVersion> in leaseSet do
        let o be the object such that o.id = objId
        if o.version = objVersion then
            renewList <- renewList + {<o.id, o.version, o.expire>}
        else
            invalList <- invalList + {o.id}
    send(INVALIDATE, invalList, RENEW, renewList) to c
    while (currentTime < T_f) and (no acknowledgment received) do
        receive(ACK_INVALIDATE) from c
    if acknowledgment received then
        v.unreachable <- v.unreachable - {c}
Figure 3: The Volume Leases Protocol (Server Side).
Client reads object o
    if not (validLease(o) and validLease(o.volume)) then
        renewLease(o.volume, o)
    read local copy of o

renewLease(volume v, object o)
    epoch <- max(v.epoch, -1)     // -1 if no epoch is known yet
    vnum  <- max(o.version, -1)   // -1 if no copy of o is cached
    send(RENEW_LEASE_REQ, v.id, epoch, o.id, vnum) to server
    // Note: if any receive times out, abort the read.
    if receive(MUST_RENEW_ALL, v.id) from server then
        renewAll(v)
    // Note: if any receive times out, abort the read.
    receive(RENEW_LEASE_RESP, v.id, v.expire, v.epoch, o.version, o.expire [, o.data]) from server

renewAll(volume v)
    leaseSet <- {}
    for all objects o for which (o.volume = v) and validLease(o) do
        leaseSet <- leaseSet + {<o.id, o.version>}
    send(RENEW_OBJ_LEASES, v.id, leaseSet) to server
    // Note: if any receive times out, abort the read.
    receive(INVALIDATE, invalList, RENEW, renewList) from server
    for all objId in invalList do
        let o be the object for which o.id = objId
        o.expire <- 0   // discard the stale lease
    for all <objId, version, expire> in renewList do
        let o be the object for which o.id = objId
        o.expire <- expire
    send(ACK_INVALIDATE, v.id) to server

validLease(lease l)
    if l.expire > currentTime then
        return TRUE
    else
        return FALSE

Client receives object invalidation message for object o
    receive(INVALIDATE, objId) from server
    let o be the object for which o.id = objId
    o.expire <- 0   // invalidate the cached object lease
    send(ACK_INVALIDATE, o.id) to server
Figure 4: The Volume Leases Protocol (Client Side).
to renew a volume must also indicate the last epoch number known to the client. If the epoch
number is current, then volume lease renewal proceeds normally. If the epoch number is old, then
the server treats the client as if the client were in the volume's Unreachable set.
It is also possible to store the cache consistency information on stable storage [5, 9]. This
approach reduces recovery time at the cost of increased overhead on normal lease renewals. We do
not investigate this approach in this paper.
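A sketch of the two recovery options described above follows; the stable_log keys and the epoch handling shown here are illustrative assumptions of the sketch.

    import time

    def safe_write_time_after_restart(stable_log):
        """Option 1 logs the latest volume lease expiration; option 2 logs only the longest
        possible volume lease duration and counts it from the restart time."""
        if "latest_expire" in stable_log:
            return stable_log["latest_expire"]
        return time.time() + stable_log["max_lease_duration"]

    def recover(volume_state, stable_log):
        # Incrementing the epoch makes clients holding pre-crash object leases look
        # unreachable on their next renewal, so their leases are reconciled lazily.
        volume_state["epoch"] = stable_log.get("epoch", 0) + 1
        return safe_write_time_after_restart(stable_log)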
3.1.3 The cost of volume leases
To analyze Volume Leases, we assume that servers grant leases of length t v on volumes and of
length t on objects. Typically, the volume lease is much shorter than the object leases, but when a
client accesses multiple objects from the same volume in a short amount of time, the volume lease
is likely to be valid for all of these accesses. As the read cost column of Table 1 indicates, the cost
of a typical read, measured in messages per read, is the sum of two terms (see Table 1). The first term reflects the fact that the volume lease must be renewed every t_v seconds but that the renewal is amortized over all objects in the volume, assuming that object o is read R_o times per second. The second term, 1/(R·t), is the standard cost of renewing an object lease. As the ack wait delay column indicates, if a client
or network failure prevents a server from contacting a client, a write to an object must be delayed
for min(t, t_v), i.e., until either lease expires. As the write cost and server state columns indicate,
servers track all clients that hold valid object leases and notify them all when objects are modified.
Finally, as the stale time columns indicate, Volume Leases never supplies stale data to clients.
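To make the amortization argument concrete, the helper below paraphrases the per-read cost just discussed (the exact expression tabulated in Table 1 is more detailed); the rates and timeouts in the example are made up for illustration.

    def messages_per_read(object_read_rate, object_timeout, volume_read_rate, volume_timeout):
        """object_read_rate: reads/s of this object; volume_read_rate: aggregate reads/s of
        all objects in the volume (which amortize volume renewals). Valid when each term is
        well below 1, i.e., renewals are amortized over several reads."""
        object_term = 1.0 / (object_read_rate * object_timeout)   # object lease renewals
        volume_term = 1.0 / (volume_read_rate * volume_timeout)   # volume lease renewals
        return object_term + volume_term

    # Example: an object read every 100 s with a 100,000 s object lease, in a volume read
    # once per second with a 100 s volume lease:
    #   1/(0.01 * 1e5) + 1/(1.0 * 100) = 0.001 + 0.01 = 0.011 messages per read.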
3.1.4 Protocol verification
To verify the correctness of the consistency algorithm, we implemented a variation of the volume
leases algorithm described in Figures 3 and 4 using the Teapot system [4]. The Teapot version of
the algorithm differs from the one described in the figures in two ways. First, the Teapot version
uses a simplified reconnection protocol for Unreachable clients. Rather than restore a client's set of
object leases, the Teapot version clears all of the client's object leases when an Unreachable client
reconnects. The second difference is that in the Teapot version every network request includes a
sequence number that is repeated in the corresponding reply. These sequence numbers allow the
protocol to match replies to requests.
Teapot allows us to describe the consistency state machines in a convenient syntax and then to
generate Murphi [7] code for mechanical verification. The Murphi system searches the protocol's
state space for deadlocks or cases where the system's correctness invariants are violated. Although
Murphi's exhaustive search of the state space is an exponential algorithm that only allows us to
verify small models of the system, in practice this approach finds many bugs that are difficult to
locate by hand and gives us confidence in the correctness of our algorithm [3].
Murphi verifies that the following two invariants hold: (1) when the server writes an object,
no client has both a valid object lease and a valid volume lease for that object and (2) when a
client reads an object, it has the current version of the object. The system we verified contains
one volume with two objects in it, and it includes one client and one server that communicate over
a network. Clients and servers can crash at any time, and the network layer can lose messages
at any time but cannot deliver messages out of order; the network layer can also report messages
lost when they are, in fact, delivered. We have tested portions of the state space for some larger
models, but larger models exhaust our test machine's 1 GB of memory before the entire state space
is examined.
3.2 Volume leases with delayed invalidations
The performance of Volume Leases can be improved by recognizing that once a volume lease
expires, a client cannot use object leases from that volume without first contacting the server. Thus,
rather than invalidating object leases immediately for clients whose volume leases have expired,
the server can send invalidation messages when (and if) the client renews the volume lease. In
particular, the Volume Leases with Delayed Invalidations algorithm modifies Volume Leases as
follows. If the server modifies an object for which a client holds a valid object lease but an expired
volume lease, the server moves the client to a per-volume Inactive set, and the server appends
any object invalidations for inactive clients to a per-inactive-client Pending Message list. When an
inactive client renews a volume, the server sends all pending messages to that client and waits for
the client's acknowledgment before renewing the volume. After a client has been inactive for d
seconds, the server moves the client from the Inactive set to the Unreachable set and discards the
client's Pending Message list. Thus, d limits the amount of state stored at the server. Small values
for d reduce server state but increase the cost of re-establishing volume leases when unreachable
clients become reconnected.
As Table 1 indicates, when a write occurs, the server must contact the C_v clients that hold valid volume leases rather than the C_o clients that hold valid object leases. Delayed invalidations
provide three advantages over Volume Leases. First, server writes can proceed faster because
many invalidation messages are delayed or omitted. Second, the server can batch several object
invalidation messages to a client into a single network message when the client renews its volume
lease, thereby reducing network overhead. Third, if a client does not renew a volume for a long
period of time, the server can avoid sending the object invalidation messages by moving the client
to the Unreachable set and using the reconnection protocol if the client ever returns.
3.3 Best-effort volume leases
Some applications do not require strong consistency but do want to deliver timely updates to
clients. For example, when an important event occurs, a news service would like to invalidate
stale copies of their front page quickly rather than wait until all customers know that the old page
is invalid. Thus, it is interesting to consider best-effort algorithms. A best effort algorithm should
always allow writes to proceed immediately, and it should notify clients of writes when doing so
does not delay writes.
Any of the volume algorithms may be converted to best effort algorithms by sending invalidations
in parallel with writes. Table 1 summarizes the characteristics of the best effort version
of the Delayed Invalidations algorithm. By sending invalidations in parallel with writes, the algorithm
limits the expected stale read time to notify(C_v), the time it takes for the server to send the messages, without delaying writes.
Note that in the best effort algorithms, volume leases serve a different purpose than in the
original volume algorithms: they limit the time during which clients can see stale data. Whereas
strong consistency algorithms generally set the volume lease time to be the longest period they
are willing to delay a write, this is no longer a factor for best effort algorithms. Instead, these
algorithms set t v to the longest time they will allow disconnected clients to unknowingly see stale
data. Since only the disconnected clients are affected by long t v values, this may allow larger
values for t v than before. For example, a news service using strong consistency might not want
to block dissemination of a news update for more than a few seconds, but it may be willing to
allow a few disconnected clients to see the old news for several minutes. Thus, such a system
might use a volume timeout t_v of a few seconds under strong consistency, but it might use a t_v of several minutes under a
best effort algorithm. As with the original volume algorithms, combining short volume leases with
long object leases allows leases to be short while amortizing renewal costs over many objects.
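A minimal sketch of the best effort write path, in which notification and the write proceed concurrently, is given below; threading is just one way to express this, error handling is omitted, and the callback names are assumptions of the sketch.

    import threading

    def best_effort_write(oid, new_data, holders, send_invalidate, apply_write):
        """Send invalidations in parallel with the write; never block on acknowledgments."""
        notifier = threading.Thread(
            target=lambda: [send_invalidate(c, oid) for c in holders], daemon=True)
        notifier.start()             # stale copies are invalidated on a best-effort basis
        apply_write(oid, new_data)   # the write itself is never delayed
        return notifier              # callers may join() to learn when the sends finished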
4 Methodology
To examine the algorithms' performance, we simulated each algorithm discussed in Table 1 under
a workload based on web trace data.
4.1 Simulator
We simulate a set of servers that modify files and provide files to clients, and a set of clients that
read files. The simulator accepts timestamped read and modify events from input files and updates
the cache state. The simulator records the size and number of messages sent by each server and
each client, as well as the size of the cache consistency state maintained at each server.
We validated the simulator in two ways. First, we obtained Gwertzman and Seltzer's simulator
[10] and one of their traces, and compared our simulator's results to theirs for the algorithms
that are common between the two studies. Second, we used our simulator to examine our algorithms
under simple synthetic workloads for which we could analytically compute the expected
results. In both cases, our simulator's results match the expected results.
Limitations of the simulator. Our simulator makes several simplifying assumptions. First,
it does not simulate concurrency-it completely processes each trace event before processing the
next one. This simplification allows us to ignore details such as mutual exclusion on internal data
structures, race conditions, and deadlocks. Although this could change the messages that are sent
(if, for instance, a file is read at about the same time it is written), we do not believe that simulating
these details would significantly affect our performance results.
Second, we assume infinitely large caches and we do not simulate server disk accesses. Both
of these effects reduce potentially significant sources of work that are the same across algorithms.
Thus, our results will magnify the differences among the algorithms.
Finally, we assume that the system maintains cache consistency on entire files rather than on
some finer granularity. We chose to examine whole-file consistency because this is currently the
most common approach for WAN workloads [1]. Fine-grained consistency may reduce the amount
of data traffic, but it also increases the number of control messages required by the consistency
algorithm. Thus, fine-grained cache consistency would likely increase the relative differences
among the algorithms.
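A skeleton of a trace-driven simulator of this kind is sketched below; the trace format and the per-algorithm handler interface are assumptions of the sketch, not a description of the simulator actually used in this study.

    import csv
    from collections import Counter

    def simulate(trace_path, algorithm):
        """trace_path: CSV rows of (timestamp, op, client_id, object_id), op in {read, write}.
        algorithm: object whose handle_read/handle_write return the messages each event costs."""
        stats = Counter()
        with open(trace_path) as f:
            for ts, op, client, oid in csv.reader(f):
                ts = float(ts)
                if op == "read":
                    stats["messages"] += algorithm.handle_read(ts, client, oid)
                else:
                    stats["messages"] += algorithm.handle_write(ts, oid)
                stats[op] += 1
        return stats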
4.2 Workload
We use a workload based on traces of HTTP accesses at Boston University [6]. These traces span
four months during January 1995 through May 1995 and include all HTTP accesses by Mosaic
browsers-including local cache hits-for 33 SPARCstations.
Although these traces contain detailed information about client reads, they do not indicate when
files are modified. We therefore synthesize writes to the objects using a simple model based on two
studies of write patterns for web pages. Bestavros [2] examined traces of the Boston University
web server, and Gwertzman and Seltzer [10] examined the write patterns of three university web
servers. Both studies concluded that few files change rapidly, and that globally popular files are
less likely to change than other files. For example, Gwertzman and Seltzer's study found that 2%-
23% of all files were mutable (each file had a greater than 5% chance of changing on any given
day) and 0%-5% of the files were very mutable (had greater than 20% chance of changing during
a 24-hour period).
Based on these studies, our synthetic write workload divides the files in the trace into four
groups. We give the 10% most referenced files a low average number of random writes per day
(we use a Poisson distribution with an expected number of writes per day of 0.005). We then
randomly place the remaining 90% of the files into three sets. The first set, which includes 3% of
all files in the trace, are very mutable and have an expected number of writes per day of 0.2. The
second set, 10% of all files in the trace, are mutable and have an expected number of writes per
day of 0.05. The remaining 77% of the files have an expected number of writes per day of 0.02. In
section 5.4, we examine the sensitivity of our results to these parameters.
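This write model can be generated as a Poisson process per file; the sketch below uses the per-day rates quoted in the text, while the class names and code structure are illustrative.

    import random

    RATES_PER_DAY = {"top10pct": 0.005, "very_mutable": 0.2, "mutable": 0.05, "other": 0.02}

    def synthesize_writes(file_class, trace_days, rng=None):
        """Return sorted write times (seconds) over trace_days days for one file, drawn from
        a Poisson process with the class's expected number of writes per day."""
        rng = rng or random.Random(0)
        rate_per_sec = RATES_PER_DAY[file_class] / 86400.0
        t, horizon, writes = 0.0, trace_days * 86400.0, []
        while True:
            t += rng.expovariate(rate_per_sec)   # exponential inter-arrival times
            if t >= horizon:
                return writes
            writes.append(t)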
We simulate the 1000 most frequently accessed servers; this subset of the servers accounts for
more than 90% of all accesses in the trace. Our workload consists of 977,899 reads of 68,665
different files plus 209,461 artificially generated writes to those files. The files in the workload
are grouped into 1000 volumes corresponding to the 1000 servers. We leave more sophisticated
grouping as future work.
5 Simulation results
This section presents simulation results that compare the volume algorithms with other consistency
schemes. In interpreting these results, remember that the trace workload tracks the activities of a
relatively small number of clients. In reality, servers would be accessed by many other clients, so
the absolute values we report for server and network load are lower than what the servers would
actually experience. Instead of focusing on the absolute numbers in these experiments, we focus
on the relative performance of the algorithms under this workload.
5.1 Server/network load
Figure 5 shows the performance of the algorithms. The x-axis, which uses a logarithmic scale,
gives the object timeout length in seconds (t) used by each algorithm, while the y-axis gives the
number of messages sent between the client and servers. For Volume Lease, t refers to the object
lease timeout and not the volume lease timeout; we use different curves to show different volume
lease timeouts and indicate the volume lease time in the second parameter of the label. For
Figure 5: Number of messages vs. timeout length t (seconds). Curves shown: Callback, Client Poll(t), Object Lease(t), Volume(t, 10), Volume(t, 100), Delay Volume(t, 10, -), and Delay Volume(t, 100, -).
the Delay Volume lines, we assume an infinite acknowledgement wait delay (d) as signified by the
third parameter; this means that a server never moves idle clients to the unreachable list. The line
for Callback is flat because Callback invalidates all cached copies regardless of t. The Lease and
basic Volume Lease lines decline until t reaches about 100,000 seconds and then rise slightly. This
shape comes from two competing influences. As t rises, the number of lease renewals by clients
declines, but the number of invalidations sent to clients holding valid leases increases. For this
workload, once a client has held an object for 100,000 seconds, it is more likely that the server
will modify the object than that the client will read it, so leases shorter than this reduce system
load. As t increases, Client Poll and Delayed Invalidation send strictly fewer messages. Client
Poll never sends invalidation messages, and Delayed Invalidation avoids sending invalidations to
clients that are no longer accessing a volume, even if the clients hold valid object leases. Note that
for timeouts of 100,000 seconds, Client Poll results in clients accessing stale data on about 1% of
all reads, and for timeout values of 1,000,000 seconds, the algorithm results in clients accessing
stale copies on about 5% of all reads.
The separation of the Lease(t), Volume(t, 10), and Volume(t, 100) lines shows the
additional overhead of maintaining volume leases. Shorter volume timeouts increase this overhead.
Lease can be thought of as the limiting case of infinite-length volume leases.
Although Volume Leases imposes a significant overhead compared to Lease for a given value of
t, applications that care about fault tolerance can achieve better performance with Volume Leases
than without. For example, the triangles in the figure highlight the best performance achievable
by a system that does not allow writes to be delayed for more than 10 seconds: Volume(t, 10) sends 32% fewer messages than an object lease algorithm with a 10-second timeout, and Delay Volume(t, 10, -) sends 39% fewer messages. Similarly, as indicated by the squares in the figure, for applications that can delay writes at most 100 seconds, Volume Lease outperforms Lease by 30% and Delayed Invalidations outperforms the lease algorithm by 40%.

Figure 6: Number of messages vs. timeout length for Volume Leases with Delayed Invalidations as the volume lease length is varied. Curves shown: Delay Volume(t, 10, -), Delay Volume(t, 100, -), Delay Volume(t, 1000, -), and Delay Volume(t, 10000, -).
Although providing strong consistency is more expensive than the Poll algorithm, the cost
appears tolerable for many applications. For example, the polling configuration that uses about 15% fewer messages than Delayed Invalidations also supplies stale data to clients on about 1% of all reads. Even in the extreme case of a polling configuration in which clients see stale data on over 35% of reads, Delayed Invalidations uses less than twice as many messages as
the polling algorithm.
We also examined the network bytes sent by these algorithms and the server CPU load imposed
by these algorithms. By both of these metrics, the difference in cost of providing strong consistency
compared to Poll was smaller than the difference by the metric of network messages. The relative
differences among the lease algorithms were also smaller for these metrics than for the network
messages metric for the same reasons.
A key advantage of Best Effort Volume Leases for applications that permit relaxed consistency
is that the algorithm may enable longer volume lease timeouts and thus may reduce consistency over-
head. Strict consistency algorithms set the volume timeout, t v , to be the longest tolerable write
delay, but the best effort algorithms can set t v to be the longest time disconnected clients should
Figure 7: Server state (bytes) at the most popular server vs. timeout t (seconds). Curves shown: Callback, Object Lease(t), Delay Volume(t, 10, -), Delay Volume(t, 100, -), Delay Volume(t, 10, 10000), and Delay Volume(t, 100, 10000).
be allowed to unknowingly access stale data; this may allow larger values of t v for some services
that use Best Effort. Figure 6 shows the effect of varying the volume lease timeout on the number
of messages sent.
5.2 Server state
Figures 7 and 8 show the amount of server memory required to implement the algorithms. The first
shows the requirements at the trace's most heavily loaded server, and the second shows the demand
at the trace's tenth most heavily loaded server. The x-axis shows the timeout in seconds using a
log scale. The y-axis is given in bytes and represents the average number of bytes of memory used
by the server to maintain consistency state. We charge the servers 16 bytes to store an object or
volume lease or callback record. For messages queued by the Delay algorithm, we also charge the server for the memory occupied by each queued message.
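The accounting behind these figures can be summarized as below; the 16-byte record size comes from the text, while the per-queued-message charge is an assumption of this sketch.

    def server_state_bytes(num_object_leases, num_volume_leases, num_callbacks,
                           num_queued_msgs, bytes_per_queued_msg=16):
        """Rough server-state accounting: 16 bytes per lease or callback record (as in the
        text) plus an assumed fixed charge for each queued invalidation message."""
        RECORD_BYTES = 16
        return (RECORD_BYTES * (num_object_leases + num_volume_leases + num_callbacks)
                + bytes_per_queued_msg * num_queued_msgs)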
For short timeouts, the lease algorithms use less memory than the callback algorithm because
the lease algorithms discard callbacks for inactive clients. Compared to standard leases, Volume
Leases increase the amount of state needed at servers, but this increase is small because volume
leases are short, so servers generally maintain few active volume leases. If the Delay algorithm
never moves clients to the Unreachable set it may store messages destined for inactive clients for
a long time and use more memory than the other algorithms. Conversely, if Delay uses a short
d parameter so that it can move clients from the Inactive set to the Unreachable set and discard
their pending messages and callbacks, Delay can use less state than the other lease or callback
Figure 8: Server state (bytes) at the 10th most popular server vs. timeout t (seconds). Curves shown: Callback, Object Lease(t), Delay Volume(t, 10, -), Delay Volume(t, 100, -), Delay Volume(t, 10, 10000), and Delay Volume(t, 100, 10000).
algorithms. Note that running Delay with short discard times will increase server load and the
number of consistency messages. We have not yet quantified this effect because it will depend on
implementation details of the reconnection protocol.
5.3 Bursts of load
Figure 9 shows a cumulative histogram in which the y value, shown in log scale, counts the number
of 1-second periods in which the load at the server was at least x messages sent or received per
second. There are three groups of lines. Client Poll and Object Lease both use short timeouts, so
when clients read groups of objects from a server, these algorithms send groups of object renewal
messages to the server. Callback and Volume use long object lease periods, so read traffic puts
less load on the server, but writes result in bursts of load when popular objects are modified. For
this workload, peak loads correspond to bursts of about one message per client. Finally, Delay
uses long object leases to reduce bursts of read traffic from clients accessing groups of objects,
and it delays sending invalidation messages to reduce bursts of traffic when writes occur. This
combination reduces the peak load on the server for this workload.
For the experiment described in the previous paragraph, Client Poll and Object Lease have periods
of higher load than Callback and Volume for two reasons. First, the system shows performance
for a modest number of clients. Larger numbers of clients would increase the peak invalidate load
for Callback and Volume. For Client Poll and Object Lease, increasing the number of clients would
increase peak server load less dramatically because read requests from additional clients would be
Figure 9: Periods of heavy server load under the default workload for the most heavily loaded server: the number of 1-second periods with at least a given load (messages per 1 second). Curves shown include Client Poll, Object Lease, Callback, and Delay Volume(1x10^7, 10, -).
more spread out in time. The second reason for Callback and Volume's advantage in this experiment
is that in the trace clients read data from servers in bursts, but writes to volumes are not bursty
in that a write to one object in a volume does not make it more likely that another object from the
same volume will soon be modified. Conversely, Figure 10 shows a "bursty write" workload in
which when one object is modified, we select k other objects from the same volume to modify at
the same time. For this graph, we compute k as a random exponential variable with a mean of 10.
This workload significantly increases the bursts of invalidation traffic for Volume and Callback.
5.4 Sensitivity
Our workload utilizes a trace of read events, but it generates write events synthetically. In this
subsection, we examine how different assumptions about write frequency affect our results.
Figure 11 shows the performance of the algorithms for representative parameters as we vary
the write frequency. Our default workload gives the 10% most referenced files a per-day change
probability of 0.5%, 3% of the files a per-day change probability of 20%, 10% of the files a probability
of 5%, and 77% of the files a per-day change probability of 2%. For each point on the
graph, we multiply those per-day probabilities by the value indicated by the x-axis. Note that our
workload generator converts per-day change probabilities to per-second change probabilities, so
per-day probabilities greater than 100% are possible.
We examine the lease algorithms as they might be parameterized in a system that never wishes
to delay writes more than 100 seconds and compare to a poll algorithm with a 100-second timeout
Figure 10: Periods of heavy server load under the "bursty write" workload for the most heavily loaded server: the number of 1-second periods with at least a given load (messages per 1 second). Curves shown include Client Poll, Object Lease, Callback, and Delay Volume(1x10^7, 10, -).
and a callback algorithm with infinite timeout. These results indicate that the Client Poll and Lease algorithms are little affected by changing write rates. This is because the object timeouts are so short that writes are unlikely to cause many invalidations even when their frequency is increased 100-fold. The volume lease algorithms and Callback all cost more as write frequency increases. The costs of Callback and Volume Leases increase more quickly than the cost of Delayed Invalidations because the first two algorithms
have long object callback periods and thus send invalidation messages to all clients that have done
reads between a pair of writes. Delayed Volume rises more slowly because it does not send object
invalidations once a volume lease expires.
6 Related work
Our study builds on efforts to assess the cost of strong consistency in wide area networks. Gwertzman
and Seltzer [10] compare cache consistency approaches through simulation and conclude
that protocols that provide weak consistency are the most suitable to a Web-like environment. In
particular, they find that an adaptive version of Poll(t) exerts a lower server load than an invalidation
protocol if the polling algorithm is allowed to return stale data 4% of the time. We arrive
at different conclusions. In particular, we observe that much of the apparent advantage of weak
consistency over strong consistency in terms of network traffic comes from clients reading stale
data [14]. Also, we use volume leases to address many of the challenges to strong consistency.
Figure 11: Messages sent under different write frequencies. The x-axis represents a multiplier to the write frequency compared to our default workload. Curves shown include Callback and Delay Volume(1x10^7, 100, -).
We also build on the work of Liu and Cao [14], who use a prototype server invalidation system
to evaluate the overhead of maintaining consistency at the servers compared to client polling. They
also study ways to reduce server state via per-object leases. As with our study, their workload is
based on a trace of read requests and synthetically-generated write requests. Our work differs primarily
in our treatment of fault tolerance issues. In particular, after a server recovers our algorithm
uses volume timeouts to "notify" clients that they must contact the server to renew leases; Liu and
Cao's algorithm requires the server to send messages to all clients that might be caching objects
from the server. Also, our volume leases provide a graceful way to handle network partitions;
when a network failure occurs, Liu and Cao's algorithm must periodically retransmit invalidation
messages, and it does not guarantee strong consistency in that case.
Cache consistency protocols have long been studied for distributed file systems [11, 17, 19].
Several aspects of Coda's [13] consistency protocol are reflected in our algorithms. In particular,
our notion of a volume is similar to that used in Coda [16]. However, ours differ in two key
respects. First, Coda does not associate volumes with leases, and relies instead on other methods to
determine when servers and clients become disconnected. The combination of short volume leases
and long object leases is one of our main contributions. Second, because Coda was designed
for different workloads, its design trade-offs are different. For example, because Coda expects
clients to communicate with a small number of servers and it regards disconnection as a common
occurrence, Coda aggressively attempts to set up volume callbacks to all servers on each hoard walk. In our environment, clients are associated with a larger universe of
servers, so we only renew volume leases when a client is actively accessing the server. Also, in
our algorithm when an object is modified, the server does not send volume invalidation messages
to clients that hold volume leases but not object leases on the object in question. We thus avoid the
false sharing problem of which Mummert warns [16].
Our best effort leases algorithm provides similar semantics to and was inspired by Coda's
optimistic concurrency protocol [13]. Bayou [20] and Rover [12] also implement optimistic con-
currency, but they can detect and react to more general types of conflicts than can Coda.
Worrell [21] studied invalidation-based protocols in a hierarchical caching system and concluded
that server-driven consistency was practical for the web. We plan to explore ways to add
hierarchy to our algorithms in the future.
Cache consistency protocols have long been studied for distributed file systems [18, 17, 19].
Howard et al. [11] reached the somewhat counterintuitive conclusion that server-driven consistency
generally imposed less load on the server than client polling even though server-driven algorithms
provide stronger guarantees for clients. This is because servers have enough information to
know exactly when messages need to be sent.
Mogul's draft proposal for HTTP 1.1 [15] includes a notion of grouping files into volumes
to reduce the overhead of HTTP's polling-based consistency protocol. We are not aware of any
implementations of this idea.
Finally, we note that volume leases on the set of all objects provided by a server can be thought
of as providing a framework for the "heartbeat" messages used in many distributed state systems.
7 Conclusions
We have taken three cache consistency algorithms that have been previously applied to file systems
and quantitatively evaluated them in the context of Web workloads. In particular, we compared the
timeout-based Client Poll algorithm with the Callback algorithm, in which a server invalidates
before each write, and Gray and Cheriton's Lease algorithm. The Lease algorithm presents a
tradeoff similar to the one offered by Client Poll. On the one hand, long leases reduce the cost of
reads by amortizing each lease renewal over many reads. On the other hand, short leases reduce
the delay on writes when a failure occurs. To solve this problem, we have introduced the Volume
Lease, Volume Lease with Delayed Invalidation, and Best Effort Volume Lease algorithms that
allow servers to perform writes with minimal delay, while minimizing the number of messages
necessary to maintain consistency. Our simulations confirm the benefits of these algorithms.
Acknowledgments
Some of the work described here appeared in an earlier paper [22]. We thank James Gwertzman
and Margo Seltzer for making their simulator available to us so we could validate our simulator. We
thank Carlos Cunha, Azer Bestavros and Mark Crovella for making the BU web traces available
to us. This work was funded in part by an NSF CISE grant (CDA-9624082), gifts from Novell
and Sun Microsystems, and DARPA/SPAWAR grant number N66001-98-8911. Dahlin and Alvisi
were supported by NSF CAREER awards (CCR-9733842) and (CCR-9734185), respectively.
--R
Hypertext Transfer Protocol - HTTP/1.0
Speculative Data Dissemination and Service to Reduce Server Load
Experience with a Language for Writing Coherence Protocols.
Teapot: Language Support for Writing Memory Coherence Protocols.
The Rio File Cache: Surviving Operating System Crashes.
Characteristics of WWW Traces.
Protocol Verification as a Hardware Design Aid.
An Efficient Fault-Tolerant Mechanism for Distributed File Cache Consistency
Notes on data base operating systems.
Scale and Performance in a Distributed File System.
A Toolkit for Mobile Information Access.
Disconnected Operation in the Coda File System.
Maintaining Strong Cache Consistency in the World-Wide Web
A Design for Caching in HTTP 1.1 Preliminary Draft.
Large Granularity Cache Coherence for Intermittent Connectivity.
Caching in the Sprite Network File System.
Design and Implementation of the Sun Network Filesystem.
Spritely NFS: Experiments with Cache Consistency Protocols.
Managing Update Conflicts in Bayou
Invalidation in Large Scale Network Object Caches.
Using Leases to Support Server-Driven Consistency in Large-Scale Systems
--TR
--CTR
Xianjun Geng , Ram D. Gopal , R. Ramesh , Andrew B. Whinston, Scaling Web Services with Capacity Provision Networks, Computer, v.36 n.11, p.64-72, November
Venkata Duvvuri , Prashant Shenoy , Renu Tewari, Adaptive Leases: A Strong Consistency Mechanism for the World Wide Web, IEEE Transactions on Knowledge and Data Engineering, v.15 n.5, p.1266-1276, September
Randal C. Burns , Robert M. Rees , Darrell D. E. Long, Efficient Data Distribution in a Web Server Farm, IEEE Internet Computing, v.5 n.4, p.56-65, July 2001
Rajeev Gupta , Ashish Puri , Krithi Ramamritham, Executing incoherency bounded continuous queries at web data aggregators, Proceedings of the 14th international conference on World Wide Web, May 10-14, 2005, Chiba, Japan
L. Y. Cao , M. T. Özsu, Evaluation of Strong Consistency Web Caching Techniques, World Wide Web, v.5 n.2, p.95-123, 2002
Yuguang Fang , Yi-Bing Lin, Strongly consistent access algorithms for wireless data networks, Wireless Networks, v.11 n.3, p.243-254, May 2005
Mohammad S. Raunak , Prashant Shenoy , Pawan Goyal , Krithi Ramamritham, Implications of proxy caching for provisioning networks and servers, ACM SIGMETRICS Performance Evaluation Review, v.28 n.1, p.66-77, June 2000
Randal C. Burns , Robert M. Rees , Darrell D. E. Long, Efficient Data Distribution in a Web Server Farm, IEEE Internet Computing, v.5 n.4, p.56-65, July 2001
Edith Cohen , Haim Kaplan, Refreshment policies for web content caches, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.38 n.6, p.795-808, 22 April 2002
Chi-Hung Chi , HongGuang Wang, A generalized model for characterizing content modification dynamics of web objects, Web content caching and distribution: proceedings of the 8th international workshop, Kluwer Academic Publishers, Norwell, MA, 2004
Anoop Ninan , Purushottam Kulkarni , Prashant Shenoy , Krithi Ramamritham , Renu Tewari, Cooperative leases: scalable consistency maintenance in content distribution networks, Proceedings of the 11th international conference on World Wide Web, May 07-11, 2002, Honolulu, Hawaii, USA
Jian Yin , Lorenzo Alvisi , Mike Dahlin , Arun Iyengar, Engineering server-driven consistency for large scale dynamic Web services, Proceedings of the 10th international conference on World Wide Web, p.45-57, May 01-05, 2001, Hong Kong, Hong Kong
Randal C. Burns , Robert M. Rees , Larry J. Stockmeyer , Darrell D. E. Long, Scalable Session Locking for a Distributed File System, Cluster Computing, v.4 n.4, p.295-306, October 2001
Magnus E. Bjornsson , Liuba Shrira, BuddyCache: high-performance object storage for collaborative strong-consistency applications in a WAN, ACM SIGPLAN Notices, v.37 n.11, November 2002
Purushottam Kulkarni , Prashant Shenoy , Weibo Gong, Scalable techniques for memory-efficient CDN simulations, Proceedings of the 12th international conference on World Wide Web, May 20-24, 2003, Budapest, Hungary
Jian Yin , Lorenzo Alvisi , Mike Dahlin , Arun Iyengar, Engineering web cache consistency, ACM Transactions on Internet Technology (TOIT), v.2 n.3, p.224-259, August 2002
Amol Nayate , Mike Dahlin , Arun Iyengar, Transparent information dissemination, Proceedings of the 5th ACM/IFIP/USENIX international conference on Middleware, October 18-22, 2004, Toronto, Canada
Arun Iyengar , Daniela Rosu, Architecting Web sites for high performance, Scientific Programming, v.10 n.1, p.75-89, January 2002
Ming-Kuan Liu , Fei-Yue Wang , Daniel Dajun Zeng, Web caching: a way to improve web QoS, Journal of Computer Science and Technology, v.19 n.2, p.113-127, March 2004 | file system;scalable server;lease;cache consistency;fault tolerance;volume |
628018 | The Ant System Applied to the Quadratic Assignment Problem. | AbstractIn recent years, there has been growing interest in algorithms inspired by the observation of natural phenomena to define computational procedures that can solve complex problems. In this article, we describe a distributed heuristic algorithm that was inspired by the observation of the behavior of ant colonies, and we propose its use for the Quadratic Assignment Problem. The results obtained in solving several classical instances of the problem are compared with those obtained from other evolutionary heuristics to evaluate the quality of the proposed system. | Introduction
The Quadratic Assignment Problem (QAP) of order n consists in looking for the best
allocation of n activities to n locations, where the terms activity and location should be
considered in their most general sense. It was first formulated in (Koopmans and Beckman,
1957) and since then it has been recognized as a model of many different real situations;
applications have been described concerning planning of buildings in university campuses,
arrangement of departments in hospitals, minimization of the total wire length in electronic
circuits, ordering of correlated data in magnetic tapes and others (Burkard, 1984).
Mathematically the problem is defined by three matrices of dimension n × n:
D = [d_ih] of the distances (between location i and location h);
F = [f_jk] of the flows (between activity j and activity k);
C = [c_ij] of the assignment costs (of activity j to location i).
Normally D and F are integer-valued matrices, while the assignment cost c_ij of
activity j to location i is usually ignored, as it does not make a significant contribution to the
complexity of solving the problem.
Under these hypotheses, a permutation π: i → π(i) can be interpreted as a particular
assignment of activity j = π(i) to location i, for i = 1, ..., n.
The cost of transferring data (or materials etc., depending on the problem in question)
between two activities can be expressed as the product of the distance between the locations
to which the activities are assigned by the flow between the two activities, d_ih · f_π(i)π(h).
To solve the QAP one must thus find a permutation π of the indices (1, 2, ..., n) which
minimizes the total assignment cost:

z(π) = Σ_{i=1..n} Σ_{h=1..n} d_ih · f_π(i)π(h)    (1)
The problem can be reformulated to show the quadratic nature of the objective function:
solving the problem means identifying a permutation matrix X of dimension n × n (whose
generic element x_ij is 1 if activity j is assigned to location i and 0 in the other cases) such that:

min z = Σ_{i=1..n} Σ_{h=1..n} Σ_{j=1..n} Σ_{k=1..n} d_ih f_jk x_ij x_hk    (2)

subject to the following constraints

Σ_{i=1..n} x_ij = 1,   j = 1, ..., n    (3)
Σ_{j=1..n} x_ij = 1,   i = 1, ..., n    (4)
x_ij ∈ {0, 1},   i, j = 1, ..., n    (5)

which identify the matrix X as belonging to the set Π of the permutation matrices of order n.
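To make the objective function concrete, the small Python fragment below evaluates z for a given permutation (a minimal sketch; the matrices and the permutation used are arbitrary illustrative values, not data from any instance discussed in this paper):

def qap_cost(D, F, pi):
    # z(pi) = sum over all location pairs (i, h) of d_ih * f_pi(i)pi(h),
    # where pi[i] is the (0-based) activity assigned to location i.
    n = len(pi)
    return sum(D[i][h] * F[pi[i]][pi[h]] for i in range(n) for h in range(n))

# Tiny illustrative instance (values chosen arbitrarily).
D = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
F = [[0, 3, 1], [3, 0, 2], [1, 2, 0]]
print(qap_cost(D, F, [2, 0, 1]))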
As the QAP is a generalization of the Traveling Salesman Problem (TSP), it is also an NP-complete
problem (Sahni and Gonzales, 1976).
The techniques which can be used to find the optimal solution are limited to branch and
bound and cutting planes methods: with current hardware, problems of order greater than 20
cannot be solved in an acceptable time (Burkard et al., 1994).
For this reason, in recent years many heuristic algorithms have been proposed which, though
not ensuring that the solution found is the best one, give good results in an acceptable
computation time (Maniezzo et al., 1994).
In this article we propose the use of a new heuristic procedure, improving over an algorithm
originally developed for the TSP, which shows the emergence of global properties following
the mutual interaction among many elementary agents (Colorni et al., 1991, 1992, Dorigo et
al., 1996). In particular we are interested in the distribution of search activity among agents
which can only perform very simple actions, so that we can easily parallelize the
computational effort (see Li, Pardalos, 1992 for a discussion on the effectiveness of
parallelization for the QAP).
Our work was inspired by researches on the behavior of ant colonies (Denebourg et al.,
1983), where one of the problems of interest is to understand how ants, which are almost
blind animals with very simple individual capacities, can, when they act together in a colony,
find the shortest route between two points, (e.g. the ant's nest and a source of food).
The explanation lies in how the ants transmit information on the path followed: each of them
when it moves deposits a substance, called pheromone, which can be detected by the other
ants. While an ant with no information moves essentially at random, an ant which encounters a
path already followed by others tends to follow the marked path (the probability of this
occurring depends on the intensity of the trace perceived), in turn leaving new pheromone
which is added to that already existing. The emerging collective effect is a form of
autocatalytic, or positive feedback, behavior: the more ants follow a particular path, the more
attractive this path becomes for the next ants that meet it. The
process is characterized by a positive feedback; in fact, the probability with which an ant
chooses a path increases with the number of ants which have already chosen the same path.
The final result is that nearly all the ants will choose to follow the shortest path, even if each
ant's decision always remains probabilistic (that is, they can also explore new paths).
The algorithm which we will define in the next section is inspired by the observations made
on ant colonies and is thus called the Ant System. A description of the original version of this
method and of its experimental results when applied to the Traveling Salesman Problem can
be found in (Dorigo, et al., 1996).
2. The Ant System
In this section we introduce a new heuristic (below called the Ant System) for the QAP which
uses some characteristics of behavior shown in reality by ant colonies, defining a system of
artificial "ants". The Ant System presented in this paper represents an improvement of the
algorithm described in (Dorigo et al., 1996), from which it differs in several structural
elements.
In the Ant System each artificial "ant" is an agent with the following characteristics:
i) when it chooses to assign activity j to location i it leaves a substance, called trace (the
equivalent of the pheromone), τ_ij on the coupling (i, j);
ii) it chooses the location to which a given activity is to be assigned with a probability
which is a function of the "potential goodness" η_ij of the coupling (i, j) and of the quantity
of trace present on the coupling itself;
iii) to construct a complete permutation, locations and activities already coupled are
inhibited until all activities have been assigned.
This heuristic uses a population of m agents which construct solutions step by step, assigning
an activity to each location. When all the ants have constructed their permutations, the best
assignments are rewarded so as to encourage the identification of ever better solutions in the
next cycles.
To satisfy the requirement that the ants assign each activity to a different location, we
associate a data structure, called a tabu list, to each ant. This memorizes the locations already
used and stops the ant assigning them a new activity before a cycle is complete (which thus
identifies a permutation). Once the permutation is completed, the tabu list is emptied and the
ant is free to choose its own couplings again. Let us define tabu_k as the vector containing the
tabu list of the k-th ant, and tabu_k(s) as the s-th element of that list (the location occupied
by the s-th activity in the assignment made by the k-th ant).
We now introduce a method to calculate the "potential goodness" η_ij of an assignment and
thus the initial assignments (when there is no trace); the initial situation will then be
modified by the experience acquired by the population via the trace.
The basic idea is to exploit the information given by an effective lower bound to the
completion of the problem solution and use it as an indicator of the expected proficiency of a
particular pairing.
For the particular case of the QAP, several lower bounds have been proposed (Pardalos and
Wolkowicz, 1994). The best known one, and among those we tested the one that had the
best effectiveness/computational cost ratio, is the Gilmore and Lawler bound (independently
presented by Gilmore, 1962, and Lawler, 1963). The bound is obtained by computing a value
z_GL as

z_GL = min Σ_{i=1..n} Σ_{j=1..n} ( min Σ_{h=1..n} Σ_{k=1..n} d_ih f_jk x_hk ) x_ij    (6)

where the minima are computed subject to constraints (3), (4) and (5). Obviously z_GL ≤
z_QAP. Using the same bounding strategy it is also possible to obtain a lower bound to the
value of the completion of a partial assignment. In fact, suppose that the index set of the
facilities, {1, 2, ..., n}, is partitioned into two subsets Γ1 and Γ2, corresponding to the indices
of the already assigned facilities and to the indices of the still unassigned facilities, respectively.
Similarly, suppose that the index set of the locations, {1, 2, ..., n}, is partitioned into two
subsets Λ1 and Λ2, corresponding to the indices of the already assigned locations and to the indices of the
still unassigned locations, respectively. Then, equation (2) can be rewritten as:

z = Σ_{i∈Λ1} Σ_{h∈Λ1} d_ih f_π(i)π(h)
    + Σ_{i∈Λ1} Σ_{h∈Λ2} Σ_{k∈Γ2} d_ih f_π(i)k x_hk
    + Σ_{i∈Λ2} Σ_{h∈Λ1} Σ_{j∈Γ2} d_ih f_jπ(h) x_ij
    + Σ_{i∈Λ2} Σ_{h∈Λ2} Σ_{j∈Γ2} Σ_{k∈Γ2} d_ih f_jk x_ij x_hk
Notice that the first term of the objective function is now a known constant, z_1, and the fourth
term is a reduced QAP instance to which formula (6) can be applied to obtain a bound z_4. A
lower bound z_23 to the value of the second and third terms can be obtained (Burkard, 1984)
by solving an assignment problem defined over a cost matrix [θ_lm], l ∈ Λ2, m ∈ Γ2, where

θ_lm = Σ_{i∈Λ1} (d_li f_mπ(i) + d_il f_π(i)m)    (7)
A lower bound to the completion cost of a partial assignment can thus be computed as

z_LB = z_1 + z_23 + z_4    (8)
On the basis of these results, to compute the attractiveness of a coupling (i, j), i ∈ Λ2, j ∈ Γ2, we
simply compute formula (8) for a partial assignment where, apart from the already specified
couplings, we also tentatively assign facility j to location i. Therefore we tentatively update the
sets Γ1, Γ2, Λ1, Λ2 accordingly and set η_ij = 1 / z_LB.
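As an illustration, the Gilmore-Lawler bound of formula (6) can be computed as sketched below (a minimal Python sketch only: it assumes SciPy is available for the final linear assignment problem and computes the global bound z_GL; the completion bound (8) used to derive η_ij combines the same ingredients but is not shown):

import numpy as np
from scipy.optimize import linear_sum_assignment

def gilmore_lawler_bound(D, F):
    # For each tentative coupling (i, j) the inner minimisation of (6) reduces
    # to d_ii*f_jj plus the minimal scalar product of row i of D and row j of F
    # (diagonal entries excluded), obtained by sorting one vector in increasing
    # and the other in decreasing order.  The outer minimisation is then a
    # linear assignment problem over the resulting matrix L.
    D = np.asarray(D, dtype=float)
    F = np.asarray(F, dtype=float)
    n = D.shape[0]
    L = np.zeros((n, n))
    for i in range(n):
        d = np.sort(np.delete(D[i], i))            # ascending
        for j in range(n):
            f = np.sort(np.delete(F[j], j))[::-1]  # descending
            L[i, j] = D[i, i] * F[j, j] + np.dot(d, f)
    rows, cols = linear_sum_assignment(L)
    return L[rows, cols].sum()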
As an example we consider Nugent's problem of order 5 (Nugent et al., 1968), a problem
arising in a hospital layout context; D and F are the distance and flow matrices of that instance.
The following execution trace shows how a solution is constructed. Assume for simplicity
that solutions are constructed assigning facilities to locations of increasing indices, that is,
first a facility to assign to location 1 is chosen, then the facility to assign to location 2, and so
on. The construction goes on as follows. At the root node no facility is assigned, so there are
5 possible assignments of facilities to location 1; their costs and the corresponding 5 lower
bounds z_LB are computed, and the choice goes to assigning facility 4 to location 1. At the
second level one needs to define the assignment to location 2. Being one facility already
assigned, only 4 possibilities remain; their lower bounds are computed and the choice goes to
assigning facility 1 to location 2. At the third level one needs to define the assignment to
location 3; the lower bounds of the 3 remaining possibilities are computed and facility 5 is
chosen for location 3. The two remaining assignments are then considered explicitly, i.e.,
without going through lower bound computations.
One thus obtains the complete permutation: activity 4 is assigned to location 1, activity 1 to
location 2 and so on, yielding the permutation (4, 1, 5, 2, 3) with cost equal to 50.
In the Ant System the permutation is constructed probabilistically, using the Monte Carlo
method, on the basis both of the η_ij values and of the values of the τ_ij variables, representing
the trace levels. We define in fact τ_ij(t) as the trace intensity (pheromone in the case of real
ants) associated at iteration t to the location i - activity j coupling.
The population has m ants, with k denoting the generic ant (k = 1, ..., m). The probability that the k-th
ant assigns activity j to location i is given by

p_ij^k(t) = [α·τ_ij(t) + (1−α)·η_ij] / Σ_r [α·τ_ir(t) + (1−α)·η_ir]    (9)

if the coupling (i, j) is still allowed by the tabu list of ant k, and p_ij^k(t) = 0 otherwise; the
sum over r runs over the facilities still available to ant k, and 0 ≤ α ≤ 1.
In constructing the permutation we start from the location of index 1 and we assign a facility
to it by choosing probabilistically from all the available facilities; at the second step we
assign to the second location a facility by choosing probabilistically among those that were
not already assigned, and so on. The procedure is repeated for all n locations. The solution
construction is repeated m times, as many times as there are ants in the population.
The parameter α allows the user to define the relative importance of the trace τ_ij(t) with
respect to the desirability η_ij. Thus the probability p_ij^k(t) expresses a compromise
between the
desirability of a coupling (as indicated by the lower bound to the cost of a solution containing
that assignment) and the trace intensity (if there was already a high "passage" of ants on
coupling (i,j) then this coupling is probably very desirable).
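For illustration, the facility choice of formula (9) can be sketched as a roulette-wheel selection; the additive combination of trail and desirability follows the reading of (9) given above, and the function and variable names are illustrative only:

import random

def choose_facility(i, tau, eta, alpha, assigned):
    # Pick a facility j for location i with probability proportional to
    # alpha * tau[i][j] + (1 - alpha) * eta[i][j], restricted to the
    # facilities not yet assigned by this ant (the set `assigned`).
    # tau and eta are square n x n matrices (lists of lists).
    candidates = [j for j in range(len(eta)) if j not in assigned]
    weights = [alpha * tau[i][j] + (1 - alpha) * eta[i][j] for j in candidates]
    r = random.uniform(0.0, sum(weights))
    acc = 0.0
    for j, w in zip(candidates, weights):
        acc += w
        if acc >= r:
            return j
    return candidates[-1]   # numerical safety net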
Trail levels are updated after all the ants have constructed their solutions. The update is made
according to the following equation

τ_ij(t+1) = ρ·τ_ij(t) + Δτ_ij    (10)

where ρ is a coefficient which represents the trace's persistence (1−ρ represents the
evaporation) and

Δτ_ij = Σ_{k=1..m} Δτ_ij^k,    (11)

Δτ_ij^k being the quantity of trace left on the coupling (i, j) by the k-th ant at the end of the
construction of its permutation. The trace's initial intensity, τ_ij(0), can be set to a small and
positive arbitrary value.
The coefficient ρ must be fixed to a value < 1 to avoid an unlimited accumulation of trace.
Concerning the quantity of trace left by the ants, different choices for the calculation of Δτ_ij^k
determine the realization of slightly different algorithms. In the current version of the Ant
System, Δτ_ij^k is given by the value Q/L_k if the k-th ant has chosen coupling (i, j), and by the
value 0 otherwise: Q is the current upper bound, i.e., the value of the best solution found at the
current iteration, while L_k is the value of the objective function obtained by the k-th ant.
In this way the best solutions (those with a correspondingly low L_k value) deposit
more trace on the couplings which determine low values of the objective function.
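A minimal sketch of the update of equations (10)-(11) follows; here sols is assumed to hold, for each ant, the constructed assignment (facility per location) together with its objective value L_k, and Q is the best value found in the current iteration (all names are illustrative):

def update_trails(tau, sols, rho, Q):
    # tau[i][j] := rho * tau[i][j] + sum over ants of dtau^k_ij, where an ant
    # that assigned facility j to location i contributes Q / L_k (eqs. 10-11).
    n = len(tau)
    dtau = [[0.0] * n for _ in range(n)]
    for assignment, L_k in sols:          # assignment[i] = facility in location i
        for i, j in enumerate(assignment):
            dtau[i][j] += Q / L_k
    for i in range(n):
        for j in range(n):
            tau[i][j] = rho * tau[i][j] + dtau[i][j]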
The basic algorithm, which uses the calculation of the bounds and which we will indicate by
AS, is the following.
1. t:=0
Initialize the trace matrix
Calculate the upper and lower bounds for the whole problem and the desirabilities η_ij
Place the m ants on node 1
2. For k:=1 to m
Repeat {for each location}
Choose, with probability given by equation (9), the facility to
assign from those not yet assigned.
Put the chosen facility in the tabu list of the k-th ant
Until the tabu list is full {this cycle is repeated n times}
End-for
For k:=1 to m
Carry the solution to its local optimum and compute L k
{the local search procedure is described at the end of this Section}
Update the best permutation found
End-for
3. For each coupling (i,j) calculate Δτ_ij according to equation (11)
Update the trace matrix according to equation (10)
4. If not (END_TEST)
Empty the tabu lists of all the ants, set t:=t+1 and go to step 2
Else
Print the best permutation and STOP
The END_TEST is usually made either on a maximum number of iterations (steps 2 to 4) or
on a maximum CPU time allowed.
The algorithm's performance depends on the values of the parameters ρ (trace persistence),
α (importance of the trace), and m (number of ants). An experimental analysis
for parameter setting will be presented in Section 3.
One can calculate an estimate of the complexity of the Ant System algorithm. After the
initializations, of complexity O(n^3) as they imply the solution of a linear assignment problem,
one must choose which facility will be assigned to the currently considered location:
probabilities are calculated according to equation (9) and the probabilistic choice is made
among the facilities not yet assigned; the whole has complexity O(n^2). To construct an
entire permutation one must thus perform O(n^3) operations. Each complete iteration (m ants)
thus requires a number of operations O(m·n^3). When all the ants have constructed their
solution the trace matrix must be updated: O(n^2) operations are required for this updating.
The total complexity of an iteration of the algorithm is thus O(m·n^3).
As is the case for most constructive heuristics (see for example GRASP; Li et al., 1994),
efficiency improvements can also be achieved for the Ant System by using a local search
procedure as a standard element of the overall algorithm. We thus designed a two-phase
algorithm. The first phase constructs solutions one element after the other, following the ant
path. When an ant has constructed its basic permutation, a second phase of local search (see
step 2) is activated and the trace is then added to the common data structure.
The local search procedure we implemented is a simple deterministic procedure. The cost of
all the possible exchanges is evaluated starting from the permutation obtained by the ant and
choosing the exchange which most improves the objective function (see (Taillard, 1990) for
an efficient implementation of the variation due to an exchange).
The local search procedure is then the following.
Change:=true
While (change=true) do
Explore the neighborhood of solution s(k) constructed by ant k
and save the best adjacent solution s'(k)
If f(s'(k))<f(s(k))
then s(k):= s'(k)
else change:=false
End-while
The complete exploration of the neighborhood of a solution requires a number of operations
O(n^2): in fact the neighborhood consists of the n(n-1)/2 permutations which can be obtained by
exchanging pairs of elements, and evaluating the cost variation of an exchange, once the relevant
data structures are initialized, requires constant time (Taillard, 1990); the local search step, for
medium-large problems, can nevertheless become rather onerous in terms of computation time.
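A minimal sketch of this best-improvement exchange procedure is given below; for clarity each candidate swap is evaluated by recomputing the whole objective instead of applying the constant-time update of Taillard (1990):

def qap_cost(D, F, pi):   # as in the earlier sketch
    n = len(pi)
    return sum(D[i][h] * F[pi[i]][pi[h]] for i in range(n) for h in range(n))

def best_exchange_local_search(D, F, pi):
    # Repeatedly apply the pairwise exchange that most improves the objective,
    # until no exchange improves it (cf. the pseudocode above).
    pi = list(pi)
    current = qap_cost(D, F, pi)
    while True:
        best_delta, best_swap = 0, None
        for r in range(len(pi) - 1):
            for s in range(r + 1, len(pi)):
                pi[r], pi[s] = pi[s], pi[r]
                delta = qap_cost(D, F, pi) - current
                pi[r], pi[s] = pi[s], pi[r]          # undo the tentative swap
                if delta < best_delta:
                    best_delta, best_swap = delta, (r, s)
        if best_swap is None:
            return pi, current
        r, s = best_swap
        pi[r], pi[s] = pi[s], pi[r]
        current += best_delta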
3. Experimental results
The algorithm presented in this paper was coded in Fortran 77 and tested on a Pentium 166
MHz machine, running under DOS. The computational testing of the new algorithm was
carried out applying the code to standard test problems from the literature, and comparing the
results to those of an established heuristic, running under identical experimental conditions.
Before comparing our code, we had to identify a good parameter setting. As a complete
analysis of the model which suggests the optimum values of the parameters in each situation
has not been developed, we performed several simulations, testing the algorithm on 5
different problems with various values of the control parameters α (importance of the trace)
and ρ (trace persistence coefficient). We also studied how the number m of ants can influence
the overall performance.
The problems chosen for the purpose of setting the algorithm parameters were: the Nugent
problems (Nugent et al., 1968) of dimension from 15 to 30, the Elshafei problem (Elshafei,
1977) of dimension 19, and a Krarup problem of dimension 30.
The optimal solution is known for all the problems up to dimension 20, while for the larger
ones (Nugent 30 and Krarup 30) the best solutions found in the literature, as reported by
Burkard et al. (1994), were considered for the comparison.
We tested various values for each parameter (in a ceteris paribus framework) on 5 different
simulations for each choice.
The values tested were: α ∈ {0.3, 0.5, 0.9} and ρ ∈ {0.7, 0.9, 0.95, 0.99}. We chose α = 0.5 and
ρ = 0.9 as default values.
As well as solving the problems, we were also interested in studying the behavior of the ant
population with regard to a possible "stagnation", a situation in which all the ants reconstruct
the same solution: this situation indicates that the system has stopped exploring new
possibilities and that the best solution found up to that point will probably not be improved
any further. With some parameter values it was observed that, after many cycles, all the ants
made the same couplings despite the algorithm's stochastic nature: this behavior is due to a
much greater trace level on some couplings than on others. From this high trace level it
follows that the probability that an ant chooses a new coupling is very low and thus
stagnation is produced.
The value 0.9 for α (independent of the other parameters) quickly led the ant population to
stagnation around sub-optimal solutions. With parameter α at value 0.5 good solutions
were found for all the problems (about 0% to 3% away from the best solution known),
without observing stagnation of the population: this means that at each cycle new solutions
belonging to a promising subset were tried.
Low values of parameter ρ reduce the algorithm's efficiency: it takes longer to find good
solutions; the best results were obtained for ρ = 0.9.
The number of ants used does not seem to have a decisive influence on the overall
performance, provided that at least as many ants as the dimension (n) of the problem are used.
This agrees with results obtained by a previous version of the Ant System
applied to the TSP (Dorigo et al., 1996).
With the most effective parameter setting (α = 0.5, ρ = 0.9, m = n) the basic algorithm AS found
the optimal or best known solutions of all problems.
Table 1 gives the best known results, the Gilmore-Lawler lower bound, the average of the
objective function of 500 randomly generated solutions, and the best results given by the Ant
System in 10 minute runs.

Table 1. Best known results (Best), Gilmore-Lawler lower bound (GL bound), random
average value (Random), and results obtained by the Ant System (AS) for the
problems examined

            Nugent (15)  Nugent (20)  Nugent (30)  Elshafei (19)  Krarup (30a)
Best        1150         2570         6124         17212548       88900
GL bound    963          2057         4539         11971900       68360
Random      1588         3403         8120         58993040       134641
AS          1150         2570         6124         17212548       88900
To evaluate the performance of the algorithm proposed, we compared it with one of the best
performing metaheuristics so far proposed for the QAP, namely GRASP, in the version
presented by Li et al. (1994). Both the Ant System and GRASP were run for 10 minutes on
each problem instance.
The comparative computational experiments were carried out on problem instances taken
from the QAPLIB library (Burkard et al., 1994), plus one instance (UFFICI), which will be
introduced in Section 4. In order to have a significant test suite, we used all instances of the
QAPLIB database (at the time of writing of this paper) containing problems of dimension 20
to 40: lower dimensions imply too easy problems, bigger dimensions lead to the need of
augmenting the time limit of 10 minutes in order to have meaningful results.
For GRASP, the parameters used were the same as those used in Li et al. (1994); in particular,
MaxIter was set to 2048.
All experiments were run on the same Pentium PC 166 MHz machine mentioned at the
beginning of this Section. The results are presented in Table 2, which shows the following
columns:
# PROBL: problem identifier.
# GL: Gilmore and Lawler bound.
# OPT/BK: optimal or best known solution.
# GRASP-best: best result obtained by GRASP over 5 runs of 10 minutes each.
# GRASP-%error: percentage error of the best solution obtained by GRASP.
# GRASP-t.best: average, over 5 runs, of the CPU time (in seconds) needed by GRASP to
produce its best solutions.
# ANT-best: best result obtained by the Ant System over 5 runs of 10 minutes each.
# ANT-%error: percentage error of the best solution obtained by the Ant System.
# ANT-t.best: average, over 5 runs, of the CPU time (in seconds) needed by the Ant System
to produce its best solutions.
The last two rows of Table 2 present:
AVG average percentage distance from the optimum (or best known) solution
computed over all 45 problems;
MAX maximal percentage distance from the optimum (or best known) solution
computed over all 45 problems.
Table 2. Comparison of results obtained by GRASP and by the Ant System.

PROBL    GL       OPT/BK    GRASP best  GRASP %error  GRASP t.best  ANT best  ANT %error  ANT t.best
CHR22A 5924 6156 6298 2,31 200,66 6156 0,00 314,68
ESC32D 106 200 200 0,00 1,92 200 0,00 2,13
KRA30A 68360 88900 88900 0,00 292,03 88900 0,00 199,06
ROU20 599948 725522 725522 0,00 164,55 725522 0,00 244,54
STE36A 7124 9526 9698 1,81 275,77 9598 0,76 295,23
AVG 0,66 0,27
Table 2 shows that, under the mentioned experimental conditions, the Ant System has a
better performance, in terms of quality of the best solution found, than GRASP on the
problems tested: it finds a greater number of best known solutions, and it has a smaller average
percentage error and a smaller maximum error. On no problem did GRASP find a better
solution than the one found by the Ant System. Moreover, the time needed to
find its best solution is on the average slightly smaller for the Ant System (138.99 sec.) than
for GRASP (143.83 sec.), even though on individual problems GRASP could be more
efficient than the Ant System.
4. A real-world testcase
In this section we propose a real assignment problem, which can be modeled as a QAP of order
33. The problem is the optimal allocation of services in the offices of a Milan multinational
company, originally described in (Maniezzo et al., 1994).
The offices available are clustered into units, which are the elements of the three following
buildings:
I. TOWER: a building on six identical floors, each divided into three units, numbered from
1 to 18.
II. BUILDING A: a three-floor construction near to the TOWER building, with direct
pedestrian connections at the level of the first two floors (as well as the outside passage)
and with three units per floor, numbered from 19 to 27.
III. BUILDING B: a construction with several floors, of which the first three are available
for the company in question, detached from the previous buildings and connected to them
by footpaths. Two units are available on each usable floor, numbered from 28 to 33. The
whole is shown in Figure 1.
Figure
1. Position of the units in the three buildings available
The distance matrix is made of the times (in seconds) spent by an employee to move from
location i to location h (i, h = 1, ..., 33).
For simplicity, the distances between the units on the various floors of each building are
considered as identical, even though sometimes there are mandatory paths which may cause
small differences. The distances are estimated on the basis of the conditions of normal
activity of the offices themselves (waiting times for the service lifts and/or any use of
alternative routes, walkways or stairs).
As "flow between activities" we decided to use the number of personal contacts necessary on
average in a week by the employees of various offices, weighted according to the
qualification of the person involved (the employees were assigned weight 1 and the managers
weight 2), thus trying to correlate the movements to the effective burden in terms of working
costs. The matrix of the flows between the various activities was obtained by quali-
quantitative indications obtained from all the managers of the various services. The distance
and flow matrices are reported in the Appendix.
The objective function of the permutation (3, 4, 5, 14, 16, 17, 25, 26, 15, 24, 8, 9, 10, 2, 11, ...)
corresponding to the current location of the offices in the units was initially calculated: it
produces a value of 438114 man-sec per week (≈ 121.7 man-hours).
If this datum is compared with the average value of a random arrangement (565541 man-sec,
calculated as the average between 100 permutations generated at random), one can conclude
that the actual logistic situation allows a "saving" of about 22.5% as compared to a random
allocation.
The best permutation found with the Ant System algorithm, as reported in the last row of
Table 2, has a value of 339416 man-sec (≈ 94.3 man-hours). This solution is 22.5% better
than the current logistic situation (obviously this datum must be taken with due care, as the
current assignment derives not only from cost considerations, but also from other less
quantifiable objectives such as personal preferences, prestige of a location, etc.).
5. Conclusions
In this work we presented a distributed heuristic algorithm, the Ant System, applied to the
Quadratic Assignment Problem.
The main point in each distributed system is the definition of the communication procedure
among agents. In our algorithm a set of ants communicates by modifying the problem's
representation, as at each step of the processing each ant leaves a sign of its activity which
changes the probability with which the decisions will be made in future. The idea is that if an
ant in a given state must choose between different options and, having made a choice, that
choice turns out to be particularly good, then in the future that choice must appear more
desirable whenever the state and the options are the same.
The ants are given a heuristic to guide the initial steps of the computation process, when the
information on the problem structure given by the trace has not yet accumulated. This initial
heuristic then automatically loses importance (by means of trace accumulation) when the
experience acquired by the ants, saved in the trace matrix, grows.
The result presented in this work is the use of an autocatalytic process as a method for
optimization and learning. The autocatalytic process of an individual ant would almost
always converge very quickly to a sub-optimum solution; the interaction of many
autocatalytic processes can instead lead to convergence towards a region of the space
containing good solutions, so that a very good solution can be found (without however being
stuck on it). In other words, the ant population does not converge on a single solution, but on
a set of (good) solutions; the ants continue their search to further improve the best solution
found.
The results obtained showed the Ant System's competitive performance on all test problems.
--R
Quadratic Assignment Problems
Distributed Optimization by Ant Colonies
An Investigation of some Properties of an Ant Algorithm
Probabilistic Behavior in Ants: a Strategy of
Hospital Layout as a Quadratic Assignment Problem
Optimal and suboptimal algorithms for the quadratic assignment problem
Assignment Problems and the Location of Economic Activities
Computer Aided Layout Design
The quadratic assignment problem
Il sistema formiche applicato al problema dell'assegnamento quadratico.
Algodesk: an Experimental Comparison of Eight Evolutionary Heuristics Applied to the Quadratic Assignment Problem
Robust Taboo Search for the Quadratic Assignment Problem
Appendix: distance and flow matrices for the Italian company problem
--TR
--CTR
Habibeh Abbasi , Abbas Afshar , Saeed Alimohammadi, Optimum design of water conveyance system by ant colony optimization algorithms, Proceedings of the 5th WSEAS/IASME International Conference on Systems Theory and Scientific Computation, p.232-237, September 15-17, 2005, Malta
Shu-Chuan Chu , John F. Roddick , Jeng-Shyang Pan, Ant colony system with communication strategies, Information SciencesInformatics and Computer Science: An International Journal, v.167 n.1-4, p.63-76, 2 December 2004
Yan Yang , Mohamed S. Kamel, An aggregated clustering approach using multi-ant colonies algorithms, Pattern Recognition, v.39 n.7, p.1278-1289, July, 2006
Shiyan Hu, Key-dependant decomposition based image watermarking, Proceedings of the 12th annual ACM international conference on Multimedia, October 10-16, 2004, New York, NY, USA
Daniel Merkle , Martin Middendorf, Fast Ant Colony Optimization on Runtime Reconfigurable Processor Arrays, Genetic Programming and Evolvable Machines, v.3 n.4, p.345-361, December 2002
Carlos M. Fernandes , Agostinho C. Rosa , Vitorino Ramos, Binary ant algorithm, Proceedings of the 9th annual conference on Genetic and evolutionary computation, July 07-11, 2007, London, England
Wenbing Tao , Hai Jin , Liman Liu, Object segmentation using ant colony optimization algorithm and fuzzy entropy, Pattern Recognition Letters, v.28 n.7, p.788-796, May, 2007
Stefan Janson , Daniel Merkle , Martin Middendorf , Hossam Elgindy , Hartmut Schmeck, On Enforced Convergence of ACO and its Implementation on the Reconfigurable Mesh Architecture Using Size Reduction Tasks, The Journal of Supercomputing, v.26 n.3, p.221-238, November
Martin Middendorf , Frank Reischle , Hartmut Schmeck, Multi Colony Ant Algorithms, Journal of Heuristics, v.8 n.3, p.305-320, May 2002
Stephen Gilmour , Mark Dras, A two-pronged attack on the dragon of intractability, Proceedings of the Twenty-eighth Australasian conference on Computer Science, p.183-192, January 01, 2005, Newcastle, Australia
Matteo Golfarelli , Vittorio Maniezzo , Stefano Rizzi, Materialization of fragmented views in multidimensional databases, Data & Knowledge Engineering, v.49 n.3, p.325-351, June 2004
Shxyong Jian Shyu , B. M. T. Lin , Tsung-Shen Hsiao, Ant colony optimization for the cell assignment problem in PCS networks, Computers and Operations Research, v.33 n.6, p.1713-1740, June 2006
Zne-Jung Lee , Chou-Yuan Lee, A hybrid search algorithm with heuristics for resource allocation problem, Information SciencesInformatics and Computer Science: An International Journal, v.173 n.1-3, p.155-167,
Anne Wade , Said Salhi, An ant system algorithm for the mixed vehicle routing problem with backhauls, Metaheuristics: computer decision-making, Kluwer Academic Publishers, Norwell, MA, 2004
Alan R. McKendall, Jr. , Jin Shang, Hybrid ant systems for the dynamic facility layout problem, Computers and Operations Research, v.33 n.3, p.790-803, March 2006
M. Solimanpur , Prem Vrat , Ravi Shankar, An ant algorithm for the single row layout problem in flexible manufacturing systems, Computers and Operations Research, v.32 n.3, p.583-598, March 2005
Yi-Liang Xu , Meng-Hiot Lim , Yew-Soon Ong , Jing Tang, A GA-ACO-local search hybrid algorithm for solving quadratic assignment problem, Proceedings of the 8th annual conference on Genetic and evolutionary computation, July 08-12, 2006, Seattle, Washington, USA
Nicolas Meuleau , Marco Dorigo, Ant colony optimization and stochastic gradient descent, Artificial Life, v.8 n.2, p.103-121, July 2002
Christine Solnon , Serge Fenet, A study of ACO capabilities for solving the maximum clique problem, Journal of Heuristics, v.12 n.3, p.155-180, May 2006
Tseng , Shyi-Ching Liang, A Hybrid Metaheuristic for the Quadratic Assignment Problem, Computational Optimization and Applications, v.34 n.1, p.85-113, May 2006
Kuo-Ching Ying , Ching-Jong Liao, An ant colony system for permutation flow-shop sequencing, Computers and Operations Research, v.31 n.5, p.791-801, April 2004
Subrata , Albert Y. Zomaya, A Comparison of Three Artificial Life Techniques for Reporting Cell Planning in Mobile Computing, IEEE Transactions on Parallel and Distributed Systems, v.14 n.2, p.142-153, February
Antonella Carbonaro , Vittorio Maniezzo, The Ant Colony Optimization paradigm for combinatorial optimization, Advances in evolutionary computing: theory and applications, Springer-Verlag New York, Inc., New York, NY,
Marco Dorigo , Christian Blum, Ant colony optimization theory: a survey, Theoretical Computer Science, v.344 n.2-3, p.243-278, 17 November 2005
Urszula Boryczka , Mariusz Boryczka, Multi-cast ant colony system for the bus routing problem, Metaheuristics: computer decision-making, Kluwer Academic Publishers, Norwell, MA, 2004
Marco Dorigo , Gianni Di Caro , Luca M. Gambardella, Ant algorithms for discrete optimization, Artificial Life, v.5 n.2, p.137-172, April 1999
Geoff Nitschke, Emergence of Cooperation: State of the Art, Artificial Life, v.11 n.3, p.367-396, September 2005 | evolutionary computation;combinatorial optimization;knowledge pooling;distributed algorithms;ant system;heuristic algorithms;quadratic assignment problem |
628025 | Finding Interesting Patterns Using User Expectations. | AbstractOne of the major problems in the field of knowledge discovery (or data mining) is the interestingness problem. Past research and applications have found that, in practice, it is all too easy to discover a huge number of patterns in a database. Most of these patterns are actually useless or uninteresting to the user. But due to the huge number of patterns, it is difficult for the user to comprehend them and to identify those interesting to him/her. To prevent the user from being overwhelmed by the large number of patterns, techniques are needed to rank them according to their interestingness. In this paper, we propose such a technique, called the user-expectation method. In this technique, the user is first asked to provide his/her expected patterns according to his/her past knowledge or intuitive feelings. Given these expectations, the system uses a fuzzy matching technique to match the discovered patterns against the user's expectations, and then rank the discovered patterns according to the matching results. A variety of rankings can be performed for different purposes, such as to confirm the user's knowledge and to identify unexpected patterns, which are by definition interesting. The proposed technique is general and interactive. | Introduction
In knowledge discovery, techniques are constantly being developed and improved for
discovering various types of patterns in databases. While these techniques were shown to be
useful in numerous applications, new problems have also emerged. One of the major problems
is that, in practice, it is all too easy to discover a huge number of patterns in a database. Most
of these patterns are actually useless or uninteresting to the user. But due to the huge number
of patterns, it is difficult for the user to comprehend and to identify those patterns that are
interesting to him/her. To prevent the user from being overwhelmed by the large number of
patterns, techniques are needed to rank them according to their interestingness.
So far, a number of papers have discussed the interestingness issue [e.g., 4, 8, 11, 12, 13, 14].
The main factors that contribute to the interestingness of a discovered pattern have also been
proposed. They include: coverage, confidence, strength, statistical significance, simplicity,
unexpectedness, actionability [e.g., 4, 8, 11, 12]. The first five factors are called objective
measures [15]. They can be handled with techniques requiring no application and domain
knowledge. They have been studied extensively in the literature [e.g., 12, 8]. The last two
factors are called the subjective measures [11, 15], which measure the subjective
interestingness of a pattern to the user. They are defined as follows:
1. Unexpectedness: Patterns are interesting if they are unexpected or previously unknown to
the user [4].
2. Actionability: Patterns are interesting if the user can do something with them to his/her
advantage [4,11].
It has been noted in [13, 11] that although objective measures are useful in many respects,
they are insufficient in determining the interestingness of the discovered patterns. Subjective
measures are needed. Subjective interestingness is the focus of this paper. The proposed
technique is for ranking the discovered patterns according to their subjective interestingness. It
assumes that some other techniques have performed the pattern discovery task and have
filtered out those patterns that do not meet the objective requirements.
To design a general ranking technique(s) using subjective interestingness measures is a difficult
task. Some of the reasons are: (1) in different domains (or applications), people are interested
in different things; (2) given the same database and the patterns discovered, different users
may be interested in different subsets of the patterns; (3) even for the same user, at different
points in time, his/her interests may also vary due to the specific situation he/she is in at the
particular moment. In order to identify and/or to rank the discovered patterns, it is obvious
that the system must have a great deal of knowledge about the database, the application
domain and the user's interests at a particular time.
To date a number of studies [e.g., 4, 8, 11, 12, 156] have been conducted on the subjective
interestingness issues and some systems have also been built [8, 11] with interestingness
filtering components to help users focus on the useful patterns. However, these systems mostly
handle the subjective interestingness in application/domain-specific fashions [15].
In this paper, we propose a general approach to determine the subjective interestingness
(unexpectedness and actionability) of a discovered pattern. The technique is characterized by
asking the user to specify a set of patterns according to his/her previous knowledge or
intuitive feelings. This specified set of patterns is then used by a fuzzy matching algorithm to
match and rank the discovered patterns. The assumption of this technique is that some amount
of domain knowledge and the user's interests are implicitly embedded in his/her specified
patterns. In general, we can rank the discovered patterns according to their conformities to the
user's knowledge or their unexpectedness, or their actionabilities. With such rankings, a user
can simply check the few patterns on the top of the list to confirm his/her intuitions (or
previous knowledge), or to find those patterns that are against his/her expectation, or to
discover those patterns that are actionable.
The proposed approach is simple, effective and highly interactive. Though we do not claim
that this technique solves the interestingness problem completely, we believe this is a major
step towards the right direction.
2. Problem Definition
From a user's point of view, he/she wants to find patterns from one or more databases,
denoted by D, that are useful or interesting to him/her. From a discovery system's point of
view, a technique Q is used to discover all the patterns from D that are discoverable by Q. Let
B (Q,D) be the set of patterns discovered by Q on D. We denote I (Q,D) as the set of interesting
patterns in B (Q,D) . Thus, I (Q,D) - B (Q,D) . Three points to be noted:
. I (Q,D) may not be the complete set of interesting patterns that can be discovered from D. It
is simply the set of interesting patterns that can be discovered by technique Q on D.
. Not all patterns in I (Q,D) are equally interesting. Different patterns may have different
degrees of interestingness to the user.
. I (Q,D) may be a dynamic set in the sense that the user may be interested in different things at
different points in time. The degree of interestingness of each pattern may also vary.
In general, B (Q,D) is much larger than I (Q,D) . This implies that many patterns discovered by Q are
uninteresting or useless. It is desirable that a system only gives the user the set of interesting
patterns, I (Q,D) , and ranks the patterns in I (Q,D) according to their degrees of interestingness.
Hence, we define the interestingness problem as follows:
Given B (Q,D) , the set of patterns discovered by Q on D, determine I (Q,D) and rank
the patterns in I (Q,D) according to their degrees of interestingness to the user at the
particular point in time.
In practice, this is difficult to achieve because the definition of interestingness is domain (or
application) dependent and also user and his/her situation dependent. To simplify our task, we
only rank all the patterns discovered (i.e., B (Q,D) ). Our assumption is that I (Q,D) will be a small
subset of the top-ranked patterns. The final identification task is left to the user.
So, how can a system know what is useful in a domain and what is considered interesting at a
particular moment to a user? What are the criteria used for ranking the discovered patterns?
We believe that our proposed technique is able to provide a partial answer to these problems.
3. The Proposed Technique
This section describes the proposed method. Slightly different procedures are used for finding
unexpected patterns and for finding actionable patterns.
3.1 Finding unexpected patterns and confirming user's knowledge
Patterns are unexpected if they are previously unknown to the user [4]. Unexpected patterns
are, by definition, interesting because they provide new information to the user [4, 15]. Apart
from finding unexpected patterns, sometimes the user also wishes to know whether his/her
existing knowledge about the database is correct. For these two purposes, the proposed
method has the following two steps:
1. The user is asked to provide a set of patterns E (with the same syntax as the discovered
patterns) that he/she expects to find in the database D based on his/her previous
knowledge or intuitive feelings. These user patterns are regarded as fuzzy patterns (also
called user-expected patterns), which are described with the help of fuzzy linguistic
variables [17].
A fuzzy linguistic variable is defined as a quintuple (x, T(x), U, G, M̃) in which x is the
name of the variable; T(x) is the term set of x, that is, the set of names of linguistic values
of x, with each value being a fuzzy variable denoted generally by X and ranging over a
universe of discourse U; G is a syntactic rule for generating the names, X, of values of x;
and M̃ is a semantic rule for associating with each value X its meaning, M̃(X), which is a
fuzzy subset of U. A particular X is called a term.
For example, if speed is interpreted as a linguistic variable, then its term set T(speed) could be
{slow, moderate, fast, ...}. M̃(X) gives a meaning to each term. For example, M̃(slow) may be
defined as follows:

M̃(slow) = {(u, μ_slow(u)) | u ∈ U},

where μ_slow(u) denotes the degree of membership of u in the term slow.
Thus, in this step, the user needs to input (1) his/her expected patterns, and (2) the fuzzy
set M̃(X) for each term X used in his/her expected patterns (an illustrative sketch of such a
fuzzy set, and of the ranking performed in step 2, is given after step 2 below).
2. The system then matches (in a number of ways) each discovered pattern in B (Q,D) against
the patterns in E using a fuzzy matching technique. The discovered patterns are then
ranked according to their degrees of match with E.
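As an illustration of these two steps, the sketch below shows one possible way to encode a user-supplied fuzzy set and to rank the discovered patterns by their degrees of match; the piecewise-linear membership function, its breakpoints, and the use of the maximum over the expected patterns to aggregate the per-pattern degrees of match are illustrative assumptions, not prescriptions of the method:

def ramp_up(a, b):
    # Membership function that is 0 below a, 1 above b and linear in between --
    # one simple way a user could encode a term such as OLD over ages.
    def mu(u):
        if u <= a:
            return 0.0
        if u >= b:
            return 1.0
        return (u - a) / (b - a)
    return mu

mu_old = ramp_up(45.0, 65.0)     # illustrative breakpoints, not from the paper

def rank_by_match(discovered, expected, match):
    # Rank discovered patterns by their degree of match with the set of expected
    # patterns; `match(b, e)` stands for the fuzzy matching routine applied to a
    # discovered pattern b and an expected pattern e.  The per-pattern degrees
    # are aggregated here with max, one plausible choice.
    scored = [(max(match(b, e) for e in expected), b) for b in discovered]
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored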
Note that, though the proposed technique is application/domain independent, it is not
discovery technique independent. This is because different knowledge discovery techniques
discover different types of patterns (e.g., classification patterns, association patterns, sequence
patterns, time series patterns, etc.). For different pattern types, fuzzy pattern matching
methods may not be the same. For example, the matching technique for sequence patterns and
the matching technique for classification patterns (or rules) should be different.
In short, the matching algorithm must be customized for different pattern types. Thus, each
general discovery tool can have a suitable implementation of the proposed method, which may
be used for any domain and any application. Section 4 and 5 describe such an implementation.
Now let us consider an example of how the technique works. Suppose we have the following
set of discovered (classification) patterns from an accident database ("," denotes "and").
1. If P_Age > 50,
2. If P_Age > 65, killed
3. If P_Age > 50,
The user-specified expected pattern is:
Before matching can be performed, the system must know how to interpret the semantic
meanings of "OLD", "BAD_VISIBILITY" and "BAD_ACCIDENT." This is achieved by
asking the user to provide the fuzzy sets associated with these terms. A graphical user-interface
has been built to make this process of supplying the fuzzy sets easy and simple.
Having specified the semantic meanings, a matching algorithm is then executed to determine
the degrees of match between the discovered patterns and the user-specified expected pattern.
Different ranking algorithms are used for different purposes. If our purpose is to confirm a
hypothesis, the system will rank the discovered patterns such that the pattern with the highest
degree of match is ranked first. The results of such a ranking could be as follows:
A1. If P_Age > 65, killed
A2. If P_Age > 50,
A3. If P_Age > 50,
This confirms the user's belief that an old person involved in an accident at some bad location
will result in a serious injury. On the other hand, if our purpose is to find unexpected patterns
in the sense of an unexpected consequent, then a different ranking will result as shown below:
B1. If P_Age > 50,
B2. If P_Age > 65, killed
B3. If P_Age > 50,
This shows that pattern B1 is against the user's expectation because instead of a serious injury,
the old person suffers a slight injury. It is important to note that simply reversing of the order
for conformity is in general not the right method for ranking patterns according to their
unexpectedness. In fact, the unexpectedness of a pattern could be described in a number of
ways. Details on this will be discussed in Section 4.
From the example, we can see that by determining the degrees of match between the
discovered patterns and the user-specified expected patterns in various ways, and ranking the
patterns accordingly, it is possible to help the user focus on the appropriate subsets of patterns
based on his/her purpose.
It can also be observed that the working of this method depends on the following assumption.
Assumption: The user knows the database and has some intuitive feelings or previous
knowledge about the kinds of pattern that might be found in the database.
We believe this assumption is realistic because in real life, after working on a particular domain
and its database for some time, the user generally develops good intuitive sense regarding the
kinds of patterns that can be found in the database. We have tested this on our industrial
partner. Even if the user is new to the database, database visualization tools are available to
help the user obtain a good initial feel of the kinds of patterns in the database. With this as a
starting point, the user can then incrementally add more patterns to aid in the ranking process.
It is important to note, however, that this method does not require the user to provide the
complete set of his/her expected patterns at the beginning, which is quite difficult. Due to the
interactive nature of the technique, he/she may try something simple at the beginning and
slowly build up the set of expected patterns.
3.2 Finding actionable patterns
Patterns are actionable if the user can do something with them to his/her advantage [4, 11].
The key here is the usefulness to the user. It has been recognized that many unexpected
patterns are also actionable. Hence, the method presented above, in some sense, is also able to
find some actionable patterns. However, for specific cases in which the user knows what are
the possible actions that he/she can take and in what situations to take them, a variation of the
above method is proposed to identify actionable patterns. This method consists of three steps.
1. The user specifies all the possible actions Y that he/she (or his/her organization) can take.
2. For each action Y q - Y, the user specifies the situations under which he/she is likely to take
the action. The situations are represented by a set of fuzzy patterns Act q (similar to the
expected patterns in E). The patterns in Act q are called the user-specified action patterns.
3. The system then matches each discovered pattern in B (Q,D) against the patterns in Act q using
a fuzzy matching technique. The results of this matching are used to rank the discovered
patterns in B (Q,D) . For each action Y q (or Act q ), a separate ranking will be produced.
Note that:
. In finding actionable patterns, the user does not provide what he/she expects as for finding
unexpected patterns, but rather the situations under which he/she may take some actions.
These situations may or may not be what the user expects.
. This technique associates patterns with actions to be taken in response to the patterns.
Thus, more information is given to the user, i.e., not only the actionable patterns, but also
the actions to be taken.
Let us illustrate this method with an example. Considering the following discovered patterns:
1. If P_Age > 50,
2. If killed
3. If P_Age > 50,
In this example, we consider two actions:
Action 1. Educate people to be more careful at locations with good visibility. Assume there is
only one user-specified action pattern for which action 1 is to be taken:
If
Action 2. Install speed cameras at locations with bad visibility. Again assume there is only one
user-specified action pattern for which action 2 is to be taken.
If
With these and the user specified fuzzy sets for GOOD_VISIBILITY, BAD_VISIBILITY,
FAST, SLIGHT and BAD_ACCIDENT, the ranking results are:
Action 1: Rank 1: pattern 1 (If P_Age > 50, ...)
Rank 2: pattern 3 (If P_Age > 50, ..., T-junct then ...)
Rank 3: pattern 2 (If killed ...)
Action 2: Rank 1: pattern 2 (If killed ...)
Rank 2: pattern 3 (If P_Age > 50, ..., T-junct then ...)
Rank 3: pattern 1 (If P_Age > 50, ...)
This ranking helps the user to identify those patterns that support the actions. With this, the
user may decide to educate old people to be more careful, and/or to install speed cameras at
bends. Note that the actions themselves are not ranked according to the possible benefits they
may bring to the user, which would be more helpful. This will be part of our future work.
4. An Implementation of the Proposed Technique
In this and the next section, we will describe a particular implementation of the proposed
technique. The form of the patterns assumed in this implementation is as follows:
If P 1 , P 2 , ..., P n then C
where "," means "and", and P i is a proposition of the following format:
attr OP value
where attr is the name of an attribute in the database, value is a possible value for the
attribute attr, and OP is the operator relating attr and value.
C is the consequent, which has the same format as P i . However, its attr does not have to be an
attribute name in the database. For example, in the C4.5 system [14], Class is used for the
consequent. The above representation is very common for classification patterns (rules) and
association patterns. We now present the computational formulas used in the methods
discussed in Section 3.1 and 3.2.
4.1. Confirming user knowledge and finding unexpected patterns
Before presenting the detailed computation, we first define some notations. Let E be the set of
user-expected patterns and B (previously B (Q,D) ) be the set of discovered patterns. We denote
W i as the degree of match of a discovered pattern B i with respect to the set of expected
patterns E. We denote w (i,j) as the degree of match between a discovered pattern B i and an
expected pattern E j . Ranking of the discovered patterns is performed by sorting them in a
decreasing order according to their W i (i.e., the pattern with the highest W i will be on top). Let
us now discuss the computation of w (i,j) and W i . w (i,j) is computed in two steps:
1. Attribute name match - The attribute names of the conditions of B i and E j are compared.
The set of attribute names that are common to both the conditions of B i and E j is denoted
as A (i,j) . Then, the degree of attribute name match of the conditional parts, denoted as L (i,j) ,
is computed as follows:
L (i,j) = |A (i,j) | / max(|e j |, |b i |)
where |e j | and |b i | are the numbers of attribute names in the conditional parts of E j and B i
respectively, and |A (i,j) | is the size of the set A (i,j) .
Likewise the attribute names of the consequents of B i and E j are also compared. R (i,j)
denotes the degree of match for the consequent parts. R (i,j) is either 0 or 1. This is because
we assume that there is only one consequent for each pattern. Hence, either the
consequent attributes of the two patterns are the same (R (i,j) = 1) or different (R (i,j) = 0).
For example, suppose the expected pattern E j is "If Weight ... then Health_con = underweight"
and the discovered pattern B i is "If Weight ..., ... then Health_con = fit".
The set of common attributes in the conditional parts is A (i,j) = {Weight}. The consequent
parts have the same attribute Health_con. Hence, R (i,j) = 1. (A sketch of this attribute name
matching step is given in the code example after this two-step list.)
2. Attribute value match - Once an attribute of B i and E j matches, the two propositions are
compared taking into consideration both the attribute operators and attribute values. We
denote V (i,j)k the degree of value match of the kth matching attribute in A (i,j) , and Z (i,j) the
degree of value match of the consequents. The computation of the two values will be
presented in the next section.
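To make step 1 concrete, the following is a small illustrative sketch (not the authors' implementation); the representation of a pattern as a list of condition attribute names plus a single consequent attribute, and the hypothetical attribute "Exercise", are assumptions made only for this example.

```python
def attribute_name_match(b_cond_attrs, e_cond_attrs, b_cons_attr, e_cons_attr):
    """Compute A(i,j), L(i,j) for the conditional parts and R(i,j) for the consequents."""
    common = set(b_cond_attrs) & set(e_cond_attrs)               # A(i,j)
    L = len(common) / max(len(e_cond_attrs), len(b_cond_attrs))  # |A(i,j)| / max(|e_j|, |b_i|)
    R = 1 if b_cons_attr == e_cons_attr else 0                   # single consequent assumed
    return common, L, R

# Example similar to the one in the text: both consequents use Health_con and
# Weight is the only shared condition attribute ("Exercise" is hypothetical).
common, L, R = attribute_name_match(["Weight", "Exercise"], ["Weight"],
                                    "Health_con", "Health_con")
# common == {"Weight"}, L == 0.5, R == 1
```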
Here, we present the computation of w (i,j) and W i . As mentioned in Section 3, the proposed
method can be used for confirming user's hypothesis and also for finding unexpected patterns.
For these two purposes, different formulas are used for computing w (i,j) and W i . Note that we
do not claim these formulas are optimal. But a large number of experiments have shown that
they produce rankings that closely model human intuition of subjective interestingness.
1. Confirming user's knowledge
w (i,j) combines two components:
L (i,j) · (Σ k V (i,j)k ) / |A (i,j) |
- computes the degree of match of the conditional parts of B i and E j ;
R (i,j) · Z (i,j)
- computes the degree of match of the consequent parts of B i and E j .
w (i,j) gives the degree of match of pattern B i with pattern E j .
The formula for computing W i , which is the degree of match of the discovered pattern B i
with respect to the set of expected patterns E, combines the w (i,j) values of B i over all
expected patterns E j in E (see also Figure 1).
Figure 1. Computing W i
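As an illustrative sketch only: the exact way the two components and the per-pattern scores are combined is not fully recoverable from the text above, so the simple average and the maximum used below are assumptions.

```python
def conformity_w(L, V_values, R, Z):
    """w(i,j): degree of match between a discovered pattern and one expected pattern.
    Assumes the condition-part and consequent-part matches are simply averaged."""
    cond_match = (L * sum(V_values) / len(V_values)) if V_values else 0.0
    cons_match = R * Z
    return (cond_match + cons_match) / 2.0

def conformity_W(w_values):
    """W_i: match of a discovered pattern against the expected set E, taken here
    as the best w(i,j) over all expected patterns (an assumption)."""
    return max(w_values) if w_values else 0.0
```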
2. Finding unexpected patterns
For this purpose, the situation is more complex. We can have a number of ways to rank
the patterns according to the types of unexpectedness.
Unexpected consequent: The conditional parts of B i and E j are similar, but the
consequents of the two patterns are far apart. Two types of ranking are possible
depending on the user's interest.
(a) Contradictory consequent (patterns with R (i,j) = 1 will be ranked higher).
Here w (i,j) combines the conditional-part match L (i,j) · (Σ k V (i,j)k ) / |A (i,j) | with the
consequent term R (i,j) · (1 − Z (i,j) ).
Explanation: Since this ranking is to find those patterns whose conditional parts
are similar, but the consequents are contradictory, we need to give a higher w (i,j)
value for B i whose consequent part has the same attribute name as that of E j but a
contradictory value; hence the expression R (i,j) · (1 − Z (i,j) ).
W i is computed from the w (i,j) values of B i over all expected patterns E j in E.
(b) Unanticipated consequent (patterns with R (i,j) = 0 will be ranked higher).
Two quantities are used: w a(i,j) , which combines the conditional-part match
L (i,j) · (Σ k V (i,j)k ) / |A (i,j) | with a consequent term that is larger when R (i,j) = 0, and
w b(i,j) , which measures how well B i matches E j as a whole, as in the conformity case.
Explanation: A higher value is given to w a(i,j) when the attribute names of the
consequent parts of B i and E j do not match. However, B i may match well with
another expected pattern E r . Thus, w b(i,j) is needed to take this into
consideration.
W i is then computed from the w a(i,j) and w b(i,j) values over the expected patterns in E.
Unexpected reason: The consequents are similar but the conditional parts of B i and
E j are far apart. Again two types of ranking are possible.
(a) Contradictory conditions (patterns with |A (i,j) | > 0 will be ranked higher)
Here w (i,j) combines the consequent-part match R (i,j) · Z (i,j) with a conditional-part
term based on L (i,j) and the V (i,j)k values.
Explanation: Since this ranking is to find those patterns whose consequents match
well, but the conditional parts are contradictory, we need to give a higher w (i,j)
value for B i whose conditional part has good attribute name match with E j .
Therefore, we have the expression L (i,j) · (Σ k (1 − V (i,j)k )) / |A (i,j) |.
W i is computed from the w (i,j) values of B i over all expected patterns E j in E.
(b) Unanticipated conditions (patterns with |A (i,j) | = 0 will be ranked higher)
Two quantities are used: w a(i,j) , which combines the consequent-part match R (i,j) · Z (i,j)
with a conditional-part term that is larger when few of the condition attribute names of B i
appear in E j , and w b(i,j) , which measures how well B i matches E j as a whole, as in the
conformity case.
Explanation: A higher value is given to w a(i,j) when the attribute names of the
conditional parts of B i and E j do not match well. However, B i may match well
with another expected pattern E r . Thus, w b(i,j) is needed to take this into
consideration.
W i is then computed from the w a(i,j) and w b(i,j) values over the expected patterns in E.
Totally unexpected patterns: Both conditional and consequent parts of B i and E j are
very different in the sense that the attribute names in B i are unexpected.
Here w (i,j) is larger when the attribute names of B i , in both the conditional and the
consequent parts, have little overlap with those of E j .
Explanation: Since this ranking is to find those patterns whose attribute names (both
in the conditional parts and consequent parts) have little (or no) intersection with
the set of attribute names mentioned in E, we give a higher w (i,j) value for B i whose
attribute names do not match well with those of E j .
W i is computed from the w (i,j) values of B i over all expected patterns E j in E.
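To show how any of these w (i,j) variants turns into a ranking, here is a small sketch (not the authors' code); taking W i as the best per-pattern score over E is one simple choice and is an assumption.

```python
def rank_patterns(discovered, expected, w_fn):
    """Sort discovered patterns B in decreasing order of W_i, where w_fn(b, e) is
    any of the matching functions of this section and W_i = max over E (assumed)."""
    scored = []
    for b in discovered:
        W_i = max((w_fn(b, e) for e in expected), default=0.0)
        scored.append((W_i, b))
    scored.sort(key=lambda t: t[0], reverse=True)   # highest degree of match first
    return scored
```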
4.2 Finding actionable patterns
For finding actionable patterns, the notations in Section 4.1 still apply except E, which is
replaced with Act q . The formula for matching the discovered pattern B i against a user-specified
action pattern in Act q is the same as the one for confirming the user's knowledge in Section
4.1. However, the computations of V (i,j)k and Z (i,j) are slightly different from those used in
Section 4.1, which will be discussed in the next section.
The conformity matching formula of Section 4.1 is used to compute w (i,j) here.
Explanation: The matching formula for conformity is used because the purpose of matching
here is the same as the matching for confirming the user's knowledge.
We denote W (i,q) as the degree of match of B i with respect to the set of action patterns Act q .
W (i,q) is computed from the w (i,j) values of B i over the patterns in Act q , in the same way as
W i in Section 4.1.
The discovered patterns in B are ranked for each action Y q ∈ Y according to their W (i,q) values.
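An illustrative sketch of the per-action ranking (not the authors' code; representing the actions as a mapping from action name to its pattern set Act q is an assumption):

```python
def rank_for_actions(discovered, actions, w_fn):
    """actions: dict mapping each action Y_q to its list of action patterns Act_q.
    Returns one ranking of the discovered patterns per action, by W(i,q)."""
    rankings = {}
    for action, act_patterns in actions.items():
        scored = []
        for b in discovered:
            W_iq = max((w_fn(b, a) for a in act_patterns), default=0.0)
            scored.append((W_iq, b))
        scored.sort(key=lambda t: t[0], reverse=True)
        rankings[action] = scored
    return rankings
```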
5. Fuzzy Matching of Attribute Values
We now discuss how to compute V (i,j)k and Z (i,j) . For this computation, we need to consider
both the attribute values and the operators. In addition, the attribute value types (discrete or
continuous) are also important. Since the computations of V (i,j)k and Z (i,j) are the same, it
suffices to just consider the computation of V (i,j)k , the degree of match for the kth matching
attribute in A (i,j) . Two cases are considered: the matching of discrete attribute values and the
matching of continuous attribute values.
5.1. Matching of discrete attribute values
In this case, the semantic rule for each term (X) used in describing the user-specified patterns
must be properly defined over the universe (or domain) of the discrete attribute. We denote U k
as the set of possible values for the attribute. For each u ∈ U k , the user needs to input the
membership value of u in X, denoted as μ X (u). In the discrete case, the formulas for computing
V (i,j)k and Z (i,j) for finding unexpected patterns and for finding actionable patterns are the same.
Let us look at an example. The user gives the following pattern:
reject
Here, poor is a fuzzy term. To describe this term, the user needs to specify the semantic rule
for poor. Assume the universe (or domain) of the discrete attribute Grade is given.
The user may specify that a "poor" grade means:
{(F, 1), (D, 0.8), (C, 0.2)}
where the left coordinate is an element in the universe (or domain) of the "Grade"
attribute, and the right coordinate is the degree of membership of that element in the fuzzy
set poor, e.g., μ poor (D) = 0.8. It is assumed that all the other attribute values not mentioned
in the set have the degree of membership of 0.
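A minimal sketch of such a discrete fuzzy set (illustrative only; the dictionary representation is an assumption):

```python
# A discrete fuzzy set as a dictionary from domain elements to membership degrees;
# elements not mentioned default to membership 0, as stated in the text.
poor_grade = {"F": 1.0, "D": 0.8, "C": 0.2}

def membership(fuzzy_set, value):
    """Degree of membership of a discrete attribute value in the fuzzy set."""
    return fuzzy_set.get(value, 0.0)

print(membership(poor_grade, "D"))  # 0.8
print(membership(poor_grade, "A"))  # 0.0
```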
When evaluating the degree of match for V (i,j)k , two factors play an important role, namely, the
semantic rules associated with the attribute value descriptions and the operators used in the
propositions. In the discrete case, the valid operators are "=" and "≠". Suppose that the two
propositions to be matched are as follows:
User-specified proposition: attr Opu X
System-discovered proposition: attr Ops S
where attr is the matching attribute, Opu and Ops belong to the set {=, ≠}, and X and S are
the attribute values of attr. Since the matching algorithm must take into consideration the
combination of both the operators and attribute values, four cases result:
Case 1. Opu is "=" and Ops is "=".
Case 2. Opu is "=" and Ops is "≠". The computation involves the membership values over U k
and the support of the proposition, normalized by |U k |, the size of U k ; if |U k | is too small,
the "≠" operator is not possible.
Case 3. Opu is "≠" and Ops is "=".
Case 4. Opu is "≠" and Ops is "≠". The computation again involves |U k | and the support; as
in Case 2, if |U k | is too small this combination is not possible.
In each case, V (i,j)k is computed from the membership values of X and S under the given
combination of operators.
5.2. Matching of continuous attribute values
When an attribute takes continuous values, the semantic rule for the term (X) takes on the
form of a continuous function. To simplify the user's task of specifying the shape of this
continuous function, we assume that the function has a curve of the form as shown in Figure
2. Thus, the user merely needs to provide the values for a, b, c, and d.
Figure 2. Membership function (a trapezoid-shaped curve, with membership rising from 0 at a
to 1 at b, staying at 1 between b and c, and falling back to 0 at d)
For example, the user's pattern is:
Here, young is a term for variable Age. Suppose that in this case Age takes continuous values
from 0 to 80. The user has to provide those 4 points using the values from 0 to 80. For
example, the user may give four such values within the range 0 to 80.
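A minimal sketch of such a membership function, assuming the curve in Figure 2 is the usual trapezoid that rises from a to b, stays at 1 between b and c, and falls from c to d; the specific points for "young" below are made up for illustration:

```python
def trapezoid_membership(x, a, b, c, d):
    """Membership degree of x for a fuzzy term defined by the four points a <= b <= c <= d."""
    if b <= x <= c:
        return 1.0
    if a < x < b:
        return (x - a) / (b - a)
    if c < x < d:
        return (d - x) / (d - c)
    return 0.0

# Hypothetical definition of "young" on Age in [0, 80]:
young = lambda age: trapezoid_membership(age, 0, 0, 25, 35)
print(young(20), young(30), young(50))   # 1.0 0.5 0.0
```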
In the continuous case, the set of operators that a proposition can use is larger than in the
discrete case; in particular, a range operator is used to represent an interval of values. With
this expansion, the total number of possible operator combinations to be considered is 25. All
the formulas are listed in the appendix.
In this continuous case, the formulas used for finding unexpected patterns (Section 4.1) and
for finding actionable patterns (Section 4.2) are slightly different. The difference is that for
finding unexpected patterns, it compares two propositions to see how different they are. But,
for finding actionable patterns, it checks to see whether the proposition used in the discovered
pattern is covered by the proposition used in the user-specified pattern or vice versa. For
example, we have
User-specified proposition: A - 5
System-discovered proposition:
For the fuzzy term 5 in the user-specified proposition, a membership function around the value
5 is assumed. In the case of finding unexpected patterns, V (i,j)k will be evaluated to be less
than 1 because the two propositions do not cover the same area. However, in the case of finding
actionable patterns, V (i,j)k evaluates to 1 when the discovered proposition is covered by the
user-specified proposition.
6. Evaluation
The proposed technique is implemented in Visual C++ on a PC. A test example is given below.
An analysis of the complexity of the algorithm is also presented.
6.1. A test example
This sub-section gives a test example. The set of patterns is generated from a real database
using C4.5. All the attribute names and also some attribute values have been encoded to
ensure confidentiality of the data. Since for pattern generation in this test we used C4.5, which
generates classification patterns and uses only one attribute as the class (or as the consequent),
we cannot test the rankings for unanticipated consequent and totally unexpected patterns. To
save space, only a small subset of the patterns generated by C4.5 is listed below for ranking.
-> Class NO
Pattern 2: A1 <= 49, A3 <= 5.49, A4 >
-> Class NO
Pattern 3: A1 > 49,
-> Class YES
Pattern 4: A1 > 49, A1 <= 50
-> Class YES
Pattern 5: A1 > 55
-> Class YES
Pattern
-> Class YES
Pattern 7: A1 > 41, A4 <=
-> Class YES
Pattern 8: A1 > 41, A1 <= 47, A3 <= 3.91, A7 > 106, A4 > 60, A10 <= 5.06
-> Class YES
Three runs of the system are conducted in this testing. In the first run, the focus is on
confirming the user's knowledge, while in the second run the focus is on finding unexpected
patterns. In the third run, the focus is on finding actionable patterns.
6.1.1 Confirming user's knowledge
The set of user expected patterns is listed below with the fuzzy set attached to each term
(attribute value used in the user's patterns).
User expected pattern set 1:
-> Class NO {(NO, 1), (YES, 0)}
Pattern 2: A1 >= Re_A1
-> Class YES {(NO, 0), (YES, 1)}
. Ranking results:
-> Class YES
confirming user specified pattern 2
-> Class NO
confirming user specified pattern 1
-> Class YES
confirming user specified pattern 2
-> Class YES
confirming user specified pattern 2
2:
-> Class NO
confirming user specified pattern 1
The rest of the patterns are cut off because of their low matching values.
6.1.2 Finding unexpected patterns
The set of user expected patterns for this test run is listed below, which is followed by three
types of ranking for finding unexpected patterns.
User expected pattern set 2:
-> Class YES {(NO, 0), (YES, 1)}
Pattern 2: A3 >= 2
-> Class YES {(NO, 0), (YES, 1)}
. Unexpected consequent:
2:
-> Class NO
contradicting user specified pattern 2
The rest of the patterns are cut off because of their low matching values.
. Contradictory conditions:
-> Class YES
contradicting user specified pattern 1
-> Class YES
contradicting user specified pattern 1
-> Class YES
contradicting user specified pattern 1
The rest of the patterns are cut off because of their low matching values.
. Unanticipated conditions:
-> Class YES
-> Class YES
-> Class YES
-> Class YES
The rest of the patterns are cut off because of their low matching values.
6.1.3 Finding actionable patterns
For simplicity, we use only two actions. One action has one user-specified action pattern,
while the other has two. Due to the confidentiality, we cannot provide the real actions, but use
First_Action, and Second_Action to represent them.
User's actions and patterns:
Action 1: First_Action
User patterns: Pattern 1: A1 >= Re_A1
-> Class YES {(NO, 0), (YES, 1)}
Action 2: Second_Action
User patterns: Pattern 1: A3 >= 2
-> Class YES {(NO, 0), (YES, 1)}
Pattern 2: A7 >= 150
-> Class YES {(NO, 0), (YES, 1)}
. Ranking results:
Action 1: First_Action
-> Class YES
actionable according to user-specified pattern 1.
-> Class YES
actionable according to user-specified pattern 1.
-> Class YES
actionable according to user-specified pattern 1.
-> Class YES
actionable according to user-specified pattern 1.
The rest of the patterns are cut off because of their low matching values.
Action 2: Second_Action
-> Class YES
actionable according to user-specified pattern 2.
-> Class YES
actionable according to user-specified pattern 2.
The rest of the patterns are cut off because of their low matching values.
6.2 Efficiency analysis
Finally, let us analyze the runtime complexity of the proposed technique. Here, we only
analyze the algorithm for finding unexpected patterns. For finding actionable patterns, the
basic algorithm is the same. Assume the maximal number of propositions in a pattern (a user-
expected pattern or a discovered pattern) is N. Assume also that the attribute value matching
(computing V (i,j)k and Z (i,j) ) takes constant time. Combining the individual matching values to
calculate w (i,j) also takes constant time. The computation of W i is O(|E|). Then, without
considering the final ranking which is basically a sorting process, the worst-case time
complexity of the technique is O(|E||B|N 2 ).
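A schematic sketch of the matching loops that give rise to this bound (illustrative only; patterns are represented as lists of propositions, and prop_match stands for the assumed constant-time attribute/value matching):

```python
def match_all(discovered, expected, prop_match):
    """discovered, expected: lists of patterns, each a list of at most N propositions.
    Work done: O(|E| * |B| * N^2) proposition comparisons, plus a final sort for ranking."""
    scored = []
    for b in discovered:                                     # |B| iterations
        best = 0.0
        for e in expected:                                   # |E| iterations
            s = sum(prop_match(p, q) for p in b for q in e)  # up to N * N comparisons
            best = max(best, s)
        scored.append((best, b))
    scored.sort(key=lambda t: t[0], reverse=True)            # the final ranking (a sort)
    return scored
```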
7. Related Work
Although interestingness has long been identified as an important issue in data mining [4],
most data mining techniques and tools do not deal with this problem. Instead, their
primary concern is to discover all the patterns in the given databases [4, 11, 15].
To date, some studies have been performed on the interestingness problem [1, 4, 8, 11, 12, 13,
15]. A number of interestingness measures have also been proposed. These measures can be
classified into two classes: objective measures and subjective measures. Objective measures
typically involve analyzing the discovered patterns' structures, their predictive performances,
and their statistical significance [4, 8, 12]. Examples of objective measures are: coverage,
certainty factor, strength, statistical significance and simplicity [3, 6, 8, 11].
It has been noted in [11], however, that objective measures are insufficient for determining the
interestingness of the discovered patterns. Subjective measures are needed. Two main
subjective measures are: unexpectedness, and actionability [4, 11].
[8] defined pattern interestingness in terms of performance, simplicity, novelty, significance,
etc. Most of the measures are objective measures with the exception of novelty. However, no
general method was proposed for handling novelty. Instead, domain-specific theories are
coded to aid in filtering out the uninteresting patterns.
[11] studied the issue of interestingness in the context of a health care application. A
knowledge discovery system, KEFIR, is built. The system analyzes health care information to
uncover "key findings". Key findings refer to the important deviations from the norms for
various indicators such as cost, usage, and quality. The degree of interestingness of a finding is
estimated by the amount of benefit that could be realized if an action can be taken in response
to the finding. Domain experts provide the recommended actions to be taken for various types
of findings. Once a finding is discovered, the system computes the estimated benefit for taking
a recommended action.
The method used in KEFIR presents a good approach for incorporating the subjective
interestingness into an application system. However, the approach is application specific. Its
domain knowledge (from domain experts) is hard-coded in the system as production rules.
The system cannot be used for any other application. In contrast, our method is general. It
does not make any domain-specific assumptions. A pattern analysis system based on our
technique can be attached to each data mining tool to help the user identify the interesting
patterns. Though domain-specific systems such as KEFIR are still the most effective method
for ranking patterns and actions, the cost of building such a system is very high.
[15] proposed to use probabilistic beliefs and belief revision as the framework for describing
subjective interestingness. Specifically, a belief system is used for defining unexpectedness. A
belief is represented as an arbitrary predicate formula. Associated with each belief is a
confidence measure. Two types of beliefs are presented, hard and soft beliefs. Basically, hard
beliefs cannot be changed even in the face of new evidence, while soft beliefs are modifiable
when new evidence arrives. If a pattern contradicts the hard beliefs of the user, then this
pattern is unexpected and interesting. The unexpectedness of a pattern is also defined with
respect to a soft belief. However, [15] is just a proposal. No system has been implemented that
utilizes this approach. For an actual implementation, a great deal of detail has to be worked out.
[15] also does not handle pattern actionability. Our proposed approach has been implemented
and tested. In addition, our approach allows the user to specify his/her beliefs (expectations) in
fuzzy terms which are more natural and intuitive than complex conditional probabilities that
the user has to assign in [15].
8. Conclusion
In this paper, we study the subjective interestingness issue in data mining from a domain
independent perspective. A general method for ranking the discovered patterns according to
their interestingness is proposed. A particular implementation has also been carried out. This
method is characterized by asking the user to input his/her expected or action patterns and
then the system ranks the discovered patterns by matching them against the expected or the
action patterns. This method can be used to confirm user's knowledge, to find unexpected
patterns, or to discover actionable patterns. Besides these applications, the proposed technique
may also be used to discover interesting trends by periodically analyzing the deviations of the
newly discovered patterns against the old patterns. This can be done simply by using the old
patterns as the user-specified patterns.
The proposed method is simple and effective. It is also highly interactive and allows the user
to identify interesting patterns incrementally. We do not claim, however, that the issues
associated with interestingness are fully understood. Much further research is still needed, e.g.,
we still do not have a good understanding of how objective interestingness measures such as
coverage and confidence interact with subjective interestingness measures, and how actions
themselves may be ranked to give the user more information.
Acknowledgment
We would like to thank Gui-Jun Yang for implementing the user interface of
the system. We thank Hwee-Leng Ong and Angeline Pang of Information Technology Institute for
many useful discussions. The project is funded by the National Science and Technology Board.
References
Database mining: a performance perspective.
Attribute focusing: machine-assisted knowledge discovery applied to software production process control
Knowledge discovery in databases: an overview.
Data driven discovery of quantitative rules in relational database.
Incremental disocvery of rules and structure by hierachical and parallel clustering.
Problems for knowledge discovery in databases and their treatment in the statistics interpreter explora.
Selecting among rules induced from a hurricane database.
Systems for knowledge discovery in databases.
An application of KEFIR to the analysis of healthcare information.
The interestingness of deviations.
On subjective measures of interestingness in knowledge discovery
Learning useful rules from inconclusive data.
Fuzzy set theory and its applications.
| pattern ranking;interesting patterns;post-analysis of patterns;knowledge discovery;unexpectedness |
628028 | Automatic Text Categorization and Its Application to Text Retrieval. | AbstractWe develop an automatic text categorization approach and investigate its application to text retrieval. The categorization approach is derived from a combination of a learning paradigm known as instance-based learning and an advanced document retrieval technique known as retrieval feedback. We demonstrate the effectiveness of our categorization approach using two real-world document collections from the MEDLINE database. Next, we investigate the application of automatic categorization to text retrieval. Our experiments clearly indicate that automatic categorization improves the retrieval performance compared with no categorization. We also demonstrate that the retrieval performance using automatic categorization achieves the same retrieval quality as the performance using manual categorization. Furthermore, detailed analysis of the retrieval performance on each individual test query is provided. | Introduction
Text categorization has recently become an active research topic in the area of information retrieval. The
objective of text categorization is to assign entries from a set of pre-specified categories to a document.
A document here refers to a piece of text. Categories may be derived from a sparse classification scheme
or from a large collection of very specific content identifiers. Categories may be expressed numerically
or as phrases and individual words. Traditionally this categorization task is performed manually by
domain experts. Each incoming document is read and comprehended by the expert and then it is
assigned a number of categories chosen from the set of pre-specified categories. It is inevitable that
a large amount of manual effort is required. For instance, the MEDLINE corpus, which consists of
medical journal articles, requires considerable human resources to carry out categorization using a set
of MeSH (Medical Subject Headings) categories [11].
A promising way to deal with this problem is to learn a categorization scheme automatically from
training examples. Once the categorization scheme is learned, it can be used for classifying future
documents. It involves issues commonly found in machine learning problems. Since a document may
be assigned to more than one category, the scheme also requires the assignment of multiple categories.
There is a growing body of research addressing automatic text categorization. For instance, a
probabilistic model, in the early work of Lewis [8], makes use of Bayesian independence classifiers
for categorization. He mainly studies the effect of feature selection and clustering on the automatic
categorization of newswire articles. Masand et. al. [10] adopt a memory-based reasoning strategy to
classify news stories. After k best documents are retrieved, the weight of the associated categories are
obtained by summing similarity scores from the near matches. Yang [20] develops a technique known as
Expert Network. This network links the terms in a document with its categories and there is a weight
on each link. Both approaches of Masand et. al. and Yang are similar to our approach in that these
are based on variants of the nearest neighbor algorithm. However, they do not mention a model for
parameter selection. Other methods such as decision trees [1], linear classifiers [9], context-sensitive
learning [3], and learning by combining classifiers [7] have also been proposed. These approaches
typically construct a classifier for each category and the categorization process becomes a binary decision
problem for the particular category. In contrast, our approach learns all the categories for a document
at one time. More recently Lewis et. al. [9] compare three categorization algorithms: Rocchio's,
Widrow-Hoff and Exponential Gradient on the Heart Disease subset of a MEDLINE test collection.
Yang [21] also tests her Expert Network method on the same Heart Disease collection as well as a
different MEDLINE test collection. We compare our categorization results to theirs in a later section.
All recent efforts on automatic text categorization have focused on the categorization task alone.
One useful application for automatic categorization is to support effective text retrieval. Apart from
studying the effectiveness of automatic categorization directly, the second objective of this paper is to
investigate the application of this categorization process to text retrieval. In particular, we wish to study
whether the automatically assigned categories will improve the retrieval performance compared with
no categorization. We also investigate whether automatic categorization will improve, reduce or have
no effect on the retrieval performance achieved using manual categorization. Furthermore, we analyze
the retrieval performance on the basis of each individual test query to gain insight on the interaction
of our automatic categorization and our text retrieval approaches.
This paper is organized in two parts: Part I focuses directly on the automatic categorization approach
and Part II focuses on the application of categorization to text retrieval. For Part I, a description of the
categorization approach is given in Section 2. The following section discusses different categorization
quality metrics. This is followed by a section presenting the experimental results for automatic categorization
on two document collections, namely, the HERSH [5] and the OHSUMED [6] test collections.
For Part II, Section 5 presents the text retrieval approach based upon the automatic categorization
approach. It is followed by a section describing a series of text retrieval experiments on the HERSH
corpus. Finally Section 8 provides the conclusions of this paper.
Part I
2 A Description of the Categorization Approach
2.1 An Outline of the Approach
The basic components of the automatic categorization approach consists of two processes, namely the
category extraction process and the parameter selection process. The category extraction process is
responsible for extracting the appropriate categories for an input document. Central to this process is
a category learning model. This model provides an algorithm to identify categories for a new document
from a collection of existing pre-categorized document examples. We propose an approach derived from
a combination of a machine learning technique known as instance-based learning and a text retrieval
technique known as retrieval feedback. Retrieval feedback has been discussed in [2], [4], [13], [16], and
[17]. It is a technique that is different from the traditional relevance feedback technique. Essentially,
retrieval feedback supports a kind of automatic query refinement procedure which does not require
manual relevance judgments from users as in traditional relevance feedback.
Many existing approaches build a separate classifier for each category. A new document is processed
by each classifier to determine if the corresponding category is appropriate. In contrast, our approach
operates at the document level. A set of categories is identified for a document in a single run. The
category extraction process operates according to the category learning model. The learning model
requires some operational parameters which will be selected in advance by the parameter selection
process. This process also makes use of the same category learning model embedded in the category
extraction process. We pose this parameter selection task as a simple optimization problem. Specifically
the parameters are chosen so that the performance of the categorization process, measured by a metric,
is optimized. We use a tuning set approach to achieve this task.
The interaction of all the components in our categorization approach is illustrated in Figure 1. For
a given domain, we first invoke the parameter selection process to determine the appropriate parameter
values. This step is carried out only once off-line at the beginning. After this step, we can determine
the categories for a new document via the category extraction process which can be done efficiently
Figure 1: The Automatic Categorization Approach. The off-line parameter selection process and the
on-line category extraction process both use the category learning model; parameter selection operates
on the training document instances and supplies the parameters to the category extraction process,
which maps a new document to a set of categories.
on-line. The category learning model will be presented first since it is used in both processes. Then it
is followed by a description of the parameter selection process and the category extraction process.
2.2 The Category Learning Model
Recall that the objective of this model is to select the appropriate categories for an input document. We
adopt an extension of instance-based learning. There is a collection of pre-categorized documents used
for training. Each document contains a free-text portion (typically the title and the abstract) and a
set of categories that have been manually assigned. A document in the training collection is considered
as an instance or exemplar represented by T ; C where T and C denote representations of the free-text
and the categories of the document respectively.
We adopt the vector space technique as the central representation framework for our model. Thus
T is a vector of terms: T = (t 1 , t 2 , . . . , t p ),
where p is the total number of unique terms in the collection's free-text domain and t i is the weight
reflecting the relative importance of the term i for characterizing the document. Standard automatic
document indexing techniques in information retrieval can be employed to extract terms from a docu-
ment. Typically a term can be a word or a phrase aside from common function words such as "the".
Word stemming can be applied in the term extraction process. Our category learning model imposes
no restriction on word stemming methods. Similarly C is a vector representing the categories assigned
to the document: C = (c 1 , c 2 , . . . , c q ),
where c i is the weight for the category i and q is the total number of unique categories.
A number of weighting schemes can be used for the vectors T and C. For instance, we can use
the product of term frequency and inverse document frequency as the weight for the terms in T . Term
frequency is the number of occurrences of a term in a document. Inverse document frequency is related
to the rarity of a term in a document collection. More details of these two quantities can be found in
[15]. For the vector C, we can use the category frequency as the weight. Usually the category frequency
is binary. Both vectors are normalized before any further processing is done.
Let S denote an incoming document which needs to be categorized. Since the free-text of the
document is available, we represent S by a vector of terms: S = (s 1 , s 2 , . . . , s p ),
where s i is the weight for the term i in S. Here, the weighting scheme must be the same as the one
used in the representation of T . The vector S should also be normalized.
As in instance-based learning, this document S is matched against each instance (i.e., document)
in the training collection according to a similarity function Δ. This function Δ produces a score for
each training document instance. The higher the score, the higher is the similarity between S and a
document instance. A simple and effective choice of this function is the inner product Δ(S, T ) = Σ i s i t i .
Note that both vectors S and T have been normalized before the calculation.
Based on this score, we rank the document instances in descending order. In instance-based learning,
the categories associated with the most similar document (i.e., the highest score) are the learned
categories for S. Instead, in our approach, we gather the top N document instances to form a set \Psi
(note that j\Psij= N ). Then, the categories associated with the documents in \Psi are further considered.
If we think of document S as a query, the set \Psi can be viewed as a set containing the N most
relevant documents retrieved for the query S. In retrieval feedback, a query is automatically refined or
expanded using the terms in \Psi. For our categorization task, we wish to find the appropriate categories
for the query (i.e., document S). Inspired by the retrieval feedback technique, we expand the query
with categories selected from those associated with the documents in \Psi. To do this, a weight c' i is
first calculated for each category i associated with \Psi. This weight is computed by:
c' i = Σ k=1,...,N c ki (1)
where c ki is the weight of category i in the k-th document in the set \Psi.
We rank the categories according to two criteria. First the categories are ranked in descending order
by the number of documents in \Psi in which they occur. Next they are ranked in descending order of
the computed weight c' i . Finally the top M categories are extracted as the learned categories for the
document S.
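The following is a small sketch of this category extraction step (not the authors' implementation; the sparse-dictionary representation of the normalized vectors is an assumption):

```python
from collections import defaultdict

def extract_categories(s_vec, exemplars, N, M):
    """s_vec: normalized term-weight dict for the new document S.
    exemplars: list of (t_vec, cats) pairs, where t_vec is a normalized term-weight dict
    and cats maps each assigned category to its weight. Returns the top M categories."""
    def dot(u, v):
        return sum(w * v.get(t, 0.0) for t, w in u.items())
    # The N most similar training documents form the set Psi.
    psi = sorted(exemplars, key=lambda ex: dot(s_vec, ex[0]), reverse=True)[:N]
    doc_count = defaultdict(int)     # number of documents in Psi containing the category
    weight = defaultdict(float)      # c'_i, the summed category weight of Equation (1)
    for _, cats in psi:
        for cat, w in cats.items():
            doc_count[cat] += 1
            weight[cat] += w
    # Rank first by document count, then by the summed weight, and keep the top M.
    ranked = sorted(weight, key=lambda cat: (doc_count[cat], weight[cat]), reverse=True)
    return ranked[:M]
```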
Note that N and M are the two parameters involved in the learning process. In the category
extraction process, a selected value for each parameter is required. Instead of using arbitrary values, we
propose a parameter selection process where good values for the parameters are determined in advance.
This also allows the parameters to capture the characteristics of the document collection.
2.3 The Parameter Selection Process
The purpose of the parameter selection process is to determine suitable parameter values to be used in
the category extraction process. We make use of a tuning set approach to achieve this. It is able to
capture specific distinct characteristics of the document collection. This process is conducted off-line
and needs to be performed only once for a given document collection.
Recall that we gather a set of documents with known categories as a training document collection.
This training document collection is further partitioned into two parts. One part, denoted by \Upsilon, is
treated as the set of exemplars. The other part, denoted by \Theta, is the tuning set. Each document in
\Theta is categorized based on the exemplars in \Upsilon using the same categorization learning model described
above. Since the correct categories (i.e. manually assigned categories) are available for the documents
in \Theta, we can measure the categorization performance by comparing the learned categories and the
correct categories using a quality metric. This allows us to evaluate the categorization performance
for a particular choice of parameters. We repeat this tuning process in a systematic way for different
combination of parameters. The algorithm for this parameter selection process is summarized as follows:
1. choose an initial set of parameter values
2. for each document in \Theta
2.1 invoke the category learning model using the exemplar set \Upsilon
3. evaluate the overall category performance using a quality metric
4. update the best parameter values if a better performance is found
5. choose the next set of parameter values
6. go to step 2 unless a termination criterion is reached
For step 5, there are a variety of ways to choose the next set of parameters. In our experiments,
we use a generate-and-test scheme which basically selects parameter values within a specified range in
a predefined manner. More advanced schemes can be adopted such as the hill-climbing scheme and
the best-first search scheme. In step 3, we need a quality metric to measure the categorization perfor-
mance. Several quality metrics are proposed, namely, the Category Perspective Metric, the Document
Perspective Metric, and the Decision Perspective Metric. These metrics will be described in the next
section.
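As an illustrative sketch of the generate-and-test scheme (the grids of parameter values and the function interfaces are assumptions; categorize can be the extract_categories sketch above and quality any metric from Section 3):

```python
def select_parameters(tuning_docs, exemplars, n_grid, m_grid, categorize, quality):
    """tuning_docs: list of (term_vec, true_categories) pairs from the tuning set.
    Returns the (N, M) pair maximizing the chosen categorization quality metric."""
    best, best_score = None, float("-inf")
    for N in n_grid:                          # e.g. 5, 10, 15, ... (assumed grid)
        for M in m_grid:                      # e.g. 10, 20, ..., 60 (assumed grid)
            preds = [categorize(vec, exemplars, N, M) for vec, _ in tuning_docs]
            score = quality(preds, [truth for _, truth in tuning_docs])
            if score > best_score:            # keep the best parameter values found
                best, best_score = (N, M), score
    return best, best_score
```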
2.4 The Category Extraction Process
This process is responsible for extracting the categories for a newly arrived document. Usually we
need to accomplish it in an on-line fashion. As illustrated in Figure 1, we make use of the category
learning model to learn the desired categories. The parameters in the learning model should have been
previously computed in the parameter selection process.
We evaluate the categorization performance using a new set of documents as the test collection
which is different from the training collection. For our test collections, the correct (manually assigned)
categories are also known for each of these documents. Therefore, we can measure the categorization
performance by comparing the categories learned with the known categories using a quality metric.
Similar to the parameter selection process, a variety of quality metrics can be used. However, it is
essential that the metric in the parameter selection process stays consistent with the metric used in the
evaluation process. We maintain this consistency within each experimental run.
3 Categorization Quality Metrics
3.1 Category Perspective Metric
This evaluation metric operates with the category as the focal point. For each category, the categorization
goal is viewed as a binary classification problem. Given a category, the algorithm decides whether
each document is in or not in this category. With a single category as the focus, let
a = the number of documents assigned to the category both manually and automatically,
b = the number of documents assigned to the category automatically but not manually,
c = the number of documents assigned to the category manually but not automatically.
Then two common measures, namely recall (R) and precision (P), can be defined as:
R = a / (a + c), P = a / (a + b).
We use the F-measure, a weighted combination of recall and precision proposed in [9], as the basis
of our quality metric:
F β = (β^2 + 1) P R / (β^2 P + R).
A common usage of this measure is to set β to 1. Hence,
F 1 = 2 P R / (P + R) = 2a / (2a + b + c). (2)
The F 1 score is computed for each category in the domain and these scores are averaged to determine
the mean F 1 score. Since this score averages performance across categories, we refer to this metric as
the Category Perspective Metric.
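A small sketch of this metric (illustrative; representing the assignments as document sets per category is an assumption):

```python
def mean_f1(predicted, manual, categories):
    """predicted, manual: dicts mapping each category to the set of documents assigned to it.
    Returns the mean per-category F1 score, i.e. the Category Perspective Metric."""
    scores = []
    for cat in categories:
        p, m = predicted.get(cat, set()), manual.get(cat, set())
        a = len(p & m)     # assigned both automatically and manually
        b = len(p - m)     # automatically but not manually
        c = len(m - p)     # manually but not automatically
        denom = 2 * a + b + c
        scores.append(2 * a / denom if denom else 0.0)
    return sum(scores) / len(scores) if scores else 0.0
```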
3.2 Document Perspective Metric
This evaluation approach has been adopted by Yang [19]. Here the categorization results are assessed
with the document as the focal point. Since categories are associated with each document with a certain
strength (see Equation 1), all categories may be ranked in the context of each document. The greater
the ability to rank manually assigned categories higher than others, the better is the categorization
technique. A summary measure assessing this ability is the 11-AvgP (11-point average precision) score
or 10-AvgP (10-point average precision) score [14]. Both scores assess the ranking of categories for each
document and then take the average across documents. In our experiments, we compute both 10-AvgP
and 11-AvgP scores.
3.3 Decision Perspective Metric
This evaluation scheme derives from the early work of Lewis [8]. Given a document and a category, a
categorization decision is made to determine whether or not to assign this category to the document.
When automatic categorization is conducted, a number of these decisions are made. Out of these
decisions, some may match with the manual decisions, while others may not. This metric compares
the automated decisions with the manual ones. An "assignment" is defined as the positive decision to
assign a category to a document. Let
a = the number of correct assignments made automatically,
b = the number of assignments made automatically,
c = the number of assignments made manually.
Then, we can define "micro recall", "micro precision", and "micro F β measure" as follows:
micro recall = a / c, micro precision = a / b,
and micro F β is obtained by combining these two values as in Equation (2), so that micro F 1 = 2a / (b + c).
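An illustrative sketch of the micro-averaged measures (representing the decisions as sets of (document, category) pairs is an assumption):

```python
def micro_measures(auto_assignments, manual_assignments):
    """Each argument is a set of (document, category) pairs, i.e. positive assignment decisions.
    Returns (micro recall, micro precision, micro F1)."""
    a = len(auto_assignments & manual_assignments)   # correct automatic assignments
    b = len(auto_assignments)                        # all automatic assignments
    c = len(manual_assignments)                      # all manual assignments
    recall = a / c if c else 0.0
    precision = a / b if b else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return recall, precision, f1
```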
The current literature does not yet indicate which of these three metric perspectives is the most
appropriate for text retrieval. Thus we use all three perspectives with the expectation that our retrieval
experiments will offer some insights on these options.
4 Experimental Results on Categorization
4.1 The HERSH Corpus
We conducted a series of experiments using 2,344 medical documents from the MEDLINE database
referred to as the HERSH corpus [5]. Each document includes a free-text portion and a set of manually
assigned MeSH (Medical Subject Headings) categories. In our experiments, individual words are
extracted from the MeSH phrases and then stemmed. Henceforth we refer to these stemmed words as
our "categories". This approach is justified since our retrieval strategy operates at the level of word
stems. We conducted automatic categorization on this collection and evaluated the performance of the
categorization process.
We randomly divided the HERSH corpus into two partitions, namely the training collection, referred
to as TR, of 586 documents and the test collection, referred to as TE, of 1,758 documents. The division
of this corpus is the same as the experiments done by some previous work such as [19, 20] so that
performance comparison can be made. To make use of the training collection TR in the parameter
selection process, we further divided it randomly into two sets. The first set containing 146 documents
is the set of exemplars (\Upsilon) for training. The other set containing 440 documents forms the tuning set
(\Theta). We make the size of \Theta be three times of the size of \Upsilon since the size of TE is three times of the
size of TR. After the parameter selection process, the whole training set of 586 documents was used as
the set of exemplars for categorization. To evaluate the categorization performance, we used the test
collection TE. We conducted some experiments under each quality metric mentioned above and the
results are presented below.
4.1.1 Category Perspective Results on the HERSH Corpus
Three experimental runs labeled C0, C35, and C50 were conducted. They differ in the pool of categories
involved. The C0 run involves all manually assigned categories which exist in both the set of training
collection TR and the test collection TE. The C35 and C50 runs limit the category pool in C0 to those
which occur in TR with a document frequency greater than 35 and 50 respectively. Document frequency
is the number of documents to which a specific category is assigned.
Tables 1, 2 and 3 present the mean F 1 scores obtained for all different parameter combinations for
the C0, C35 and C50 runs respectively in the parameter selection process. The parameter M ranges
from 10 through 60 in steps of 10, while the parameter N ranges upward from 5 in steps of 5. The
tables indicate that the desirable values for N and M are 5 and 50 respectively for the C0 run, 5 and
respectively for the C35 run, respectively for the C50 run. These parameter values were
used in the final categorization process. Table 4 summarizes the results achieved in both the parameter
selection and categorization processes for all three experimental runs. The size of the category pool
diminishes drastically from C0 to C35 and above. It can be seen from this table that as the frequency
threshold on the category set increases, the F 1 score improves.
Table 1: Parameter Selection: Mean F 1 Scores for the HERSH Corpus (C0 run).
Table 2: Parameter Selection: Mean F 1 Scores for the HERSH Corpus (C35 run).
Table 3: Parameter Selection: Mean F 1 Scores for the HERSH Corpus (C50 run).
Table 4: Summary of Runs Based on Category Perspective, HERSH Corpus. For each run it lists, for
parameter selection based on the TR collection, the number of categories, the F 1 score and the chosen
N and M, and, for categorization evaluation based on the TE collection, the number of categories and
the F 1 score. For the run with 43 categories, the F 1 score is 0.509 in parameter selection and 0.54 in
evaluation.
4.1.2 Document Perspective Results on the HERSH Corpus
In this perspective all categories are ranked in the context of a document, thus the parameter M has no
relevance. Table 5 presents the parameter selection process based on the Document Perspective Metric.
The ALL run represents the experiment concerning all categories appearing in either the training or
the testing document collection. The TRN run represents the experiment concerning those categories
appearing in the training document collection. In both experiments, the optimal parameter value for
was 15 after the parameter selection process. Table 6 summarizes the runs based on the Document
Perspective Metric. For the ALL run, the 10-AvgP and the 11-AvgP for the testing collection are 0.4326
and 0.4789 respectively.
Table 5: Parameter Selection: Document Perspective Scores for the HERSH Corpus (D run)
Table 6: Summary of Runs Based on Document Perspective, HERSH Corpus. For each run it lists, for
parameter selection based on the TR collection, the number of categories, 10-AvgP, 11-AvgP and N,
and, for categorization evaluation based on the TE collection, the number of categories, 10-AvgP and
11-AvgP.
Table 7: Parameter Selection: Mean Micro F 1 Scores for the HERSH Corpus (L0 run)
Table 8: Parameter Selection: Mean Micro F 1 Scores for the HERSH Corpus (L35 run)
Table 9: Parameter Selection: Mean Micro F 1 Scores for the HERSH Corpus (L50 run)
Table 10: Summary of Runs Based on Decision Perspective, HERSH Corpus. For each run it lists, for
parameter selection based on the TR collection, the number of categories, micro recall, micro precision,
F 1 , N and M, and, for categorization evaluation based on the TE collection, the number of categories,
micro recall, micro precision and F 1 . For the run with 43 categories, the scores are recall 0.723, precision
0.514 and F 1 = 0.601 with N = 15, M = 20 (selection), and recall 0.715, precision 0.520 and F 1 = 0.602
(evaluation).
4.1.3 Decision Perspective Results on the HERSH Corpus
Similar to the Category Perspective Metric, three different experimental runs, L0, L35 and L50 were
conducted based on the same pool of categories as used in the C0, C35 and C50 runs respectively.
Tables
7, 8 and 9 show the mean micro F 1 scores achieved for L0, L35 and L50 runs in the parameter
selection process. From the tables, the optimal values for N and M are 15 and respectively for
the L0 run, 15 and 20 respectively for the L35 run, 15 and 20 respectively for the L50 run. Table 10
gives the summary of the parameter selection and categorization evaluation runs based on the Decision
Perspective Metric. The table includes micro-recall and micro-precision scores. Once again it is clear
that the scores improve as the frequency threshold increases.
4.2 The OHSUMED Corpus
We conducted a series of experiments using a much larger document test corpus known as OHSUMED
[6]. It is a subset of the MEDLINE database and consists of medical documents from 1987 to 1991. These
documents are also manually assigned MeSH categories. In our experiment, we used those documents
that have both the abstract and MeSH categories assigned. The number of documents in each year is
36,890 for 1987, 47,055 for 1988, 49,805 for 1989, 49,481 for 1990, and 50,216 for 1991. Thus the total
number of documents in the OHSUMED corpus is 233,447.
The OHSUMED corpus was divided into a training collection and a test collection chronologically.
We used 183,231 documents from 1987 to 1990 as the training collection and it is also used for the
parameter selection process. The documents in 1991 was used as the test collection. Of the 183,231
documents in the training collection, we further divided it into two sets for the parameter selection
process. The first set which consists of 133,750 documents from 1987 to 1989 was used as the set of
exemplars (\Upsilon) for training. The other set which consists of 49,481 documents from 1990 was used as
the tuning set (\Theta).
4.2.1 Experimental Results on OHSUMED
The experiment for the OHSUMED corpus was conducted using the Category Perspective Metric.
We limited the category pool to those which occur in the corresponding exemplar set with a frequency
greater than 75. Table 11 presents the mean F 1 scores obtained for all different parameter combinations
tested in the parameter selection process. It indicates that the desirable values for N and M are
20 and 30 respectively. These parameter values were used in the categorization evaluation process.
Table
12 summarizes the results achieved in this experiment. It shows that the mean F 1 score for the
categorization process is 0.441.
Table 11: Parameter Selection: Mean F 1 Scores for the OHSUMED Corpus, Category Perspective. For
N = 15, the scores across the tested values of M are 0.303, 0.378, 0.419, 0.432, 0.426, 0.410 and 0.388.
Table 12: Summary of Runs for the OHSUMED Corpus. For the OHSUMED run, parameter selection
based on the training set uses 2,725 categories with an F 1 score of 0.435 and N = 20, M = 30; the
evaluation based on the testing set uses 2,725 categories with an F 1 score of 0.441.
4.3 Comparative Performance
Yang did a similar experiment for the HERSH corpus using the same training and testing collections
[20]. The 10-AvgP obtained by Yang based on document perspective was 0.349. Compared with this
result, our performance is quite encouraging since the 10-AvgP of our approach is 0.4326. However a
difference in our studies is that Yang uses the complete MeSH phrase as a single category. In contrast
our categories are the single word stems generated from the MeSH phrases.
For the OHSUMED corpus, Lewis et. al. conducted an experiment using the same training and
testing collections on categories associated with the Heart Disease [9]. They obtained an F 1 score of 0.53
based on category perspective using the Widrow-Hoff algorithm. Yang also conducted an experiment
on the same corpus and partition using LLSF technique [21]. The F 1 score obtained was 0.55. However,
both Lewis and Yang used only 119 categories associated with the Heart Disease in their experiments
while we used the whole set of 2,725 categories in our experiment. Comparisons are difficult since we
work with the complete OHSUMED collection. Moreover they used phrases as categories while we
adopt a word stem approach, since our focus is on retrieval based on word stems.
Part II
5 Categorization For Text Retrieval
The automatic categorization method described in Part I can support a variety of applications such
as text classification [7], text filtering [12], and text retrieval [16]. In the second part of the paper,
we investigate the application of automatic categorization to text retrieval. Text retrieval aims at
retrieving relevant documents from a document corpus given a query expressing the information need
of a user. We compare text retrieval performance using documents that are automatically categorized
with performance using manually categorized documents. We also assess retrieval performance against
a baseline which has no categories in its documents.
5.1 Document and Query Representations
Similar to categorization, documents and queries in text retrieval are represented by vectors. However,
the representation and manipulation of vectors are different from the ones used in categorization.
Each document D is represented by two vectors, namely the free-text vector and the category
vector. The free-text vector is derived from the free-text portion of the document (e.g., the title and
the abstract). The category vector is derived from the categories assigned to the document. In essence,
a document is represented as follows:
D i = (t i1 , t i2 , . . . , t ip ; c i1 , c i2 , . . . , c iq ),
where t ij represents the weight of term j in D i and p is the vocabulary size of the free-text portions of
all documents in the corpus. Similarly, c ij represents the weight of category j in document D i and q is
the vocabulary size of all categories.
We choose this representation scheme for retrieval purposes based upon our previous work [16].
However, this technique has previously assumed the manual assignment of categories to each document.
Instead we now apply the automatic categorization technique described in Part I so that the manual
assignment step can be eliminated. This is particularly useful when human experts on categorization
are not available or not affordable.
Each query, similar to a document, is represented by a free-text vector and a category vector. The
free-text vector is constructed by the same method used for the free-text vectors of the documents.
Since the natural language queries do not arrive with search criteria identified, we use two different
ways to design the category vectors for queries:
• Simple-Query design: The query's free-text vector is copied to form the query's category vector.
• Advanced-Query design: The query's category vector is constructed by applying the category learning model described in Part I. More specifically, the free-text vector is used to conduct an initial retrieval run on the corpus. The top U documents retrieved are analyzed in terms of their categories. From these, the top V categories are extracted to form the category vector. This strategy was explored successfully for the MEDLINE collection in our previous work [16]; an illustrative sketch of this construction is given below.
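The Advanced-Query construction can be pictured with a short sketch. This is illustrative only: the cosine ranking, the frequency-based category weights, and all function names are our assumptions, not part of the SMART-based system described here.

```python
from collections import Counter

def advanced_query_category_vector(query_free_text_vec, corpus, U=20, V=10):
    """Build a query category vector by retrieval feedback (illustrative sketch).

    corpus: list of (free_text_vec, categories) pairs, where each vector is a
    dict mapping term -> weight and categories is a list of category word stems.
    """
    def cosine(a, b):
        dot = sum(w * b.get(t, 0.0) for t, w in a.items())
        na = sum(w * w for w in a.values()) ** 0.5
        nb = sum(w * w for w in b.values()) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    # Initial run: rank documents by free-text similarity and keep the top U.
    ranked = sorted(corpus, key=lambda d: cosine(query_free_text_vec, d[0]), reverse=True)
    top_docs = ranked[:U]

    # Collect the categories of the top U documents and keep the V most frequent ones.
    counts = Counter(c for _, cats in top_docs for c in cats)
    return {cat: float(freq) for cat, freq in counts.most_common(V)}
```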
5.2 The Retrieval Model
The retrieval step is conducted by computing the similarity between a document D and a query Q as follows:

  Sim(D, Q) = Sim(D_free-text, Q_free-text) + λ · Sim(D_category, Q_category)        (4)

where D_free-text and D_category are the free-text vector and the category vector for D respectively. Similarly, Q_free-text and Q_category are the free-text vector and the category vector for Q respectively. λ is a parameter that allows one to control the relative emphasis on the two types of vectors during retrieval. This technique allows the retrieved documents to be ranked against the query.
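For illustration, a minimal rendering of this scoring follows; the cosine similarity and the dictionary-based vectors are simplifying assumptions of ours, with lam standing for the parameter λ of Equation 4.

```python
def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = sum(w * w for w in a.values()) ** 0.5
    nb = sum(w * w for w in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def combined_similarity(doc_free, doc_cat, qry_free, qry_cat, lam=1.25):
    """Sim(D, Q) = Sim(free-text vectors) + lambda * Sim(category vectors)."""
    return cosine(doc_free, qry_free) + lam * cosine(doc_cat, qry_cat)

def rank(documents, qry_free, qry_cat, lam=1.25):
    """Rank documents (dicts with 'free' and 'cat' vectors) against the query."""
    return sorted(documents,
                  key=lambda d: combined_similarity(d["free"], d["cat"], qry_free, qry_cat, lam),
                  reverse=True)
```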
In each retrieval experiment, the documents are ranked with respect to each query. To evaluate
the retrieval performance, we compute the average of precision scores of all queries at 11 recall points
starting from 0.0 to 1.0 in steps of 0.1. Then the average of these 11 precision scores is computed to get
a single measure. This measure becomes the 11-point average precision (11-AvgP) score for evaluating
the retrieval performance. This averaging technique yields macro average data wherein each query is
allowed to contribute equally to the overall performance score for the system [14].
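A small sketch of this evaluation measure, assuming binary relevance judgments and the usual interpolation of precision at the 11 standard recall levels (the exact SMART implementation may differ in details):

```python
def eleven_point_avg_precision(ranked_ids, relevant_ids):
    """Interpolated precision averaged over recall levels 0.0, 0.1, ..., 1.0."""
    relevant = set(relevant_ids)
    if not relevant:
        return 0.0
    hits, prec_at_recall = 0, []
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant:
            hits += 1
            prec_at_recall.append((hits / len(relevant), hits / rank))
    levels = [i / 10 for i in range(11)]
    interp = []
    for level in levels:
        candidates = [p for r, p in prec_at_recall if r >= level]
        interp.append(max(candidates) if candidates else 0.0)
    return sum(interp) / len(levels)

def macro_average(per_query_scores):
    """Each query contributes equally to the overall 11-AvgP of the system."""
    return sum(per_query_scores) / len(per_query_scores)
```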
6 Experimental Design
The HERSH corpus in the categorization experiment in Part I was also used in our text retrieval
experiment. The automatic categorization strategies that yielded the best performance in Part I form
the basis of our retrieval experiments. The retrieval experiment was conducted on the test collection
subset (TE) composed of the 1,758 documents from the HERSH corpus. In Part II here, we refer to
this collection as the RTT (ReTrieval Test) collection. The HERSH corpus is accompanied by a set of
queries which are in the form of simple natural language expressing an information need. For each query,
there is a set of relevant documents which have been manually judged. Thus the retrieval performance
can be measured by comparing the documents retrieved by the system and the ones manually judged.
We chose those queries that have at least one relevant document in the RTT collection. There are 73
queries satisfying this requirement.
The best strategies within each of the three evaluation perspectives: Category, Decision and Document
were tested for retrieval performance. Each strategy is assessed against two baselines:
• Baseline 1 (B1): Retrieval without MeSH categories (i.e., retrieval using free-text alone).
• Baseline 2 (B2): Retrieval using free text and manually assigned MeSH categories.
To conduct text retrieval experiments, we make use of the SMART system [14] since it supports the vector space model. (Note that the stems of the individual words of the MeSH phrases form our domain of categories for these experiments.)
6.1 Document Representations
SMART allows a wide variety of weighting schemes for computing term weights. Each scheme is
represented by a triple: ABC. A represents the term frequency component, i.e., the number of times
the term occurs in the document. B represents the inverse document frequency component which
increases with the rarity of the term in the database. C represents the normalization component for the
length of the document. Based on results from our prior experimentation with the same test corpus
[16, 18], we used the atn schemes for documents: a stands for augmented term frequency, t represents
the inverse document frequency factor and n represents no normalization for length. If we describe this
scheme more precisely, the weight of a term in a document is given by:

  weight = (0.5 + 0.5 · tf / m) · log(R / n)

where R is the number of documents in the RTT collection; n is the number of documents which contain the term being considered; tf is the frequency of the term in the document; and m is the maximum tf value for the current document. The objective of division by m is to normalize the term frequency by the maximum tf observed in the document. The term log(R/n) corresponds to the inverse document frequency weight.
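As a hedged illustration, the atn weight can be computed as below; the 0.5/0.5 augmentation constants are the conventional SMART choice and are assumed here, as is the logarithm base.

```python
import math

def atn_weight(tf, max_tf_in_doc, num_docs, doc_freq):
    """Augmented term frequency times inverse document frequency, no length normalization."""
    augmented_tf = 0.5 + 0.5 * tf / max_tf_in_doc   # 'a': normalize tf by the document's max tf
    idf = math.log(num_docs / doc_freq)             # 't': log(R / n)
    return augmented_tf * idf                       # 'n': no length normalization
```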
A total of 9 different document representations were tested. All representations include a free-text
vector. The difference is in their MeSH category vectors.
Representation 1 (Baseline 1): No MeSH category vector.
Representation 2 (Baseline 2): The MeSH category vector is formed from manual categorization.
Representations 3-5: The MeSH category vector is derived by automatic categorization based on
the Category Perspective Metric as described in Section 4.1.1. The three best strategies, one each
from C0, C35 and C50 were tested.
Representations 6-8: The MeSH category vector is derived by automatic categorization based on
the Decision Perspective Metric as described in Section 4.1.3. The three best strategies, one each
from L0, L35 and L50 were tested.
Representation 9: The MeSH category vector is derived by automatic categorization based on the
Document Perspective Metric as described in Section 4.1.2.
6.2 Query Representations
Each of the 73 test queries arrives as a simple natural language expression of an information need.
Some queries are expressed in full sentences, others are incomplete. In each case, a free-text vector was
derived analogous to the document representation in Equation 3. Based on our previous work [16], term
weights were determined using the atc scheme in SMART. This is similar to atn used in document
representation due to at being in common. The difference is that term weights, due to the c factor, are
normalized by the following factor:

  sqrt( t_1^2 + t_2^2 + ... + t_p^2 )

where t_i is the weight of the term i in the query and p is the size of the free-text vector.
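A small sketch of this c (cosine normalization) factor applied to query term weights; the helper name and the dictionary representation are our own.

```python
import math

def atc_weights(raw_weights):
    """Cosine-normalize a dict of query term weights (the 'c' factor of atc)."""
    norm = math.sqrt(sum(w * w for w in raw_weights.values()))
    return {term: (w / norm if norm else 0.0) for term, w in raw_weights.items()}
```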
6.3 Retrieval Strategies
Given that a retrieval strategy may be defined as a combination of a document representation strategy
and a query representation strategy, a total of 17 retrieval strategies were tested. Each of the 8 document
representations involving both the free-text and the MeSH category vectors may be combined with both
the Simple-Query design and the Advanced-Query design. The Baseline 1 strategy involves only free-
text. The two parameters U and V used in the Advanced-Query design were each varied independently
across 5, 10, 15, and 20. The parameter λ was varied across 0.75, 1.0, 1.25, 1.5, 1.75, and 2.0. Thus a total
of 96 different parameter combinations were tested in experimentation involving advanced queries. For
the Simple-Query design, the only parameter involved is λ and thus 6 parameter combinations were
tested.
7 Retrieval Results
Table
13 shows the retrieval results measured by the 11-AvgP score for the 17 different retrieval strategies
tested. The table includes the scores for the Baseline 1 (B1) and Baseline 2 (B2). The Baseline
scores reported are different for the Simple-Query design and the Advanced-Query design options.
Note that although these two design options have identical document representations, they differ in
their query representations. The remaining rows represent performance for different automatic categorization
strategies. For example, row 11 reports the results when the best document categorization
strategy in Part I under the C0 framework is combined with the Advanced-Query design. This yields an 11-AvgP score of 0.5619, which offers a significant (9.3%) improvement over the Baseline 1 performance while being significantly worse (−7.9%) than the Baseline 2 performance of 0.6098. This row also indicates that the query was created using the query's initial free-text vector to retrieve the top 20 documents. Then, the MeSH categories assigned to these 20 documents were analyzed and the best V categories were used to create the query's MeSH category vector. It also indicates that the parameter λ in Equation 4 was set to 1.25 during retrieval.
It can be seen from Table 13 that the manual strategies (rows 2 and 10) are significantly superior
to the Baseline 1 strategy. This is consistent with previous results suggesting that MeSH should not be
ignored during retrieval [16]. Similarly, there is a 9.5% improvement between manual strategies when
one moves from simple to advanced queries. This result is also consistent with previous studies which
have shown that investing in retrieval feedback for query design yields good returns [16].
As regards the automatic strategies, we continue to see improvements when moving from the Simple-
Query design to the Advanced-Query design. The best Simple-Query performance is 0.5106 and that
for Advanced-Query is 0.5619 (10.0% improvement). The table indicates that automatic categorization
performs worse than manual categorization. Further analysis of the underlying vocabularies yields the
explanation for this. Specifically since automatic categorization is done based on a set of exemplar
documents, the categorization vocabularies (free-text and MeSH) for the automatic collection are limited
to the vocabularies of the training collection. However for the manual runs, the vocabularies come
from the original vocabulary set of much larger size. Thus, for example, the MeSH category vectors
assigned manually (rows 2 and 10) for the RTT collection were generated using a MeSH vocabulary of 2,968 word stems. However the MeSH category vectors generated automatically for the same collection were produced from the 586 documents in the training collection of the HERSH corpus from Part I. This MeSH vocabulary base contains only 1,604 word stems. This difference in underlying vocabularies may explain the difference in performance. In order to conduct a more meaningful comparison, we repeated our retrieval experiment involving the 17 retrieval strategies by controlling for this vocabulary difference. In other words the MeSH categories not existing in the training collection were removed from the manually generated representations for RTT documents.

Row  MeSH Approach  11-AvgP  % diff. wrt. B1  % diff. wrt. B2  U   V   λ
Simple-Query design
6    L0             0.5083   -1.2%            -9.7%*           na  na  2.0
9    D              0.5080   -1.2%            -9.8%*           na  na  1.75
Advanced-Query design
14   L0             0.5526   7.4%*            -9.4%*           20  20  1.25

Table 13: Retrieval Performance (Vocabulary Differences Not Controlled). Asterisk Denotes the Difference is Significant (p < 0.01) Using the Non-Parametric Wilcoxon Signed Rank Test for Matched Samples. "na" Denotes "not applicable".
Table 14 presents the result for this second set of retrieval runs. This table indicates that when vocabulary differences are controlled, all automatic retrieval runs are better than retrieval without MeSH. (Interestingly, manual MeSH combined with simple queries, row 2, does not yield improved results compared with no MeSH, row 1; this result is different from the results of our previous study [16] and is also explained by the vocabulary differences.) Both simple and advanced queries show better performance than the results without controlling the vocabulary differences.

Row  MeSH Approach  11-AvgP  % diff. wrt. B1  % diff. wrt. B2  U   V   λ
Simple-Query design
6    L0             0.5402   7.0%*            6.4%*            na  na  2.0
9    D              0.5456   8.0%*            7.4%*            na  na  1.75

Table 14: Retrieval Performance (Vocabulary Differences Controlled). Asterisk Denotes the Difference is Significant (p < 0.01) Using the Non-Parametric Wilcoxon Signed Rank Test for Matched Samples. "na" Denotes "not applicable".
7.1 Effect of Parameter Values on Retrieval Performance
The results in Tables 13 and 14 reveal only a piece of the picture regarding retrieval. In particular
each row presents only the best performance achieved over all the parameter combinations (U , V , and
- where relevant) tested for that row. In the Advanced-Query design, U is the number of top ranking
documents examined, V is the number of MeSH categories added to the query and - participates in
all retrieval processes where both free-text and MeSH are involved. For example the 0.5448 score of
row 4 in
Table
14 represents the best result over 6 parameter - values. Similarly the 0.5754 score of
row 12 represents the best from 96 parameter combinations tested under the "C50 Advanced Query
framework". To get a more complete picture we now present a query-by-query analysis that explores
the effect of parameter combinations on retrieval performance.
We explain our analysis strategy with reference to Table 15. This table provides the results of a
query-by-query statistical analysis of the 96 parameter combinations tested for C0 combined with the
Advanced-Query design. It compares retrieval performance achieved using the automatic strategy with
the retrieval performance achieved using the baselines B1 and B2. "+" indicates that the automatic
strategy is significantly better than the corresponding manual strategy statistically. "-" indicates that
the manual strategy is significantly better than the automatic strategy statistically. "=" indicates no
significant difference between the two statistically. The statistical analysis is based on a query-by-
query analysis using the Wilcoxon signed rank test. Straightforward counts indicate that C0 is always
significantly better than B1. Moreover it is better than B2 in 43 cases, worse in 44 cases and
equivalent in the remaining 9 cases. Thus it can be observed that within the "C0 Advanced-Query
design" framework, the automatic strategy always performs better than the baseline retrieval with no
MeSH involved. Also, across all 96 parameter combinations tried, automatic categorization is equivalent
to the Baseline 2 manual categorization.
Table 15: Statistical Analysis of C0 (Vocabulary Differences Controlled). + Implies the Automatic Method is Significantly Better Statistically. − Implies the Automatic Method is Significantly Worse. = Implies that they are Statistically Similar.

Table 16 provides the query-by-query analysis for C35. This table shows that C35 also possesses a similar balance by being better in 45 cases of the parameter settings and worse in 51 cases. We perform
the query-by-query analysis only for the situation where the vocabularies are controlled. Table 17 shows
a summary of the comparison between B2 and different automatic categorization strategies tested for
retrieval. It should also be noted that the automatic strategies are almost always significantly better
than B1 statistically.
Table 17 provides some interesting results in that it allows us to differentiate between perspectives.
It is clear that C0 and C35 are distinct from the remaining perspectives. In fact the Decision and
Document perspectives yielded poor results based on a query-by-query analysis. If we consider that
behind each query lies a distinct user, this study recommends the use of the Category Perspective
Metric over the other metrics for text retrieval for the HERSH corpus.
Table 16: Statistical Analysis of C35 with Controlled Vocabulary. + Represents that the Automatic Method is Significantly Better Statistically. − Represents that the Automatic Method is Significantly Worse. = Represents that they are Statistically Similar.
              Number of instances where strategy is
Strategy      Better than B2    Similar to B2    Worse than B2
C0            43                9                44

Table 17: Summary of Query-by-Query Analysis Comparing B2 and Different Automatic Categorization Strategies (Vocabulary Differences Controlled).
8 Conclusion
We develop a new approach to automatic text categorization. Once the categorization model is learned,
it can be used for classifying future documents. The categorization approach derives from a machine
learning paradigm known as instance-based learning and an advanced document retrieval technique
known as retrieval feedback. We demonstrate the effectiveness of our categorization approach using two
test collections, namely the HERSH corpus and the OHSUMED corpus.
Apart from developing an automatic categorization approach, we also investigate the application of
this categorization process to advanced text retrieval. Our experiments clearly indicate that the categorization
process is effective. It improves retrieval performance compared with using no categorization, and it achieves retrieval performance equivalent to that obtained with manual categorization. This is concluded on the basis of analyzing the retrieval performance for each individual test query statis-
tically. Moreover our study indicates that the Category Perspective Metric is the one most suited for
text retrieval for the HERSH corpus. Finally this paper shows that automatic categorization is effective
for advanced text retrieval.
--R
"Automated Learning of Decision Rules for Text Catego- rization"
"Automatic Query Expansion using SMART: TREC-3 Report"
"Context-sensitive Learning Methods for Text Categorization"
"Overview of the Third Text REtrieval Conference (TREC-3)"
"A Performance and Failure Analysis of SAPHIRE with a MEDLINE Test
"OHSUMED: An Interactive Retrieval Evaluation and New Large Test Collection for Research"
"Combining Classifiers in Text Categorization"
"Feature Selection and Feature Extraction for Text Categorization"
"Training Algorithms for Linear Text Classifiers"
"Classifying News Stories Using Memory Based Reasoning"
National Library of Medicine
"A Multi-Level Approach to Intelligent Information Filtering: Model, Systems, and Evaluation"
"Okapi at TREC- 3"
The Smart System - Experiments in Automatic Document Processing
The Transformation
"Query Expansion and MEDLINE"
"Optimal Document-Indexing Vocabulary for MEDLINE"
"Retrieval Feedback in MEDLINE"
"An Example-Based Mapping Method for Text Categorization and Retrieval"
"Expert Network: Effective and Efficient Learning from Human Decisions in Text Categorization and Retrieval"
"An Evaluation of Statistical Approaches to MEDLINE Indexing"
--TR
--CTR
Ali Selamat , Sigeru Omatu, Web page feature selection and classification using neural networks, Information SciencesInformatics and Computer Science: An International Journal, v.158 n.1, p.69-88, January 2004
Guiraude Lame, A categorization method for French legal documents on the Web, Proceedings of the 8th international conference on Artificial intelligence and law, p.219-220, May 2001, St. Louis, Missouri, United States
Innovating web page classification through reducing noise, Journal of Computer Science and Technology, v.17 n.1, p.9-17, January 2002
Miguel E. Ruiz , Padmini Srinivasan, Hierarchical Text Categorization Using Neural Networks, Information Retrieval, v.5 n.1, p.87-118, January 2002
Dina Goren-Bar , Tsvi Kuflik, Supporting user-subjective categorization with self-organizing maps and learning vector quantization, Journal of the American Society for Information Science and Technology, v.56 n.4, p.345-355, 15 February 2005
Hsin-Chang Yang , Chung-Hong Lee, Automatic Category Theme Identification and Hierarchy Generation for Chinese Text Categorization, Journal of Intelligent Information Systems, v.25 n.1, p.47-67, July 2005
Hsin-Chang Yang , Chung-Hong Lee, Mining text documents for thematic hierarchies using self-organizing maps, Data mining: opportunities and challenges, Idea Group Publishing, Hershey, PA,
B. Barla Cambazoglu , Evren Karaca , Tayfun Kucukyilmaz , Ata Turk , Cevdet Aykanat, Architecture of a grid-enabled Web search engine, Information Processing and Management: an International Journal, v.43 n.3, p.609-623, May, 2007
Fabrizio Sebastiani, Machine learning in automated text categorization, ACM Computing Surveys (CSUR), v.34 n.1, p.1-47, March 2002 | text categorization;instance-based learning;automatic classification;text retrieval;query processing |
628030 | Data Consistency in Intermittently Connected Distributed Systems. | AbstractMobile computing introduces a new form of distributed computation in which communication is most often intermittent, low-bandwidth, or expensive, thus providing only weak connectivity. In this paper, we present a replication scheme tailored for such environments. Bounded inconsistency is defined by allowing controlled deviation among copies located at weakly connected sites. A dual database interface is proposed that in addition to read and write operations with the usual semantics supports weak read and write operations. In contrast to the usual read and write operations that read consistent values and perform permanent updates, weak operations access only local and potentially inconsistent copies and perform updates that are only conditionally committed. Exploiting weak operations supports disconnected operation since mobile clients can employ them to continue to operate even while disconnected. The extended database interface coupled with bounded inconsistency offers a flexible mechanism for adapting replica consistency to the networking conditions by appropriately balancing the use of weak and normal operations. Adjusting the degree of divergence among copies provides additional support for adaptivity. We present transaction-oriented correctness criteria for the proposed schemes, introduce corresponding serializability-based methods, and outline protocols for their implementation. Then, some practical examples of their applicability are provided. The performance of the scheme is evaluated for a range of networking conditions and varying percentages of weak transactions by using an analytical model developed for this purpose. | Introduction
Advances in telecommunications and in the development of portable computers have provided for
wireless communications that permit users to actively participate in distributed computing even
while relocating from one support environment to another. The resulting distributed environment
is subject to restrictions imposed by the nature of the networking environment that provides
varying, intermittent and weak connectivity.
In particular, mobile clients encounter wide variations in connectivity ranging from high-
bandwidth, low latency communications through wired networks to total lack of connectivity
[7, 11, 23]. Between these two extremes, connectivity is frequently provided by wireless networks
characterized by low bandwidth, high latency or high cost. To overcome availability and latency
barriers, and reduce cost and power consumption mobile clients most often deliberately avoid use
of the network and thus operate switching between connected and disconnected modes of opera-
tion. To support such behavior, disconnected operation, that is the ability to operate disconnected,
is essential for mobile clients [11, 12, 26]. In addition to disconnected operation, operation that
exploits weak connectivity, that is connectivity provided by intermittent, low-bandwidth, or expensive
networks, is also desirable [18, 9]. Besides mobile computing, weak and intermittent
connectivity also applies to computing using portable laptops. In this paradigm, clients operate
disconnected most of the time, and connect occasionally through a wired telephone line or upon
returning back to their working environment.
Private or corporate databases will be stored at mobile as well as static hosts and mobile users
will query and update these databases over wired and wireless networks. These databases, for
reasons of reliability, performance, and cost will be distributed and replicated over many sites.
In this paper, we propose a replication schema that supports weak connectivity and disconnected
operation by balancing network availability against consistency guarantees.
In the proposed schema, data located at strongly connected sites are grouped together to form
clusters. Mutual consistency is required for copies located at the same cluster while degrees of
inconsistency are tolerated for copies at different clusters. The interface offered by the database
management system is enhanced with operations providing weaker consistency guarantees. Such
operations allow access to locally, i,e., in a cluster, available data. Weak reads access
bounded inconsistent copies and weak writes make conditional updates. The usual operations,
here called strict, are also supported. They offer access to consistent data and perform permanent
updates.
The schema supports disconnected operation since users can operate even when disconnected
by using only weak operations. In cases of weak connectivity, a balanced use of both weak
and strict operations provides for better bandwidth utilization, latency and cost. In cases of
strong connectivity, using only strict operations makes the schema reduce to the usual one-copy
semantics. Additional support for adaptability is possible by tuning the degree of inconsistency
among copies based on the networking conditions.
In a sense, weak operations offer a form of application-aware adaptation [19]. Application-aware
adaptation characterizes the design space between two extremes ways of providing adapt-
ability. At one extreme, adaptivity is entirely the responsibility of the application, that is there
is no system support or any standard way of providing adaptivity. At the other extreme, adaptivity
is subsumed by the system, here the database management system. Since, in general, the
system is not aware of the application semantics, it cannot provide a single adequate form of
adaptation. Weak and strict operations lie in an intermediate point between these two extremes,
serving as middleware between a database system and an application. They are tools offered
by the database system to applications. The application can at its discretion use weak or strict
transactions based on its semantics. The implementation, consistency control, and the underlying
transactional support is the job of the database management system.
The remainder of this paper is organized as follows. In Section 2, we introduce the replication
model along with an outline of a possible implementation that is based on distinguishing data
copies into core and quasi. In Sections 3 and 4, we define correctness criteria, prove corresponding
serializability-based theorems, and present protocols for maintaining weak consistency under the
concurrent execution of weak and strict transactions and for reconciling divergent copies, respec-
tively. Examples of how the schema can be used are outlined in Section 5. In Section 6, we develop
an analytical model to evaluate the performance of the schema and the interplay among its various
parameters. The model is used to demonstrate how the percentage of weak transactions can be
effectively tuned to attain the desired performance. The performance parameters considered are
the system throughput, the number of messages, and the response time. The study is performed
for a range of networking conditions, that is for different values of bandwidth and for varying
disconnection intervals. In Section 7, we provide an estimation of the reconciliation cost. This
estimation can be used for instance to determine an appropriate frequency for the reconciliation
events. In Section 8, we compare our work with related research and conclude in Section 9 by
summarizing.
2 The Consistency Model
To support autonomous operation during disconnections and improve performance, data are distributed
over mobile and stationary sites. Transactions are initiated at both mobile and stationary
hosts.
2.1 Data Correctness
As usually, a database state is defined as a mapping of every data item to a value of its domain.
Data are related by a number of restrictions called integrity constraints that express relationships
among their values. A database state is consistent if the integrity constraints are satisfied [20].
Consistency maintenance in traditional distributed environments relies on the assumption that all
sites are normally connected. This assumption, however, is no longer valid in mobile computing,
since the distributed sites are only intermittently connected. Similar considerations also hold for widely distributed systems and for computing using portable laptops. Thus, instead of requiring maintenance of all integrity constraints we define units of consistency, called clusters.

d is the:
  - maximum number of updates per data item not reflected at all copies
  - range of acceptable values a data item can take
  - maximum number of transactions that operate on inconsistent data
  - maximum number of data items that have divergent copies
  - maximum number of divergent copies per data item

Table 1: Divergence among copies.
Data items are partitioned into clusters Cl i based on their location, so that data in strongly
connected sites belong to the same cluster. In particular, data located at the same, neighbor,
or strongly connected sites belong to the same cluster, while data residing at remote or weakly
connected sites belong to separate clusters. As an example, let each mobile host be a cluster by
each own and all fixed hosts belong to the same cluster. Other configurations are also possible.
For instance, in a wide area distributed environment all hosts in nearby locations constitute one
cluster. We relax consistency as follows:
Definition 1 A cluster state is consistent iff all intracluster integrity constraints hold. A database
state is bounded-consistent iff all cluster states are consistent and all intercluster integrity constraints
are bounded-consistent.
Bounding inconsistency for an integrity constraint depends on the type of the constraint.
In this paper, we focus on replication constraints, where all copies x i of the same data item x
have the same value. For replicated data, bounded inconsistency means mutual consistency of
all copies in the same cluster and bounded divergence [27, 1] among copies located at different
clusters. Bounded divergence is quantified by a positive integer d, called degree of divergence;
possible definitions of d are listed in Table 1. A replication constraint for x is then called d-
consistent. Data copies are occasionally reconciled to obtain a mutual consistent value. The
degree of divergence can be tuned based on the strength of connection among clusters, by keeping
the divergence small in instances of high bandwidth availability and allowing for greater deviation
in instances of low bandwidth availability.
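Purely as an illustration of such tuning, the degree of divergence could be derived from a measured bandwidth; the thresholds and returned values below are arbitrary assumptions of ours, not part of the proposed schema.

```python
def degree_of_divergence(bandwidth_kbps):
    """Map connection quality to an allowed degree of divergence d (illustrative thresholds)."""
    if bandwidth_kbps == 0:       # disconnected: tolerate large divergence
        return 1000
    if bandwidth_kbps < 64:       # weak (e.g., wireless) connectivity
        return 100
    if bandwidth_kbps < 1000:     # moderate connectivity
        return 10
    return 0                      # strong connectivity: keep copies tightly synchronized
```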
2.2 The Extended Database Operation Interface
To increase availability and reduce network usage we allow direct access to locally, e.g., in a
cluster, available d-consistent copies by introducing weak read and weak write operations. We call
the standard read and write operations strict read and strict write operations. In particular, a
read operation on a data item x (WR[x]) reads a locally available value of x. A weak write
operation (WW [x]) writes locally available copies and becomes permanent after reconciliation. A
strict read operation (SR[x]) reads the value written by the last strict write operation. Finally, a
strict write operation (SW [x]) writes one or more copies of x and is permanent upon the end of
the issuing transaction.
Definition 2 A transaction (T) is a partial order (OP, <), where OP is the set of weak read (WR) or strict read (SR), weak write (WW) or strict write (SW), abort (A), and commit (C) operations executed by the transaction, and < represents their execution order. The partial order must specify the order of conflicting data operations and contains exactly one abort or commit operation which is the last in the order. Two weak (strict) data operations conflict if they access the same copy of a data item and at least one of them is a weak (strict) write operation.
Two types of transactions are supported, weak and strict. Upon submission, each user transaction is decomposed into a number of weak and strict subtransactions according to its
semantics and the degree of consistency required by the application. A weak transaction (WT )
is a transaction where OP does not include strict operations. A strict transaction (ST ) is a
transaction where OP does not include weak operations. Weak transactions access data copies
that belong to the same cluster and thus are local at that cluster. There are two commit events
associated with each weak transaction, a local commit in its associated cluster and an implicit
global commit at reconciliation. Local commitment is expressed by an explicit commit operation,
C. Updates made by locally committed weak transactions are visible only to weak transactions in
the same cluster. These updates become permanent and visible to strict transactions only after
reconciliation when local transactions become globally committed.
2.3 Realizing the Extended Database Interface
We divide copies into core and quasi. Core copies are copies that have up-to-date and permanent
values, while quasi copies are copies that have potentially obsolete values that are only conditionally
committed. All quasi copies at a cluster are mutually consistent and bounded-inconsistent
with respect to core copies. Core copies are mutually consistent. An efficient distribution of core
and quasi copies may be accomplished using appropriate algorithms for replica placement such
as those proposed in [10]. To process the operations of a transaction, the database management
system translates operations on data items into operations on copies of these data items. We
formalize this procedure by a translation function h.
Function h maps each read operation into a number of read operations on copies of x and
returns one value (e.g., the most up to date value) as the value read by the read operation. That
is, we assume that h when applied to a read operation returns one value rather than a set of
values. In particular, h maps each SR[x] operation into a number of read operations on core
copies of x and returns one from these values as the value read by the operation. Depending
on how each weak read operation is translated, we define two types of translation functions: a
best-effort translation function that maps each WR[x] operation into a number of read operations
on locally available core or quasi copies of x and returns the most up-to-date such value, and
a conservative translation function that maps each weak read into a number of read operations only on locally available quasi copies and returns the most up-to-date such value. Based on the time of propagation of updates of core copies to quasi copies, we define two types of translation functions: an eventual translation function that maps a SW[x] into writes of only core copies and an immediate translation function that updates as well the quasi copies at the corresponding cluster. For an immediate h, conservative and best-effort have the same result. Each WW[x] operation is translated by h into a number of write operations of local quasi copies of x. Table 2 summarizes the semantics of the operations.

Weak Read (WR)      Variations:
                      Conservative: Reads only local quasi copies.
                      Best effort:  Reads local copies (core and quasi); returns as the value read the most recent value.
Weak Write (WW)     Writes local quasi copies.
Strict Read (SR)    Reads core copies; returns as the value read the most recent value.
Strict Write (SW)   Variations:
                      Eventual:  Writes only core copies.
                      Immediate: Writes core and quasi copies at the corresponding clusters.

Table 2: Variations of the translation function.
How many and which core or quasi copies are actually read or written when a database operation
is issued on a data item depends on the coherency algorithm used, e.g., quorum consensus or ROWA [3]. Without loss of generality, we assume that there is only one quasi copy per cluster.
This assumption can be easily lifted but with significant complication in notation. Since all quasi
copies in a cluster have the same value, this single copy can be regarded as their representative.
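The variations of h summarized in Table 2 can be sketched as follows. The class, the single version counter, and the choice of "most recent by version" are simplifying assumptions made only for illustration.

```python
class ReplicatedItem:
    """One logical data item with a single core value and one quasi copy per cluster."""

    def __init__(self, value, clusters):
        self.core = (value, 0)                        # (value, version)
        self.quasi = {c: (value, 0) for c in clusters}

    # --- weak operations (local to a cluster) --------------------------------
    def weak_read(self, cluster, best_effort=False):
        if best_effort:
            # best effort: newest of the local quasi copy and the core copy
            return max(self.quasi[cluster], self.core, key=lambda v: v[1])[0]
        return self.quasi[cluster][0]                 # conservative: quasi copy only

    def weak_write(self, cluster, value):
        version = self.quasi[cluster][1] + 1
        self.quasi[cluster] = (value, version)        # tentative until reconciliation

    # --- strict operations ----------------------------------------------------
    def strict_read(self):
        return self.core[0]

    def strict_write(self, value, immediate_cluster=None):
        version = self.core[1] + 1
        self.core = (value, version)
        if immediate_cluster is not None:             # immediate h: refresh the quasi copy too
            self.quasi[immediate_cluster] = (value, version)
```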
Immediate translation and consistency. To handle integrity constraints besides replication,
in the case of an immediate translation function h, h should be defined such that the integrity
constraints between quasi copies in the same cluster are not violated. The following example is illustrative.

Example 1 For simplicity consider only one cluster. Assume two data items x and y related by an integrity constraint, where the subscripts c and q denote the core and quasi copies of each item, and start from a consistent database state in which the quasi copy of y differs from its core copy (for instance because of an earlier weak write on y that has not yet been reconciled). Consider a transaction program that assigns a new value to x and then, depending on the value it reads for y, possibly performs a further update. Suppose the program is executed as the strict transaction SW(x) SR(y) C with an immediate h. The strict operations see and preserve consistency with respect to the core copies: the new value of x is installed into both x_c and x_q, and the condition is evaluated against y_c. Among the quasi copies, however, the new x_q is now combined with the divergent y_q, and the integrity constraint between the quasi copies of x and y is violated. □
The problem arises from the fact that quasi copies are updated to the current value of the
core copy without taking into consideration integrity constraints among them. Similar problems
occur when refreshing individual copies of a cache [1]. Possible solutions include: (1) Each time
a quasi copy is updated as a result of a strict write, the quasi copies of all data related to it by
some integrity constraint are also updated either after or prior to the execution of the transaction.
This update is done following a reconciliation procedure for merging core and quasi copies (as
in Section 4). In the above example, the core and quasi copies of x and y should have been reconciled prior to the execution of the transaction, producing a mutually consistent database state. Then, the execution of the transaction would result in a database state that is consistent. (2) If a strict transaction
updates a quasi copy at a cluster, its read operations are also mapped into reads of quasi copies
at this cluster. In cases of incompatibilities, a reconciliation procedure is again initiated having a
similar result as above. (3) Updating quasi copies is postponed by deferring any updates of quasi
copies that result from writes of the corresponding core copies. A log of weak writes resulting
from strict writes is kept. In this scenario, the execution of the transaction results in a database state that is consistent. The first two approaches force
an immediate reconciliation among copies, while the third approach defers this reconciliation and
is preferable in cases of low connectivity among clusters.
3 Weak Connectivity Operation
In this section, we provide serializability-based criteria, graph-based tests and a locking protocol
for correct executions that exploit weak connectivity. We use the terms read and write to refer to
the operations on data copies. When is an operation, the subscript j denotes that o belongs to
transaction j, while the subscript on a data copy identifies the cluster. A complete intracluster
schedule, IAS, is an observation of an interleaved execution of transactions in a given cluster
configuration, that includes (locally) committed weak transactions and (globally) committed strict
transactions. Formally,
Definition 3 Let T = {T_1, T_2, ..., T_n} be a set of transactions. A (complete) intracluster schedule, IAS, over T is a pair (OP, <_a) in which <_a is a partial ordering relation such that
1. OP = ∪_i h(OP_i), for a given translation function h.
2. For each T_i and all operations op_k, op_l in T_i, if op_k <_i op_l, then every operation in h(op_k) is related by <_a to every operation in h(op_l).
3. All pairs of conflicting operations are related by <_a, where two operations conflict if they access the same copy and one of them is a write operation.
4. For all read operations read_j[x_i] there is at least one write_k[x_i] such that write_k[x_i] <_a read_j[x_i].
5. If SW_j[x] <_a SR_j[x] and h(SR_j[x]) reads copy x_i, then h(SW_j[x]) writes x_i.
6. If h(SW_j[x]) writes a copy x_i at cluster Cl_i, then for every other data item y written by T_j for which there is a copy y_i ∈ Cl_i, h(SW_j[y]) writes y_i, where x_i is a quasi copy when h is conservative and any, quasi or core, copy when h is best effort.
Condition 1 states that the transaction managers translate each operation on a data item into
appropriate operations on data copies. Condition 2 states that the intracluster schedule preserves
the ordering stipulated by each transaction and Condition 3 that it also records the execution
order of conflicting operations. Condition 4 states that a transaction cannot read a copy unless
it has been previously initialized. Condition 5 states that if a transaction writes a data item x
before it reads x, then it must write to the same copy of x that it subsequently reads. Finally,
Condition 6 indicates that for a strict transaction, if a write is translated to a write on a data
copy at a cluster Cl i then all other writes of this transaction that may be possibly read by a weak
transaction must also write the corresponding copies at cluster Cl i . This condition is necessary
for ensuring that weak transactions do not see partial results of a strict transaction.
A read operation on a data item x reads-x-from a transaction T i if it reads (i.e., returns as the
value read) a copy of x written by T i and no other transaction writes this copy in between. We
say that a transaction T i has the same reads-from relationship in schedule S 1 as in schedule S 2 ,
if for any data item x, if T i reads-x-from T j in S 1 then it reads-x-from T j in S 2 . Given a schedule
S, the projection of S on strict transactions is the schedule obtained from S by deleting all weak
operations, and the projection of S on a cluster Cl k is the schedule obtained from S by deleting
all operations that do not access Cl k . A schedule is one-copy serializable if it is (view) equivalent
to a serial one-copy schedule [3].
3.1 Correctness Criterion
A correct concurrent execution of weak and strict transactions must maintain d-consistency among
clusters and strict consistency inside each cluster.
Definition 4 (IAS Weak Correctness) An intracluster schedule S IAS is weakly correct iff
1. all transactions have a consistent view, i.e., all constraints that can be evaluated using the
data read are valid,
2. there is a one copy serial schedule S such that (a) it has the same set of strict transactions
and operations, (b) strict transactions have the same reads-from relationship as in S IAS ,
and (c) the set of final writes on core copies is the same as in S IAS .
3. it maintains the d-degree relationship among copies.
Next, we discuss how to enforce the first two conditions. Protocols for bounding the divergence
among copies are outlined at the end of this section. The following theorem, defines correctness
in terms of equivalence to serial executions.
Theorem 1 Given that d-consistency is maintained, an intracluster schedule S is weakly correct
if its projection on strict transactions is one-copy serializable and each of its projections on a
cluster is conflict-equivalent to a serial schedule.
Proof: The first condition of the definition of correctness is guaranteed for strict transactions
from the requirement of one-copy serializability, since strict transactions get the same view as in
a one-copy serial schedule and read only core copies. For weak transactions at a cluster, the
condition is provided from the requirement of serializability of the projection of the schedule on
this cluster given that the projection of each transaction at the cluster maintains consistency
when executed alone. Thus it suffices to prove that such projections maintain consistency. This
trivially holds for weak transactions since they are local at each cluster. The condition also holds
for strict transactions, since if a strict transaction maintains d-consistency, then its projection on
any cluster also maintains d-consistency, as a consequence of condition (6) of the definition of an
IAS schedule. Finally, one copy serializability of the projection of strict transactions suffices to
guarantee 2(b) and 2(c) since strict transactions read only core copies and weak transactions do
not write core copies respectively. □
Note that intercluster constraints other than replication constraints among quasi copies of data
items at different sites may be violated. Weak transactions however are unaffected by such
violations, since they read only local data. Although, the above correctness criterion suffices to
ensure that each weak transaction gets a consistent view, it does not suffice to ensure that weak
transactions at different clusters get the same view, even in the absence of intercluster constraints.
The following example is illustrative.
Example 2 Assume two clusters Cl_1 and Cl_2 that have both quasi and core copies of the corresponding data items, and two strict transactions ST_1 and ST_2. In addition, at cluster Cl_1 we have one weak transaction, and at cluster Cl_2 additional weak transactions. For simplicity, we do not show the transaction that initializes all data copies. We consider an immediate and best effort h. These transactions can be interleaved into a weakly correct schedule S: the projection of S on strict transactions is equivalent to a one-copy serial schedule, and the projections of S on Cl_1 and on Cl_2 are each conflict-equivalent to a serial schedule. The serialization orders induced at the two clusters are, however, incompatible with each other, so the weak transactions at Cl_1 and at Cl_2 observe the effects of the strict transactions in different orders, and there is no serial schedule equivalent to S as a whole. □
Thus, weak correctness does not guarantee that there is a serial schedule equivalent to the
intracluster schedule as a whole, that is including all weak and strict transactions. The following
is a stronger correctness criterion that ensures that weak transactions get the same consistent
view. Obviously, strong correctness implies weak correctness.
Definition 5 (IAS Strong Correctness) An intracluster schedule S is strongly correct iff there
is a serial schedule S S such that
1. S S is conflict-equivalent with S, and
2. In S S , (a) strict transactions have the same reads-from relationship, and (b) the set of final
writes on core copies is the same as in a one copy serial schedule.
Lemma 1 An intracluster schedule S is strongly correct if it is conflict-equivalent to a serial
schedule S S and its projection on strict transactions is equivalent to a one-copy serial schedule
S 1C such that the order of transactions in S S is consistent with the order of transactions in S 1C .
Proof: We need to prove that in S 1C strict transactions have the same read-from and final
writes as in S_S, which is straightforward since strict transactions only read data produced by strict transactions and core copies are written only by strict transactions. □
Since weak transactions do not directly conflict with weak transactions at other clusters, the
following is an equivalent statement of the above lemma,
Corollary 1 An intracluster schedule S is strongly correct if its projection on strict transactions
is equivalent to a one-copy serial schedule S 1C , and each of its projections on a cluster Cl i is
conflict-equivalent to a serial schedule S S i such that the order of transactions in S S i is consistent
with the order of transactions in S 1C .
If weak IAS correctness is used as the correctness criterion, then the transaction managers
at each cluster must only synchronize projections on that cluster. Global control is required
only for synchronizing strict transactions. Therefore, no control messages are necessary between
transaction managers at different clusters for synchronizing weak transactions. The proposed
schema is flexible, in that any coherency control method that guarantees one-copy serializability
(e.g., quorum consensus, primary copy) can be used for synchronizing core copies. The schema
reduces to one-copy serializability when only strict transactions are used.
3.2 The Serialization Graph
To determine whether an IAS schedule is correct, a modified serialization graph is used, that we
call the intracluster serialization graph (IASG) of the IAS schedule. To construct the IASG, a
replicated data serialization graph (SG) is built to represent conflicts between strict transactions.
An SG [3] is a serialization graph augmented with additional edges to take into account the fact
that operations on different copies of the same data item may also cause conflicts. Acyclicity of
the SG implies one-copy serializability of the corresponding schedule. Then, the SG is augmented
with additional edges to represent conflicts between weak transactions in the same cluster and
conflicts between weak and strict transactions. We add an edge between two transactions whenever one of them contains an operation that conflicts with, and precedes, an operation of the other. An edge is called a dependency edge if it represents the fact that a
transaction reads a value produced by another transaction, and a precedence edge if it represents
the fact that a transaction reads a value that was later changed by another transaction.
It is easy to see that in the IASG there are no edges between weak transactions at different
clusters, since weak transactions at different clusters read different copies of a data item. In
addition:
Lemma 2 Let WT_i be a weak transaction at cluster Cl_i and ST a strict transaction. The IASG graph induced by an IAS may include only the following edges between them:
• a dependency edge from ST to WT_i
• a precedence edge from WT_i to ST
Proof: Straightforward from the conflict relation, since the only conflicts between weak and strict transactions are due to strict writes and weak reads of the same copy of a data item. □
Theorem 2 Let S IAS be an intracluster schedule. If S IAS has an acyclic IASG then S IAS is
strongly correct.
Proof: When a graph is acyclic then each of its subgraphs is acyclic thus SG is acyclic. Acyclicity
of the SG implies one-copy serializability of the strict transactions since strict transactions read
only values written by strict transactions. Let T_1, T_2, ..., T_n be all transactions in S_IAS; these are the nodes of the IASG. Since the IASG is acyclic it can be topologically sorted. Let T_{i_1}, T_{i_2}, ..., T_{i_n} be a topological sort of the nodes of the IASG; then, by a straightforward application of the serializability theorem [3], S_IAS is conflict-equivalent to the serial schedule S_S = T_{i_1}, T_{i_2}, ..., T_{i_n}. This order is consistent with the partial order induced by a topological sorting of the SG; let S_1C be the corresponding serial schedule. Thus the order of transactions in S_S is consistent with the order of transactions in S_1C. □
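The acyclicity test that underlies Theorem 2 is an ordinary cycle check on a directed graph; the following sketch (Kahn's algorithm) assumes the IASG nodes and edges have already been extracted from the schedule.

```python
from collections import defaultdict, deque

def is_acyclic(nodes, edges):
    """Return True iff the directed graph (e.g., an IASG) contains no cycle."""
    indegree = {n: 0 for n in nodes}
    successors = defaultdict(list)
    for u, v in edges:
        successors[u].append(v)
        indegree[v] += 1
    queue = deque(n for n in nodes if indegree[n] == 0)
    visited = 0
    while queue:
        u = queue.popleft()
        visited += 1
        for v in successors[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    return visited == len(nodes)   # all nodes sorted <=> no cycle
```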
3.3 Protocols
Serializability. We distinguish between coherency and concurrency control protocols. Coherency
control ensures that all copies of a data item have the same value, here we must maintain this
property globally for core and locally for quasi copies. Concurrency control ensures the maintenance of the other integrity constraints, here the intracluster constraints. For coherency control,
we assume a generic quorum-based schema [3]. Each strict transaction reads q r core copies and
writes q w core copies per strict read and write operation. The values of q r , and q w for a data item
x are such that q_r + q_w > n_d, where n_d is the number of available core copies of x.

Figure 1: Lock compatibility matrices. An X entry indicates that the lock modes are compatible. (a) Eventual and conservative h. (b) Eventual and best effort h. (c) Immediate and conservative h. (d) Immediate and best effort h.

For concurrency control we use strict two phase locking where each transaction releases its locks upon commitment
[3]. Weak transactions release their locks upon local commitment and strict transactions upon
global commitment. There are four lock modes (WR, WW , SR, SW ) corresponding to the four
data operations. Before the execution of each operation, the corresponding lock is requested. A
lock is granted only if the data copy is not locked in an incompatible lock mode. Figure 1 depicts
the compatibility of locks for various types of translation functions and is presented to demonstrate
the interference between operations on items. Differences in compatibility stem from the fact that the operations access different kinds of copies. The basic overhead imposed by these protocols
on the performance of weak transactions is caused by other weak transactions at the same cluster.
This overhead is small since weak transactions do not access the slow network. Strict transactions
block a weak transaction only when they access the same quasi copies. This interference is limited
and can be controlled, e.g., by letting strict transactions access only core copies and weak transactions only quasi copies in cases of disconnections.
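A generic lock-manager sketch of this protocol is given below. It is parameterized by a compatibility matrix; since the entries of Figure 1 are not reproduced here, the example matrix is only a plausible placeholder for case (a) (eventual, conservative h) and should not be read as the paper's actual matrices.

```python
from collections import defaultdict

# Hypothetical compatibility matrix (NOT the matrices of Figure 1): under an eventual,
# conservative h, weak operations touch only quasi copies and strict operations only core
# copies, so conflicts arise only within each group.
COMPATIBLE = {
    ("WR", "WR"): True,  ("WR", "WW"): False, ("WR", "SR"): True,  ("WR", "SW"): True,
    ("WW", "WR"): False, ("WW", "WW"): False, ("WW", "SR"): True,  ("WW", "SW"): True,
    ("SR", "WR"): True,  ("SR", "WW"): True,  ("SR", "SR"): True,  ("SR", "SW"): False,
    ("SW", "WR"): True,  ("SW", "WW"): True,  ("SW", "SR"): False, ("SW", "SW"): False,
}

class LockManager:
    def __init__(self, compatible=COMPATIBLE):
        self.compatible = compatible
        self.held = defaultdict(list)          # copy id -> list of (txn, mode)

    def request(self, txn, copy_id, mode):
        """Grant the lock iff the requested mode is compatible with every lock already held."""
        for holder, held_mode in self.held[copy_id]:
            if holder != txn and not self.compatible[(held_mode, mode)]:
                return False                   # caller blocks or retries
        self.held[copy_id].append((txn, mode))
        return True

    def release_all(self, txn):
        """Strict 2PL: release every lock of txn at its (local or global) commit."""
        for copy_id in list(self.held):
            self.held[copy_id] = [(t, m) for t, m in self.held[copy_id] if t != txn]
```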
Bounded inconsistency among copies. At each cluster, the degree for each data item expresses
the divergence of the local quasi copy from the value of the core copy. This difference may
result either from globally uncommitted weak writes or from updates of core copies that have not
yet been reported at the cluster. As a consequence, the degree may be bounded either by limiting
the number of weak writes pending commitment or by controlling the h function. In Table 3, we
outline ways of maintaining d-consistency for different ways of defining d.
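As one illustrative way of limiting the number of weak writes pending commitment, a per-cluster, per-item counter can gate weak writes; the class below is a sketch under our own assumptions rather than a protocol taken from the paper.

```python
class DivergenceBound:
    """Allow at most d weak writes per data item to be pending (globally uncommitted)."""

    def __init__(self, d):
        self.d = d
        self.pending = {}                      # (cluster, item) -> number of pending weak writes

    def try_weak_write(self, cluster, item):
        key = (cluster, item)
        if self.pending.get(key, 0) >= self.d:
            return False                       # force reconciliation (or a strict write) first
        self.pending[key] = self.pending.get(key, 0) + 1
        return True

    def reconciled(self, cluster, item):
        """Called after reconciliation makes the pending weak writes permanent."""
        self.pending[(cluster, item)] = 0
```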
4 A Consistency Restoration Schema
After the execution of a number of weak and strict transactions, all core copies of a data item
have the same value, while its quasi copies may have as many different values as the number of
clusters. In this section, we first provide criteria for characterizing the correctness of protocols for
reconciling the different values of copies and then describe such a protocol. The exact point of
reconciliation depends on the application requirements and the distributed system characteristics.
Appropriately bound the number of weak transactions at
the distribution of weak transactions at each cluster must be
that operate on inconsistent data
can take
copies of each data item
Bound the number of clusters that can have divergent quasi
per data item
the maximum number of data items that
so that a strict write modifies the quasi copies at each
disconnected clusters since there is no way of notifying them
cluster at least after d updates. This canot be ensured for
for remote updates.
Bound the number of data items that can have quasi copies.
item not reflected at all copies
the maximum number of updates per data
adjusted.
Allow only weak writes with values inside the acceptable range
When d is defined as: Applicable Method
each cluster. In the case of a dynamic cluster reconfiguration,
the maximum number of transactions
a range of acceptable values a data item
have divergent copies
the maximum number of divergent copies
Table
3: Maintaining bounded inconsistency.
Reconciliation may be forced to keep the inconsistency inside the required limits. Alternatively, it
may be initiated periodically or on demand upon the occurrence of specific events. For example,
values may be reconciled when the network connection is reestablished, for instance when a
palmtop is plugged-back to the stationary network or a mobile host enters a cell that provides
good connectivity.
4.1 Correctness Criterion
Approaches to reconciling copies vary from purely syntactic to purely semantic ones [5]. We adopt
a purely syntactic application-independent approach. Our correctness criterion is based on the
following principle: if a core copy is written, and a strict transaction has read it, the value of the
core copy is the value selected. Otherwise, the value of any quasi copy may be chosen. Some
transactions that wrote a value that was not selected may need to be undone/compensated
or redone. This may lead to roll-back of other weak transactions that have read values written
by this transaction. However, transaction roll-back is limited and never crosses the boundaries of
a cluster.
A (complete) intercluster schedule, IES, models execution after reconciliation, where global
transactions should become aware of local writes, i.e., local transactions become globally committed.
In the schedule, we must add additional conflicts between weak and strict operations.
Definition 6 (intercluster schedule) An intercluster schedule (IES) S_IES based on an intracluster schedule S_IAS = (OP, <_a) is a pair (OP′, <′) such that:
1. OP′ = OP,
2. for any op_i and op_j ∈ OP′, if op_i <_a op_j then op_i <′ op_j,
and in addition:
3. for each pair of weak write WW_i[x] and strict read SR_j[x] operations, either WW_i[x] <′ SR_j[x] or SR_j[x] <′ WW_i[x],
4. for each pair of weak write WW_i[x] and strict write SW_j[x] operations, either WW_i[x] <′ SW_j[x] or SW_j[x] <′ WW_i[x].
We extend the reads-from relationship for strict transactions as follows. A strict read operation
on a data item x reads-x-from a transaction T_i in an IES schedule if it reads a copy of x
and T i has written this copy or a quasi copy of x and no other transaction wrote this or the quasi
copy in between.
We accept as many weak writes as possible without violating the one-copy serializability
of strict transactions. Specifically, a weak write is accepted only when it does not violate the
extended reads-from relationship for strict transactions.
Definition 7 (IES Correctness) An intercluster schedule is correct iff
1. it is based on a correct IAS schedule S_IAS, and
2. the reads-from relationship for strict transactions is the same as their reads-from relationship
in S_IAS.
4.2 The Serialization Graph
To determine correct IES schedules we define a modified serialization graph that we call the
intercluster serialization graph (IESG). To construct the IESG, we augment the serialization
graph IASG of the underlying intracluster schedule. To force conflicts among weak and strict
transactions that read different copies of the same data item, we induce
• first, a write order, as follows: if T_i weak writes and T_k strict writes any copy of an item x,
then either T_i → T_k or T_k → T_i;
• then, a strict read order, as follows: if a strict transaction ST_j reads-x-from ST_i in S_IAS and
a weak transaction WT follows ST_i, then we add an edge ST_j → WT.
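This construction can be summarized by the following sketch (Python; the data structures are illustrative and not the paper's). Note that the write-order rule only requires that some order exist between a weak and a strict writer of the same item, which in general yields the polygraphs discussed in Section 4.3; for simplicity, the sketch commits to an arbitrary direction whenever the IASG leaves such a pair unordered, and it reads "follows" transitively.

    # Sketch of the IESG construction (illustrative data structures, not the paper's).
    from collections import defaultdict

    def reachable(edges, src):
        """Transactions reachable from src, so that 'WT follows ST_i' is read transitively."""
        adj = defaultdict(set)
        for a, b in edges:
            adj[a].add(b)
        seen, stack = set(), [src]
        while stack:
            for b in adj[stack.pop()]:
                if b not in seen:
                    seen.add(b)
                    stack.append(b)
        return seen

    def build_iesg(iasg_edges, weak_txns, weak_writers, strict_writers, strict_reads_from):
        """iasg_edges: set of (Ti, Tk) edges of the IASG; weak_txns: set of weak transactions;
        weak_writers/strict_writers map an item x to the weak/strict transactions that write
        any copy of x; strict_reads_from: set of (ST_j, x, ST_i) triples, meaning that ST_j
        reads-x-from ST_i in S_IAS."""
        edges = set(iasg_edges)
        # Rule 1: induce a write order between weak and strict writers of the same item.
        for x, wts in weak_writers.items():
            for wt in wts:
                for st in strict_writers.get(x, ()):
                    if (wt, st) not in edges and (st, wt) not in edges:
                        edges.add((wt, st))     # arbitrary direction chosen by this sketch
        # Rule 2: if ST_j reads-x-from ST_i and a weak transaction WT follows ST_i, add ST_j -> WT.
        for (st_j, x, st_i) in strict_reads_from:
            for wt in reachable(edges, st_i) & weak_txns:
                edges.add((st_j, wt))
        return edges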
Theorem 3 Let S_IES be an IES schedule based on an IAS schedule S_IAS. If S_IES has an acyclic
IESG, then S_IES is correct.
Proof: Clearly, if the IESG is acyclic, the corresponding graph for the IAS is acyclic
(since to get the IESG we only add edges to the IASG). We will show that if the graph is acyclic,
then the reads-from relation for strict transactions in the intercluster schedule S_IES is the same
as in the underlying intracluster schedule S_IAS. Assume that ST_j reads-x-from ST_i in S_IAS.
Assume, for the purposes of contradiction, that in S_IES, ST_j reads-x-from a weak
transaction WT. Then WT writes x in S_IES, and since ST_i also writes x, either (a) ST_i → WT
or (b) WT → ST_i. In case (a), from the definition of the IESG, we get ST_j → WT, which is
a contradiction, since ST_j reads-x-from WT. In case (b), WT → ST_i, that is, WT precedes ST_i,
which precedes ST_j, which again contradicts the assumption that ST_j reads-x-from WT. □
Until there are no cycles in the IESG:
    roll back a weak transaction WT in the cycle
    undo all exact transactions related with a dependency edge to WT
Per data item:
    if the final write is on a core copy:
        propagate this value to all quasi copies
    else:
        choose a value of a quasi copy
        propagate this value to all core and quasi copies

Table 4: The reconciliation steps.
4.3 Protocol
To get a correct schedule we need to break potential cycles in the IES graph. Since to construct
the IESG we start from an acyclic graph and add edges between a weak and a strict transaction,
there is always at least one weak transaction in each cycle. We rollback such weak transactions.
Undoing a transaction may result in cascading aborts of transactions that have read the values
written by the transaction; that is, transactions that are related with a dependency edge to the
transaction undone. Since weak transactions write only quasi copies in a cluster, and since only
transactions in the same cluster can read these quasi copies we get the following lemma:
Lemma: Only weak transactions in the same cluster read values written by weak transactions
in that cluster.
The above lemma ensures that only weak transactions in the same cluster are affected when
a weak transaction is aborted to resolve conflicts in an intercluster schedule. In practice, fewer
transactions ever need to be aborted. In particular, we need to abort only weak transactions whose
output depends on the exact values of the data items they read. We call these transactions exact.
Most weak transactions are not exact, since by definition, weak transactions are transactions that
read local d-consistent data. Thus, even if the value they read was produced by a transaction that
was later aborted, this value was inside an acceptable range of inconsistency and this is probably
sufficient to guarantee their correctness.
Detecting cycles in the IESG can be hard. The difficulties arise from the fact that between
transactions that wrote a data item an edge can have either direction, thus resulting in polygraphs
[20]. Polynomial tests for acyclicity are possible if we make the assumption that transactions
read a data item before writing it. Then, to get the IES graph from the IAS graph we need only:
• induce a read order, as follows: if a strict transaction ST reads an item that was written by
a weak transaction WT, we add a precedence edge ST → WT.
Table 4 outlines the reconciliation steps.
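The cycle-breaking and value-propagation steps of Table 4 can be sketched as follows (Python). The graph representation, the choice of which weak transaction on a cycle to roll back, and the restriction of cascading aborts to direct dependents are simplifications of this sketch rather than requirements of the protocol.

    def find_cycle(nodes, edges):
        """Return the nodes of some cycle in the directed graph, or None if it is acyclic."""
        adj = {n: [] for n in nodes}
        for a, b in edges:
            adj[a].append(b)
        color, parent = {n: 0 for n in nodes}, {}   # 0 = unvisited, 1 = on the DFS path, 2 = done
        for root in nodes:
            if color[root]:
                continue
            color[root] = 1
            stack = [(root, iter(adj[root]))]
            while stack:
                node, it = stack[-1]
                nxt = next(it, None)
                if nxt is None:
                    color[node] = 2
                    stack.pop()
                elif color[nxt] == 0:
                    color[nxt], parent[nxt] = 1, node
                    stack.append((nxt, iter(adj[nxt])))
                elif color[nxt] == 1:                # back edge: a cycle has been found
                    cycle, cur = [nxt], node
                    while cur != nxt:
                        cycle.append(cur)
                        cur = parent[cur]
                    return cycle
        return None

    def reconcile(nodes, edges, weak, exact, read_from):
        """Roll back weak transactions (and their exact dependents) until the IESG is acyclic."""
        rolled_back = set()
        while True:
            cycle = find_cycle(nodes, edges)
            if cycle is None:
                break
            wt = next(t for t in cycle if t in weak)            # every cycle contains a weak txn
            victims = {wt} | {t for t in exact if wt in read_from.get(t, set())}
            rolled_back |= victims
            nodes = [n for n in nodes if n not in victims]
            edges = {(a, b) for (a, b) in edges if a not in victims and b not in victims}
        return rolled_back

    def propagate_final_values(items, core_value, quasi_values, final_write_on_core):
        """Second half of Table 4: per item, pick the value to keep and copy it everywhere."""
        return {x: core_value[x] if final_write_on_core[x] else next(iter(quasi_values[x]))
                for x in items}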
In the proposed hybrid schema, weak and strict transactions coexist. Weak transactions let users
process local data thus avoiding the overhead of long network accesses. Strict transactions need
access to the network to guarantee consistency of their updates. Weak reads provide users with
the choice of reading an approximately accurate value of a datum in particular in cases of total
or partial disconnections. This value is appropriate for a variety of applications that do not
require exact values. Such applications include gathering information for statistical purposes or
making high-level decisions and reasoning in expert systems that can tolerate bounded uncertainty
in input data. Weak writes allow users to update local data without confirming these updates
immediately. Update validation is delayed till clusters are connected. Delayed updates can be
performed during periods of low network activity to reduce demand on the peaks. Furthermore,
grouping together weak updates and transmitting them as a block rather than one at a time
can improve bandwidth usage. For example, a salesperson can locally update many data items,
till these updates are finally confirmed, when the machine is plugged back to the network at
the end of the day. However, since weak writes may not be finally accepted, they must be used
only when compensating transactions are available, or when the likelihood of conflicts is very
low. For example, users can employ weak transactions to update mostly private data and strict
transactions to update frequently used, heavily shared data.
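As a purely illustrative example (not taken from the paper), a client-side policy along these lines might decide between weak and strict transactions as follows; the threshold and the inputs are assumptions of the sketch.

    def choose_transaction_type(disconnected, heavily_shared, has_compensation,
                                bandwidth_bps, min_strict_bandwidth_bps=9600):
        """Pick 'weak' or 'strict' for an update, based on connectivity and data characteristics."""
        if disconnected:
            return "weak"       # disconnected operation is supported only through weak transactions
        if heavily_shared or not has_compensation:
            return "strict"     # conflicts are likely, or a rejected weak write could not be compensated
        if bandwidth_bps < min_strict_bandwidth_bps:
            return "weak"       # weak connectivity: defer validation to reconciliation
        return "strict"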
The cluster configuration is dynamic. Clusters of data may be explicitly created or merged
upon a forthcoming disconnection or connection of the associated mobile client. To accommodate
migrating locality, a mobile host may move to a different cluster upon entering a new support
environment. Besides defining clusters based on the physical location of data, other definitions are
also possible. Clusters may be defined based on the semantics of data or applications. Information
about access patterns, for instance in the form of a user's profile that includes data describing
the user's typical behavior, may be utilized in determining clusters. Some examples follow.
Example 1: Cooperative Environment. Consider the case of users working on a common
project using mobile hosts. Groups are formed that consist of users who work on similar topics of
the project. Clusters correspond to data used by people in the same group who need to maintain
consistency among their interactions. We consider data that are most frequently accessed by a
group as data belonging to this group. At each group, the copies of data items belonging to the
group are core copies, while the copies of data items belonging to other groups are quasi. A data
item may belong to more than one group if more than one group frequently accesses it. In this
case, core copies of that data item exist in all such clusters. In each cluster, operations on items
that do not belong to the group are weak, while operations on data that belong to the group are
strict. Weak updates on a data item are accepted only when they do not conflict with updates
by the owners of that data item.
Example 2: Caching. Clustering can be used to model caching in a client/server architecture.
In such a setting, a mobile host acts as a client interacting with a server at a fixed host. Data
are cached at the client for performance and availability. The cached data are considered quasi
copies. The data at the fixed host are core copies. Transactions initiated by the server are always
strict. Transactions initiated by the client that invoke updates are always weak while read-only
client transactions can be strict if strict consistency is required. At reconciliation, weak writes
are accepted only if they do not conflict with strict transactions at the server. The frequency of
reconciliation depends on the user consistency requirements and on networking conditions.
Example 3: Location Data. In mobile computing, data representing the location of a mobile
user are fast-changing. Such data are frequently accessed to locate a host. Thus, location data
must be replicated at many sites to reduce the overhead of searching. Most of the location copies
should be considered quasi. Only a few core copies are always updated to reflect changes in
location.
6 Quantitative Evaluation of Weak Consistency
To quantify the improvement in performance attained by sacrificing strict consistency in weakly
connected environments and to understand the interplay among the various parameters, we have
developed an analytical model. The analysis follows an iteration-based methodology for coupling
standard hardware resource and data contention as in [35]. Data contention is the result of
concurrency and coherency control. Resources include the network and the processing units. We
generalize previous results to take into account (a) nonuniform access of data, that takes into
consideration hotspots and the changing locality, (b) weak and strict transaction types, and (c)
various forms of data access, as indicated by the compatibility matrix of Table 1. An innovative
feature of the analysis is the employment of a vacation system to model disconnections of the
wireless medium. The performance parameters under consideration are the system throughput,
the number of messages sent, and the response time of weak and strict transactions. The study
is performed for a range of networking conditions, that is for different values of bandwidth and
for varying disconnection intervals.
6.1 Performance Model
We assume a cluster configuration with n clusters and a Poisson arrival rate for both queries and
updates. Let λ_q and λ_u respectively be the average arrival rate of queries and updates on data
items initiated at each cluster. We assume fixed-length transactions with N operations on data
items, N_q of which are queries and N_u of which are updates. Thus
the transaction rate, i.e., the rate of transactions initiated at each cluster, is (λ_q + λ_u)/N.
Let c be the consistency factor of the application under consideration, that is c is the fraction
of the arrived operations that are strict. To model hotspots, we divide data at each cluster into hot
and cold data sets. Let D be the number of data items per cluster, D_c of which are cold and D_h
hot. To capture locality, we assume that a fraction o of the transactions exhibit locality, that is, they
access data from the hot set with probability h and data from the cold set with probability 1 − h.
The remaining transactions access hot and cold data uniformly. Due to mobility, a transaction
may move to a different cluster, and the data it accesses may no longer belong to the hot set of
the new cluster. This can be modeled by letting o diminish. Locality is taken advantage of by the
replication schema: we assume that the probability that a hot data item has a core copy at a cluster
is l, and that a cold data item has a core copy is l′, where normally l ≥ l′. Let P_l be the probability
that an operation at a cluster accesses a data item for which there is a core copy at that cluster.
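The closed-form expression for P_l is not reproduced here; under the stated assumptions, one plausible form, given only as an illustration (the combination rule is an assumption of the sketch, not necessarily the paper's formula), is the following.

    def core_copy_probability(o, h, l, l_prime, D_h, D_c):
        """Illustrative form of P_l: probability that an operation finds a core copy locally.
        A fraction o of transactions access hot items with probability h (core copy with
        probability l) and cold items otherwise (core copy with probability l'); the rest
        access the D_h + D_c items uniformly."""
        with_locality = h * l + (1.0 - h) * l_prime
        uniform = (D_h * l + D_c * l_prime) / float(D_h + D_c)
        return o * with_locality + (1.0 - o) * uniform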
For simplicity, we assume that there is one quasi copy of each data item at each cluster. Let
q_r be the read and q_w the write quorum, and N_S the mean number of operations on data copies
per strict transaction. The transaction model consists of n_L states, where n_L is the random
variable of the number of items accessed by the transaction and N_L its mean. Without loss of
generality, we assume that N_L is equal to the number of operations. The transaction has an initial
setup phase, state 0. Then, it progresses to states 1, 2, ..., n_L in that order. If successful, at the
end of state n_L the transaction enters the commit phase at state n_L+1. The transaction response
time r_trans can be expressed as

r_trans = r_INLP + r_E + Σ_{j=1}^{n_w} r_{w_j} + t_commit,    (A)

where n_w is the number of lock waits during the run of the transaction, r_{w_j} is the waiting time
for the jth lock contention, r_E is the sum of the execution times in states 1 through n_L excluding
lock waiting times, r_INLP is the execution time in state 0, and t_commit is the commit time to
reflect the updates in the database.
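A minimal sketch of formula (A), together with the expected-value form used later in the data-contention analysis (where the lock-wait sum is replaced by expected contention counts times conditional waits), is given below; the inputs are assumed to be estimated separately.

    def response_time_A(r_inlp, r_e, lock_wait_times, t_commit):
        """Formula (A): r_trans = r_INLP + r_E + sum of the r_w_j + t_commit."""
        return r_inlp + r_e + sum(lock_wait_times) + t_commit

    def expected_response_time(r_inlp, r_e, contention_terms, t_commit):
        """contention_terms: iterable of (N_op, P_op, R_op) triples, one per operation type."""
        expected_wait = sum(n * p * r for (n, p, r) in contention_terms)
        return r_inlp + r_e + expected_wait + t_commit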
Resource contention analysis
We model clusters as M/G/1 systems. The average service time for the various types of requests,
all exponentially distributed, can be determined from the following parameters: t_q, the processing
time for a query on a data copy; t_u, the time to install an update on a data copy; and t_b, the
overhead time to propagate an update or query to another cluster. In each M/G/1 server, all
requests are processed with the same priority on a first-come, first-served basis. Clusters are
occasionally disconnected and later reconnected. To capture disconnections, we model each
connection between two clusters as
an M/M/1 system with vacations. A vacation system is a system in which the server becomes
unavailable for occasional intervals of time. If W is the available bandwidth between two clusters
and if we assume exponentially distributed packet lengths for messages with average size m then
the service rate s_r is equal to W/m. Let t_r be the network transmission time.
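The waits w and w_r are computed in the paper's Appendix, which is not reproduced here. As a stand-in, the sketch below uses the standard decomposition result for an M/G/1 queue with multiple server vacations, W = λE[S²]/(2(1 − ρ)) + E[V²]/(2E[V]); this is a textbook approximation and not necessarily the paper's exact expression.

    def mg1_vacation_wait(lam, es, es2, ev, ev2):
        """Mean wait in an M/G/1 queue with multiple vacations (decomposition result)."""
        rho = lam * es
        assert rho < 1.0, "the queue must be stable"
        return lam * es2 / (2.0 * (1.0 - rho)) + ev2 / (2.0 * ev)

    def link_wait(lam_r, bandwidth_bps, msg_bits, mean_disconnect):
        """Wireless link with exponential packet lengths (mean m bits), bandwidth W,
        and exponentially distributed disconnection (vacation) intervals."""
        es = msg_bits / float(bandwidth_bps)     # mean service time 1/s_r = m/W
        es2 = 2.0 * es * es                      # second moment of an exponential service time
        ev2 = 2.0 * mean_disconnect ** 2         # second moment of an exponential vacation
        return mg1_vacation_wait(lam_r, es, es2, mean_disconnect, ev2)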
Number of Messages. Let M be the total number of messages transmitted per second among
clusters; it is the sum of two terms, the first corresponding to query traffic and the second to
update traffic.
Execution Time. For simplicity, we ignore the communication overhead inside a cluster, assuming
either that each cluster consists of a single node or that the communication among the nodes inside
a cluster is relatively fast. Without taking into account data contention, the average response
time for a weak read on a data item is R_r^w = t_q + w and for a weak update it is R_u^w = t_u + w,
where w is the average wait time at each cluster. Let b_r be 0 if q_r = 1 and 1 otherwise. The
average response times for a strict read, R_r^s, and for a strict write, R_w^s, on a data item
additionally account for the propagation overhead t_b, the network transmission time t_r, and
the wait w at each remote cluster that participates in the corresponding quorum. The computation
of w is given in the Appendix.
Average Transmission Time. The average transmission time t_r equals the service time plus the
wait w_r at each network link, t_r = m/W + w_r. The arrival rate λ_r at each link is Poisson
with mean M/(n(n − 1)). The computation of w_r is given in the Appendix.
Throughput. The transaction throughput, i.e., the maximum transaction input rate, is bounded by:
(a) the processing time at each cluster (since we must have λ ≤ 1/E[x], where λ is the arrival rate
of all requests at each cluster and E[x] is the mean service time), (b) the available bandwidth
(since λ_r ≤ 1/t_r), and (c) the disconnection intervals (since λ_r ≤ 1/E[v], where E[v] is the
mean duration of a disconnection).
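The three bounds can be combined into a back-of-the-envelope estimate of the maximum sustainable transaction input rate, as in the sketch below; the factors that map the transaction rate to the per-cluster request rate λ and the per-link message rate λ_r are left abstract and are assumptions of the sketch.

    def max_input_rate(mean_service_time, t_r, mean_disconnect,
                       requests_per_txn_per_cluster, msgs_per_txn_per_link):
        """Maximum transaction input rate allowed by processing, bandwidth, and disconnections."""
        processing_bound    = 1.0 / (mean_service_time * requests_per_txn_per_cluster)  # lambda <= 1/E[x]
        bandwidth_bound     = 1.0 / (t_r * msgs_per_txn_per_link)                       # lambda_r <= 1/t_r
        disconnection_bound = 1.0 / (mean_disconnect * msgs_per_txn_per_link)           # lambda_r <= 1/E[v]
        return min(processing_bound, bandwidth_bound, disconnection_bound)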
Data contention analysis
We assume an eventual and best-effort coherency schema. In the following, op stands for one of WR, WW, SR,
SW. Using formula (A), the response time for strict and weak transactions is obtained by adding
the expected lock-wait overhead to the execution and commit times; for a strict transaction this
overhead is N_q P_q R_SR + N_u P_u R_SW, where P_op is the probability that a transaction
contends for an op operation on a data copy, and R_op is the average time spent waiting to get an
op lock given that lock contention occurs. P_q and P_u are, respectively, the probability that at
least one operation on a data copy per strict read or strict write conflicts. An outline of the
estimation of P_op and R_op is given in the Appendix. For a detailed description of the model see
[21].
6.2 Performance Evaluation
The following performance results show how the percentage of weak and strict transactions can be
effectively tuned based on the prevailing networking conditions such as the available bandwidth
and the duration of disconnections to attain the desired throughput and latency. Table 5 depicts
some realistic values for the input parameters. The bandwidth depends on the type of technology
used, for infrared a typical value is 1 Mbps, for packet radio 2 Mbps, and for cellular phone 9-14
Kbps [7].
Parameter   Description                                                         Value
n           number of clusters                                                  5
λ_q         query arrival rate                                                  12 queries/sec
λ_u         update arrival rate                                                 3 updates/sec
c           consistency factor                                                  ranges from 0 to 1
q_r         read quorum                                                         ranges from 1 to n
q_w         write quorum                                                        ranges from 1 to n
o           fraction of local transactions accessing hot data                   ranges from 0 to 1
h           probability that a local transaction accesses hot data              ranges from 0 to 1
l           probability a hot data item has a core copy at a given cluster      ranges from 0 to 1
l′          probability a cold data item has a core copy at a given cluster     ranges from 0 to 1
t_u         processing time for an update                                       0.02 sec
t_q         processing time for a query                                         0.005 sec
t_b         propagation overhead                                                0.00007 sec
E[v]        vacation (disconnection) interval                                   ranges
W           available bandwidth                                                 ranges
m           average size of a message                                           512 bits
D_c         number of cold data items per cluster                               800
D_h         number of hot data items per cluster                                200
N           average number of operations per transaction                        10
Table 5: Input parameters.
Figure 2: Maximum allowable input rate of updates for various values of the consistency factor.
(left) Limits imposed by the processing rate at each cluster (λ ≤ 1/E[x]). (right) Limits imposed
by bandwidth restrictions (λ_r ≤ 1/t_r).
Figure 3: Maximum allowable input rate for updates for various values of the consistency factor.
Limits imposed by disconnections and their duration (λ_r ≤ 1/E[v]).
System throughput. Figures 2(left), 2(right), and 3 show how the maximum transaction input,
or system throughput, is bounded by the processing time, the available bandwidth, and the
disconnection intervals respectively. We assume that queries are four times more common than
updates (λ_q = 4λ_u). As shown in Figure 2(left), the allowable input rate when all transactions are
weak (c = 0) is almost double the rate when all transactions are strict (c = 1). This is the result
of the increase in the workload with c caused by the fact that strict operations on data items may
be translated into more than one operation on data copies. The percentage of weak transactions
can be effectively tuned to attain the desired throughput based on the networking conditions such
as the duration of disconnections and the available bandwidth. As indicated in Figure 2(right), to
get, for instance, the same throughput with 200 bps as with 1000 bps, one must lower the
consistency factor below 0.1. The duration of disconnections may vary from seconds when they
are caused by hand offs ([17]) to minutes for instance when they are voluntary. Figure 3 depicts
the effect of the duration of a disconnection on the system throughput for both short durations
(Figure 3(left)) and longer ones (Figure 3(right)). For long disconnections (Figure 3(right)), only
a very small percentage of strict transactions can be initiated at disconnected sites. To keep the
throughput comparable to that for shorter disconnections (Figure 3(left)), the consistency factor
must drop by around three orders of magnitude.
Communication cost. We estimate the communication cost by the number of messages sent.
The number of messages depends on the following parameters of the replication schema: (1)
the consistency factor c, (2) the data distribution l for hot and l 0 for cold data, (3) the locality
factor o, and (4) the quorums, q_r and q_w, of the coherency schema. We assume a ROWA
(read-one/write-all) schema, i.e., q_r = 1 and q_w = n, when not otherwise stated. As shown in
Figure 4(left), the number of messages
increases linearly with the consistency factor. As expected the number of messages decreases with
the percentage of transactions that access hot data, since then local copies are more frequently
available. To balance the increase in the communication cost caused by diminishing locality there
may be a need to appropriately decrease the consistency factor (Figure 4(middle)).

Figure 4: Number of messages. (left) For various values of c. (middle) With locality. (right) For
different replication of hot core copies.
Figure 5: Number of messages. (left) For different replication of cold core copies. (right) For
different values of the read quorum, for equal query and update rates, four times more queries,
and four times more updates.

The number of
messages decreases when the replication factor of hot core copies increases (Figure 4(right)). The
decrease is more evident since most operations are queries and the coherency schema is ROWA,
thus for most operations no messages are sent. The decrease is more rapid when transactions
exhibit locality, that is when they access hot data more frequently. On the contrary, the number
of messages increases with the replication factor of cold core copies because of additional writes
caused by coherency control (Figure 5(left)). Finally, the relationship between the read quorum
and the number of messages depends on the relative number of queries and updates (Figure
5(right)).
Transaction response time. The response time for weak and strict transactions for various
values of c is depicted in Figure 6. The larger values of response times are for 200bps bandwidth,
while faster response times are the result of higher network availability set at 2Mbps. The values
for the other input parameters are as indicated in Table 5. The additional parameters are set as
follows: (1) the locality parameters are 0:9, (2) the data replication parameters
are l the disconnection parameters are and the vacation intervals are
exponentially distributed with sec, to model disconnection intervals that correspond
to short involuntary disconnections such as those caused by hand offs [17], (4) the coherency
Figure 6: Comparison of the response times for weak and strict transactions for various values of
the consistency factor.
Figure 7: (left) Response time distribution for strict transactions. (right) Response time distribution
for weak transactions.
The latency of strict transactions is about 50 times greater than that of weak transactions.
However, there is a trade-off involved in using weak transactions, since their updates may be
aborted later. The time to propagate updates during reconciliation is not counted. As c increases,
the response time of both weak and strict transactions increases, since
more conflicts occur. The increase is more dramatic for smaller values of bandwidth. Figure 7(left)
and (right) show the response time distribution for strict and weak transactions respectively for
2Mbps bandwidth. For strict transactions, the most important overhead is network transmission.
All times increase as c increases. For weak transactions, the increase in the response time is the
result of longer waits for acquiring locks, since weak transactions that want to read up-to-date
data conflict with strict transactions that write them.
7 Reconciliation Cost
We provide an estimation of the cost of restoring consistency in terms of the number of weak
transactions that need to be rolled back. We focus on conflicts among strict and weak transactions
for which we have outlined a reconciliation protocol and do not consider conflicts among weak
transactions at different clusters. A similar analysis is applicable to this case also.
A weak transaction is rolled back if its writes conflict with a read of a strict transaction
that follows it in the IASG. Let P_1 be the probability that a weak transaction WT writes a
data item read by a strict transaction ST, and P_2 be the probability that ST follows WT in the
serialization graph. Then the product P = P_1 P_2 is the probability that a weak transaction is
rolled back. Assume that reconciliation occurs after N_r transactions, m of which are strict
and m′ of which are weak. For simplicity we assume a uniform access distribution. Although it
is reasonable to assume that granule access requests from different transactions are independent,
independence cannot hold within a transaction if a transaction's granule accesses are distinct.
However, if the probability of accessing any particular granule is small, e.g., when the number
of granules is large and the access distribution is uniform, this approximation should be very
accurate. Under these assumptions, P_1 is determined by the probability that one of the items
written by WT is among the items read by ST.
Let p_KL be the probability that in the IASG there is an edge from a given transaction of
type K to a given transaction of type L, and let p′_KL be the probability that in the IASG
with m strict and m′ weak transactions there is an edge from a given transaction of type K to
any transaction of type L. The formulas for p_KL and p′_KL are given in the Appendix.
Let p(i, m, m′) be the probability that there is an acyclic path of length i, i.e., a path with
distinct nodes, from a given weak transaction to a given strict transaction in an IASG with m
strict and m′ weak transactions; P_2 can then be expressed in terms of the p(i, m, m′). The values
of p(i, m, m′) can be computed from a recursive relation with three terms, where the first term is
the probability of a path whose first edge is between weak transactions, the second that of a path
whose first edge is between a weak and a strict transaction and that includes at least one more
weak transaction, and the last that of a path whose first edge is between a weak and a strict
transaction and that does not include any other weak transactions. Thus, the actual number of
weak transactions that need to be undone or compensated because their writes cannot become
permanent is N_a = P m′.
Figure 8: Probability of abort for 3000 data items. (left) For different values of the consistency
factor. (right) For varying numbers of transactions between reconciliation events.
Figure 9: Probability of abort for varying database sizes. (left) For different values of the
consistency factor. (right) For different numbers of data items.
We also need to roll back all exact weak transactions that read a value written by an aborted
transaction. Let e be the fraction of weak transactions that are exact; the total number of
rolled-back transactions, N_roll, then also includes these exact readers.
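The recursive computation of p(i, m, m′) is given in the paper's Appendix. As an alternative illustration, P_2 can also be estimated by simulation; the sketch below assumes a random serialization graph with independent, type-dependent edge probabilities p_KL, which is an assumption of the sketch and not the paper's model. The rollback estimate then follows as P = P_1 P_2, with roughly P m′ weak transactions to undo or compensate, plus the exact readers.

    import random

    def estimate_p2(m, m_prime, p, trials=20000, seed=0):
        """Estimate P_2: the probability that a given strict transaction follows a given weak
        transaction, in a random graph with m strict and m_prime weak transactions where
        p[(K, L)] is the probability of an edge from a type-K to a type-L transaction."""
        rng = random.Random(seed)
        wt, st = ('W', 0), ('S', 0)
        hits = 0
        for _ in range(trials):
            nodes = [('W', i) for i in range(m_prime)] + [('S', i) for i in range(m)]
            adj = {u: [v for v in nodes if v != u and rng.random() < p[(u[0], v[0])]]
                   for u in nodes}
            seen, stack = {wt}, [wt]               # depth-first search from the weak transaction
            while stack and st not in seen:
                for v in adj[stack.pop()]:
                    if v not in seen:
                        seen.add(v)
                        stack.append(v)
            hits += st in seen
        return hits / float(trials)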
Figure 8(left) depicts the probability that a weak transaction cannot be accepted because of a
conflict with a strict transaction, for reconciliation events occurring after varying numbers
of transactions and for different values of the consistency factor. Figure 9(left) shows the same
probability for varying database sizes. More accurate estimations can be achieved for specific
applications for which the access patterns of the transactions are known. These results can be
used to determine an appropriate reconciliation point, to balance the frequency of reconciliations
and the number of weak transactions that may be aborted. For instance, for a given c = 0.5, to
keep the probability below a threshold of say 0.00003, reconciliation events must take place as
often as every 85 transactions (Figure 8(right)).
8 Related Work
One-copy serializability [3] hides from the user the fact that there can be multiple copies of a
data item and ensures strict consistency. Whereas one-copy serializability may be an acceptable
criterion for strict transactions, it is too restrictive for applications that tolerate bounded inconsistency
and causes unbearable overheads in cases of weak connectivity. The weak transaction
model described in this paper was first introduced in [24] while preliminary performance results
were presented in [22].
Network Partitioning. The partitioning of a database into clusters resembles the network partition
problem [5], where site or link failures fragment a network of database sites into isolated
subnetworks called partitions. Clustering is conceptually different than partitioning in that it
is electively done to increase performance. Whereas all partitions are isolated, clusters may be
weakly connected. Clients may operate as physically disconnected even while remaining physically
connected. Strategies for network partition face similar competing goals of availability and
correctness. These strategies range from optimistic, where any transaction is allowed to be executed
in any partition, to pessimistic, where transactions in a partition are restricted by making
worst-case assumptions about what transactions at other partitions are doing. Our model offers a
hybrid approach. Strict transactions may be performed only if one-copy serializability is ensured
(in a pessimistic manner). Weak transactions may be performed locally (in an optimistic manner).
To merge updates performed by weak transactions we adopt a purely syntactic approach.
Read-only Transactions. Read-only transactions do not modify the database state, thus their
execution cannot lead to inconsistent database states. In our framework read-only transactions
with weaker consistency requirements are considered a special case of weak transactions.
In [8] two requirements for read-only transactions were introduced: consistency and currency
requirements. Consistency requirements specify the degree of consistency needed by a read-only
transaction. In this framework, a read-only transaction may have: (a) no consistency require-
ments; (b) weak consistency requirements if it requires a consistent view (that is, if all consistency
constraints that can be fully evaluated with the data read by the transaction must be true); or (c)
strong consistency requirements if the schedule of all update transactions together with all other
strong consistency queries must be consistent. While in our model strict read-only transactions
always have strong consistency requirements, weak read-only transactions can be tailored to have
any of the above degrees based on the criterion used for IAS correctness. Weak read-only transactions
may have no consistency requirement if they are ignored from the IAS schedule, weak
consistency if they are part of a weakly correct IAS schedule, and strong consistency if they are
part of a strongly correct schedule. The currency requirements specify what update transactions
should be reflected by the data read. In terms of currency requirements, strict read-only transactions
read the most-up-to-date data item available (i.e. committed). Weak read-only transactions
may read older versions of data, depending on the definition of the d-degree.
Epsilon-serializability (ESR) [25] allows temporary and bounded inconsistencies in copies to
be seen by queries during the period among the asynchronous updates of the various copies of a
data item. Read-only transactions in this framework are similar to weak read-only transactions
with no consistency requirements. ESR bounds inconsistency directly by bounding the number
of updates. In [34] a generalization of ESR was proposed for high-level type specific operations
on abstract data types. In contrast, our approach deals with low-level read and write operations.
In an N-ignorant system, a transaction need not see the results of at most N prior transactions
that it would have seen if the execution had been serial [13]. Strict transactions are 0-ignorant
and weak transactions are 0-ignorant of other weak transactions at the same cluster. Weak
transactions are ignorant of strict and weak transactions at other clusters. The techniques of
supporting N-ignorance can be incorporated into the proposed model to define d as the ignorance
factor N of weak transactions.
Mobile Database Systems. The effect of mobility on replication schemas is discussed in [2].
The need for the management of cached copies to be tuned according to the available bandwidth
and the currency requirements of the applications is stressed. In this respect, d-degree consistency
and weak transactions realize both of the above requirements. The restrictive nature of one-copy
serializability for mobile applications is also pointed out in [14] and a more relaxed criterion is
proposed. This criterion although sufficient for aggregate data is not appropriate for general
applications and distinguishable data. Furthermore, the criterion does not support any form of
adaptability to the current network conditions.
The Bayou system [6, 31] is a platform of replicated highly available, variable-consistency,
mobile databases on which to build collaborative applications. A read-any/write-any weakly-consistent
replication schema is employed. Each Bayou database has one distinguished server,
the primary, which is responsible for committing writes. The other secondary servers tentatively
accept writes and propagate them towards the primary. Each server maintains two views of
the database: a copy that only reflects committed data and another full copy that also reflects
tentative writes currently known to the server. Applications may choose between committed
and tentative data. Tentative data are similar to our quasi data, and committed data similar
to core data. Correctness is defined in terms of session, rather than on serializability as in the
proposed model. A session is an abstraction for the sequence of read and writes of an application.
Four types of guarantees can be requested per session: (a) read your writes, (b) monotonic
reads (successive reads reflect a non-decreasing set of writes), (c) writes follow reads (writes are
propagated after reads on which they depend), and (d) monotonic writes (writes are propagated
after writes that logically precede them). To reconcile copies, Bayou adopts an application based
approach as opposed to the syntactic based procedure used here. The detection mechanism is
based on dependency checks, and the per-write conflict resolution method is based on client-
provided merge procedures [32].
Mobile File Systems. Coda [12] treats disconnections as network partitions and follows an
optimistic strategy. An elaborate reconciliation algorithm is used for merging file updates after the
sites are connected to the fixed network. No degrees of consistency are defined and no transaction
support is provided. [15, 16] extend Coda with a new transaction service called isolation-only
transactions (IOT). IOTs are sequences of file accesses that unlike traditional transactions have
only the isolation property. IOTs do not guarantee failure atomicity and only conditionally
guarantee permanence. IOTs are similar to weak transactions.
Methods for refining consistency semantics of cached files to allow a mobile client to select
a mode appropriate for the current networking conditions are discussed in [9]. The proposed
techniques are delayed writes, optimistic replication, and failing instead of fetching data in cases
of cache misses.
The idea of using different kinds of operations to access data is also adopted in [28, 29],
where a weak read operation was added to a file service interface. The semantics of operations
are different in that no weak write is provided and since there is no transaction support, the
correctness criterion is not based on one-copy serializability.
9 Summary
To overcome bandwidth, cost, and latency barriers, clients of mobile information systems switch
between connected and disconnected modes of operation. In this paper, we propose a replication
schema appropriate for such operation. Data located at strongly connected sites are grouped in
clusters. Bounded inconsistency is defined by requiring mutual consistency among copies located
at the same cluster and controlled deviation among copies at different clusters. The database
interface is extended with weak operations. Weak operations query local, potentially inconsistent
copies and perform tentative updates. The usual operations, called strict in this framework in
contradistinction to weak, are also supported. Strict operations access consistent data and perform
permanent updates. Disconnected operation is supported by using only weak operations. To
accommodate weak connectivity, a mobile client selects an appropriate combination of weak and
strict transactions based on the consistency requirements of its applications and on the prevailing
networking conditions. Adjusting the degree of divergence provides an additional support for
adaptability.
The idea of providing weak operations can be applied to other types of constraints besides
replication. Such constraints can be vertical and horizontal partitions or arithmetic constraints
[27]. Another way of defining the semantics of weak operations is by exploiting the semantics of
data. In [33], data are fragmented and later merged based on their object semantics.
References
Data Caching Issues in an Information Retrieval System.
Replicated Data Management in Mobile Environments: Anything New Under the Sun?
Concurrency Control and Recovery in Database Systems.
Data Networks.
Consistency in Partitioned Networks.
The Bayou Architecture: Support for Data Sharing Among Mobile Users.
The Challenges of Mobile Computing.
Communication and Consistency in Mobile File Systems.
Data Replication for Mobile Computers.
Mobile Computing: Challenges in Data Management.
Disconnected Operation in the Coda File System.
Bounded Ignorance: A Technique for Increasing Concurrency in a Replicated System.
Protocols for Maintaining Inventory Databases and User Profiles in Mobile Sales Applications.
Improving Data Consistency in Mobile Computing Using Isolation-Only Transactions
Cellular Essentials for Wireless Data Transmission.
Exploiting Weak Connectivity for Mobile File Access.
A Programming Interface for Application-Aware Adaptation in Mobile Computing
The Theory of Database Concurrency Control.
Transaction Management in Mobile Heterogeneous Environments.
A Replication Schema to Support Weak Connectivity in Mobile Information Systems.
Building Information Systems for Mobile Environments.
Maintaining Consistency of Data in Mobile Distributed Environments.
Replica Control in Distributed Systems: An Asynchronous Approach.
Experience with Disconnected Operation in a Mobile Computing Environment.
Management of Interdependent Data: Specifying Dependency and Consistency Requirements.
Service Interface and Replica Management Algorithm for Mobile File System Clients.
An Efficient Variable-Consistency Replicated File Service
Queueing Analysis
Session Guarantees for Weakly Consistent Replicated Data.
Managing Update Conflicts in Bayou
Supporting Semantics-Based Transaction Processing in Mobile Database Applications
Tolerating Bounded Inconsistency for Increasing Concurrency in Database Systems.
On the Analytical Modeling of Database Concurrency Control.